Rust Axum Production Coding (E01 - Rust Web App Production Coding)

Captions
in this episode we are going to do some production coding. We are going to start with what we did in the Rust Axum full course and scale it up to a production web application code base. We will have nine chapters, from tracing, config, and the model layer with PostgreSQL, and then we will cover password encryption and secure web token, and end with a JSON-based RPC. You can find the GitHub repo and more information at awesomeapp.dev, rust-web-app. All chapters have their corresponding git commits, and we will also have key visualizations of our code design and progress. This is the beginning of the rust-web-app production blueprint episode series; it is going to be dense, but pretty cool. And by the way, big thanks to CrabNebula for the sponsorship, and also, I'm trying to go full time on creating Rust educational materials for production coding, so any help is a big help. Okay, with that, let's get started.

Okay, so our first chapter is going to be the baseline. What I did is I took the Rust Axum full course that we have, and I trimmed it down and reorganized it a little bit to make it scalable from the get-go, so it's really bare-bones. What we have on this first commit, which is going to be the 01-baseline, is this source code, which is going to map to this architecture. The first one that we have from the top is the web layer, that is, all the files that we have more or less from the Rust Axum course. I moved the routes_static inside the web module, and then in the web module I have its own error here, which is the only error that knows about the Axum IntoResponse and so on. One thing which is important when you structure your code is that you don't want to have one big error for everything; you usually want the main modules to have their own errors, and then you can bubble them up as it makes sense. Then we have our ctx module, and here we have our own error as well; it's going to be relatively simple for this example, but in future episodes it is going to grow
quite a bit. The ctx is the information that you get from the request, or from anything that identifies a user and a request context, regardless of the web framework you are using, so that is a nice decoupling. Then we have our model layer, and right now it's very basic. I reorganized it such that we're going to split the ModelManager, which is going to be for the state, like the database pool, from the model controllers that we're going to implement, which are going to have the CRUDs and other methods per entity; so that is going to be a relatively scalable model. The model layer will also have a store; right now it's going to be very lightweight, but eventually you can have store libraries or utilities there. Our model layer will use sqlx, and even sqlb, a little lightweight SQL builder based on sqlx, to talk to the database. Then I moved the log logic into its own module; right now there is no error there, but eventually that might grow into more code and have its own error. Right now it's relatively simple, so we keep it like that. Then we have our main error and our main.rs. If we go to our main.rs, I did some cleanup: we have our ModelManager now, so we initialize the ModelManager, which will be our app state in a way, that will hold the database pool connection, and then we will give the ModelManager to our model controllers (we're going to see that later). Then we are going to have our routes_rpc, and that will be for later; I removed all of the APIs because we are going to use a JSON-RPC scheme here. Then we have our routes_all that defines our middlewares and our routers, and in fact, if we take it out, we can visualize the whole request flow. In Axum you have two kinds of constructs: the middlewares and the handlers. The static here is what we have as a fallback service with ServeDir, so that is a handler that we have there which will take care of the static file serving, and we are using the tower-http ServeDir for that. Then,
from the Rust Axum full course, we get our api/login (we'll do logoff later), and that is what we have over there. Right now it is a fake login: we don't have password encryption, we don't have anything, it's just a hard-coded, very simple one, and that is what we are going to do in this episode, with full password encryption and secure web token. Then what we have is the api/rpc, which was the API REST that we had before, and that is what I commented out here, so this code is not active; there is no implementation, and that is what we're going to do in chapter 9. Now, one thing that we carry over as well is our response map, which is over there, and that is very important, because it does the remapping of the error messages to make sure that we send only the minimum information to the client, as well as our request line log, which is critical when you do a production application: you want one log line per request that is very rich in context. We also have our middleware, the ctx_resolve, which basically resolves the Ctx for a given request; right now, again, it's a kind of fake resolve, but it's a placeholder, and it also takes the ModelManager as a state. It's important to note that the ctx_resolve will get executed for both paths, for the api/login and logoff as well as the api/rpc. The ctx_resolve is infallible, meaning that it will not fail the request: it will capture the error and put it in the request flow, but it's not going to fail. Then we have another middleware, the ctx_require, which we add to the routes_rpc like so, and it refuses a request if there is an error. So the ctx_resolve gets executed either way, and when the request goes through the paths login and logoff, if it fails, it doesn't really matter; but if it goes to the path api/rpc, then the ctx_require will fail the request if the resolve didn't succeed. Now, in our case it's not going to be fully needed, because our handlers on the api/rpc are going to require the Ctx, so they will fail no matter
what; but it's always a good idea to have the ctx_require in front of the routers that you want to ensure have to be authenticated. The next thing I've done is I've moved the quick_dev into the examples; it kind of works better on Windows (I think there were some issues doing a cargo test and a cargo run in parallel), and it's a better fit in the examples anyway, because it's not really about unit testing, it's just a little helper to develop fast. We're going to write quite a bit of unit tests in this episode, so this way everything is very intent-driven and has the right place from the get-go. One thing I've created as well is a web-folder, which will hold the index.html; that is just a hello world, and we will put this path in the config such that it can be used in a real deployment. Now, if we look inside our quick_dev, we're going to see that we have our httpc-test, and we are creating our client on our localhost, doing a GET request on index.html, and then our fake login here with the username/password, which is demo1/welcome. In the readme I put the instructions for the two different terminals. We take Terminal 1, which will do a cargo watch on the source folder and execute run; this way we have our server running, and every time we change the server code it will rebuild and restart. Then, to test the client side, we have the other terminal, which is going to do a watch on the examples folder and run the example quick_dev. If we do that, we can see that we have our GET index.html, and if we scroll down we have our response body, and then we have our POST on the api/login that sets the cookie (again, this is fake, but in this episode that will become the real thing), and then we have a simple response body here, a success: true. Now, if we look on the left side, which is our server, and map it to our request flow, we can see that we have our ctx_resolve, which is this one over there, we
have the api/login handler, which is this handler, and then we can see that we have our middleware response map, which allows us to have our request log line. So this baseline is already pretty well set up to scale, and that is going to be our baseline. In the following chapters, what we're going to do is the full login and logoff with password encryption, the full ctx_resolve with token validation, the RPC layer with our ctx_require, and then we are going to augment our response map for our request logging. To do that, we are going to build the scalable model layer, with ModelManager and model controllers and their unit tests, that will talk to a Postgres database. So it's going to be a pretty big session, but a pretty cool one.

So in the second chapter, we are going to add tracing, just the basics. For that we're going to go to our main, and first, in our Cargo.toml, we're going to add the tracing library; make sure to add the env-filter feature, because we're going to use it to configure our filter. Now, at the top of your main, you are just going to add the tracing-subscriber setup like that. In my case I put without_time, because otherwise, when I do the video, the tracing lines are too long, but that is probably something that you don't want in your code, because having the time is useful in normal debugging. Then I have a personal preference where I don't need the target, but again, this is just a personal preference; you might display the target. The important thing here is to configure it with the EnvFilter from the environment variable, so you need to make sure to import the EnvFilter from the tracing-subscriber library. So the goal now is to set the environment variable, and there's a nice little trick in cargo where you can define your .cargo/config.toml, which is a config for all of the cargo commands, and within this file you can have a section for your environment variables. So in this case, we are going to define RUST_LOG with our rust_web_app,
and we're going to have debug, so this way you can filter exactly what you want. Note that rust_web_app has an underscore: if we go to the Cargo.toml, we see that our name is rust-web-app with a dash, and the convention is that the dash becomes an underscore for the filter. Another strategy would be to use a .env file and a dotenv library in the application code to set the environment variables, but what I like about this config.toml is that the environment variables are set outside of the application, by the container in a way. In this case the container is cargo, and when we deploy that into a Kubernetes, the container will be Kubernetes, and that will be the ConfigMap and the Secret. So this way, the environment variables are set outside and before the application runs, which is usually what you want in your production application. Now, if we go back to our main.rs and scroll down into our server, we can see that we have our println!, and here is a little best practice I have for my debug prints: I usually use this kind of notation with a dash and two greater-thans (`->>`), and usually I don't commit those; they are temporary prints, and before I commit, I do a search on these characters to make sure that I don't have any println! like that left. In this case I left them in because it was for the example, but now this is actually a good candidate to become an info! tracing, so we're going to import tracing's info!, and we're going to remove those characters. Now we can go and do a global search for the println! with these special characters at the beginning, which will give us all of the println!s that have those characters, and we can replace them with our debug! trace, and that's it. Obviously, we're going to have some compilation errors, but they are trivial to fix, because it's just about importing debug!. Now, in the middleware response map, we have another println! that didn't have these characters, which is a new-line print, and we're going to change it to debug! as well. Now, in
practice this is not really useful, because when you have multiple requests coming at the same time this new line won't have any meaning; but as we are doing our development right now, where our requests will be sequential, this is going to be a nice little trick. In a production app you might want to remove that; it's kind of useless there. In fact, many of the debug!s that you have, you might strategize to remove; some are more for early development than anything else. So now you can do the same fix for all of the files, which is very trivial. Another little trick: you can bring up your readme, and we are going to add another `-w` to watch the .cargo folder as well. Now we can run that in our terminal, and we see our info LISTENING; and then, if you bring up your quick_dev and press save on quick_dev, you have the nice tracing display with our empty line. Okay, so that was our warm-up chapter, in a way.

So for our third chapter, we're going to do our config layer. First, we're going to go to our .cargo/config.toml, and we are going to organize these environment variables the same way that we would organize them in Kubernetes files, for example. We're going to have a section for the ConfigMap, and we are going to call our variable SERVICE_WEB_FOLDER. The convention is to start our environment variables with SERVICE_ and use snake case, uppercase, for the name of the environment variable. Now, the value of our web folder is going to be relative to Cargo.toml; in a deployment scenario you might want to have an absolute path in your Docker image, but that is a little detail. So now we can go back to our main, and if we go to the top, my best practice is to have a module code section, and the way that I organize it is: I have the sub-modules first, then I have the eventual re-exports (that is the best practice that I mentioned before, where I re-export the error of the module), and then I have my imports last; but that, obviously, is a personal preference. So,
what we're going to create is a module, config, and we're just going to create a single-file module. That will be fine in this case, because the config module is small enough that it doesn't have to have its own error, and we can use the crate errors. Now, if we drill into the config file, we'll be using the crate Error and Result, we are going to use the std env, and then we're going to create our Config struct. For now, we're just going to have a section with our WEB_FOLDER; in this case we don't need to prefix it with SERVICE_, because the SERVICE_ is just for the environment variable, and in the config we can remove it. I'm going to allow non_snake_case, because we have screaming snake case, in a way, or uppercase, but that is a personal preference; in my case, I like to see very quickly in my code what comes from the config. Now we're going to implement our Config, and the goal is just to load_from_env, which will return a Result of Config. It's going to be very simple: we're going to have Ok(Config ...), we're going to repeat the section, and we're going to have WEB_FOLDER, and we could do something like env::var("SERVICE_WEB_FOLDER") and then an unwrap. However, you don't really want to use unwrap, and we don't really want to cascade the environment error back to the main error, because it might not have the right contextual information. So what we're going to do is create a function, get_env, which will take a name, which will be a static string, and it will return the crate Result of String. Inside this function, we can have env::var(name), and we're going to map the error into our Error::ConfigMissingEnv, and we give it the name. Now that we have that, we need to create this variant inside our crate error; for that, we go to our error.rs, and we're going to just add this ConfigMissingEnv with a static string, as simple as that. One thing I like about my errors is that I'm trying to put everything as enum
variants early on — I can refactor the names later anyway — and then have simple properties. So now that we have that, we can take our var and do a get_env, and now we don't need the unwrap anymore; we can put the question mark. Now, what we don't want is to reload the environment variables every time someone needs the config. So what we're going to do is implement a function, config(), that will return a &'static Config, such that we load the config only once. The way you put things in static in Rust is: we can have our static INSTANCE, and now we can use a OnceLock, which is new in Rust 1.70, so that is part of the standard library now, and it is the equivalent of the once_cell library. The way it works is that it takes one generic, which is going to be the type it is going to store — so that is going to be our Config — and then you do a OnceLock::new(). At the beginning the OnceLock is going to be empty, and to get the content, you get the instance and you have a function, get_or_init, which takes a closure, and you can create your config there. And in fact, what we're going to do is an exception to our no-unwrap rule: in this particular case, we want to fail very early if we cannot load the config, so we're going to do an unwrap_or_else, and then we are going to panic with a "while loading" message, and we'll give the cause. The reason why you want to do that is that you don't want your application starting without the config being loaded; it's better to fail very early, such that you can fix the issue, rather than having the application running for a while and then you don't understand what's going on. So now, if we look at it: the closure returns a Config; get_or_init will call the closure, and if it already has one, it will return the static reference to it, and otherwise it will store it into the OnceLock and return the static reference of the Config as well. Now, static variables in Rust are not like const:
a const is almost like a find-and-replace kind of thing, while a static is a full variable, which has to have a static lifetime. In fact, in this case we have it inside config(), but statics are always global, so if we, for example, declared it globally, it would be exactly the same; the only reason you put it in a function is to change its visibility. This INSTANCE static variable is accessible only within the config function, even if its memory is created globally; both are the same. I like to have it this way, such that I can just have a name that makes sense within the function, and I don't have to worry about other conflicts in the same module. So now that we have that, we can go back to our main.rs, and one thing that I like to do is to re-export the config like that, so this way I can have use crate::config — as simple as that; again, a personal preference, but that is something I like to do. So we can go to our routes_static, and now we can replace our const WEB_FOLDER by config(); we import it, and then we do .WEB_FOLDER, doing some reformatting, and we don't forget to put the reference, because WEB_FOLDER will be a String, and we cannot move it out, obviously. If we bring up our quick_dev terminal and we do a run or refresh, we can see that we have our response body; and then, again, the cool thing is that, because we have the cargo watch, if we change the environment variable it will restart the server. Okay, so that was our second warm-up, and now we can go to the serious stuff of creating our database.

So now let's go to chapter 4, which is the database and the live-reload, which is critical for fast development. Before we start with the code, let's do a quick overview of unit testing and integration testing. When you have your service code — for example, this is our code — you write your unit tests for most of the layers; you have some sort of unit tests, in a way, within your code. And then you have your integration tests, that use a real database, Redis, Amazon S3, or whatever external services you are
using. So today we're not going to write or talk about integration testing, but we are going to write quite a bit of unit tests, which is very important: even if at the beginning you write only some, that is much better than none. So make sure to always start with some unit tests, and then you can grow them over time; that will usually help you iterate faster and have more robust code. Now, on the unit-test side, we have two strategies. One strategy is: for all of these external services, we write mock services, and in this strategy, everything is going to be contained within the same process. So when you do unit tests on the model, for example, the model will talk to the mock DB, the mock cache, and so on. Now, your application code — your model — has to know that sometimes it has to talk to the real thing and sometimes it has to talk to the mock, so that will cascade back up, more or less; probably not all the way up, but there's going to be quite a bit of branching happening in your application code. The result of that is that, typically, this is quite intrusive in your application code. The second downside of this approach is that you're actually testing a code path that will never ever be used in production — some of these code paths will be, but others won't be — and you will have to trust that your integration tests will take care of those code paths. But the great thing about unit tests, more so the way they are done in Rust, is that they allow you to be very granular in testing private functions, and we don't want to lose these benefits, because that is lost when you go to integration testing; and also, most of the time, when things fail, it's in between your program and the database already. The other approach is to have your unit-test infrastructure use the real Postgres database, the real Redis, and the real S3 service, and you put that in a localhost or a local Kubernetes. For S3 we are pretty lucky, because we have MinIO, which is a very good mock service for S3 — it's much
more than that, you can use it for production as well, but it really follows the S3 APIs. So now, when the unit test tests the model, the model will actually talk to a real Postgres database, with all of the edge cases and all of the details that matter to our application code, and obviously it's the same thing for the other services. So the complexity of the unit tests on the service code decreases; it is much simpler on this side. Now, on the unit-test infrastructure side, the complexity is going to be a little bit higher, because you are going to have to take care of initializing and starting all of these services and making sure your unit tests can rely on them; but for our style of programming, that has a very big net value, so this is the approach we are going with. Okay, enough talking, let's start coding.

So first, what we are going to add to our Cargo.toml is sqlx; make sure to add the features postgres, uuid, and time — this is what we're going to use later. Then we are going to create our first SQL files. The way it's going to work is that we're going to be very iterative, and so, every time we're changing a source file on our server, the database will get recreated from scratch: the old one will be dropped and the new one will be recreated. Eventually you want to have evolutions — some people call them migrations; I prefer to call them evolutions — and we are going to talk about that in future episodes. But right now we are early in the development stage, where you can recreate your whole database from scratch all the time, and that is a good starting point to then, later, have a strategy of database evolution. So, within our sql folder, we are going to create a folder which is going to be dev_initial, and the first file that we're going to create — and we are going to number them like that, with two digits of padding — is going to be our 00-recreate-db.sql. If we get into it, we're going to be merciless on this one: it's
going to be a brute-force DROP DB, for the local database only. Here's a little trick in Postgres to kick out all of the sessions for a specific username or database name; and then we're going to drop the database if it exists, and then we are going to drop the user if it exists, and then we are going to recreate them with hard-coded passwords. Those are just localhost passwords, so they don't need to be encrypted or this kind of thing; the goal here is to be able to recreate the database — re-drop it and recreate it — all the time, such that we can iterate very fast on our data model at the beginning of the development. The important point as well is that we are using app_user on app_db; we are not directly creating on the postgres defaults, and it is very important to do that early on, because otherwise, later, it's kind of a pain to refactor. So now, if we bring back our sql/dev_initial folder, we're going to create a new one, which is going to be our 01-create-schema.sql. This one is the main one, and it will be executed as the app_user. We're going to have our base schema, and right now it's going to be super simple: a user table, with an id using the GENERATED BY DEFAULT AS IDENTITY syntax, which is the new way of doing the bigserial style. And one technique that I like is that, for most of my tables, if not all, I always start the identity at a thousand; that is a little technique I have, such that I can reserve the previous IDs for internal tests, or root users, or system users, or whatever I want — this is mostly a personal preference. The important point on this one is to make sure that it is a primary key. For now we are just going to have the username, and then we are going to create another table, which is going to be our task, with the id — same thing — and it will just have a title for now. In future episodes we are going to make that much richer, but right now we want to show the basics of how to write scalable code, and then we'll add features later. Now that we have that, we bring back our sql/dev_initial
folder, and we are going to do another one, 02: it's going to be our dev-seed.sql, and that is just seeding the database with whatever we want for development. Right now we're going to keep it very, very simple: we're just going to insert a new user for demo1, and we're not even going to create a task — we'll do that in our unit-test infrastructure. So, in short, here we have three files: the first one will be executed by the postgres root user, to make sure that we can drop the database and recreate it, and the two other ones will be executed as the app_user on the app_db. Now, if we go back to main, we're going to create a specific module, and we are going to prefix it with an underscore: that is going to be our _dev_utils, and we are going to create it as a module file, and eventually that will be used only for unit tests. If we get into it: every time we have a mod.rs, the thing I like to do, again, is to have a code region for the modules, where I define my sub-modules, and in this case we are going to define a dev_db, which is going to be part of our _dev_utils and which is going to recreate the database. So we're going to get into it, and again, the goal here is to execute the 00-recreate-db with postgres, and then execute the two others, in order, with the app_user on the app_db. Right now, what we're going to do is going to be very simple: we are going to split on the semicolon and just execute each block. In sqlx I found a way to execute files, but it has to be statically named, and in our case it's not about database migration or whatever; it's just about executing files the way that we would like to execute them, so that is why we're going to have to write it manually — but it's a good reverse-engineering exercise anyway. We're going to use sqlx's Pool and Postgres, and then, one thing that I like to do is create a type alias for Pool<Postgres> — that is just a personal preference here. And we're going to hard-code the root postgres URL, which is postgres/welcome, and
the app postgres URL as well, which is going to be app_user with a dev-only password. Here it is important to hard-code those passwords, in a way, because we never want, by mistake, to have this code run in a stage or production environment or whatever; if you hard-code them, then obviously it will fail in case it runs in those environments, and those are localhost passwords, so they are not really impacting anything. We'll be using the dev app URL with this user to execute the 01 and 02, and we'll use the postgres URL with postgres/welcome to recreate the app database. Now we are going to have some consts for the SQL files: the SQL file that recreates the database — we're just putting it there as one file — and then we're going to have the SQL dir, which is the directory of this file and which contains the other files as well. So the 00-recreate-db will run with the postgres URL, and all of the other files in this folder will run with the app URL. Now, the first thing that we need to do, obviously, is create a new database pool, which will return a Result of Db with the sqlx Error; it will take the connection URL, and then we are going to do the imports here — the Duration is from std time. Now we are going to create our first pexec, and that is going to execute one file: it's going to take a database pool and the file to be executed, and it will return a Result of void with the sqlx Error. We're going to do a first version that is going to be very simplistic, and eventually, in a future episode, I'm going to improve it, if I still don't find a library to execute SQL files. So let's go into this function: we're going to do our debug print (we're going to use info in this case), and then we're going to read the file, and we can put a question mark, because sqlx Error actually has a From for the std io Error — that's why it works here. Then we are going to split the file into the SQL sections that we can execute with sqlx, and we definitely need to make the split
more SQL-proof: if you have SQL functions or stored procedures, it is not going to work very nicely. In future episodes I will fix that, but right now I just wanted to have something that works nicely enough. Now I do a for sql in sqls — those will be our strings — and I'm just going to do sqlx query, execute, and our db. sqlx can obviously execute one query at a time; there is an execute-many, but it doesn't really work with a full file content, so that is the way I found to do it pretty well, and now I can return Ok. Now that we have that, we are going to make a function, init_db, which will return a Result, and we're going to make the error very generic on this one: it is going to be a Box of the std Error. I'm not using anyhow, because I like anyhow to only be available in my examples and my unit tests; for my application code, I prefer to be forced to be structured from the get-go. In this case, because this is used inside my application code for now, I cannot use anyhow, so I'm using this workaround. Now we're going to do our debug print, and we're going to create the app database with the postgres user: we're going to have a let root_db — we create our db pool for the root postgres user — and now that we have that, we can do a pexec of the root_db and our SQL recreate-db file, and then that will be an await, so that will execute the file. And because root_db has full privileges on the database, I don't really like it to be accessible later in the function. So rather than splitting that into its own function, I could put a drop(root_db) to make sure that nobody can use it below; or, another technique that I like, I just put that into its own code block. In this case, this code is scoped within this block, and the root_db will be dropped at the end of it — that is another way that I like. I don't do that often, but sometimes, when I don't want a variable to bleed below in the function, this is a nice way to do it. Now we are going to get our SQL files: we're going to build a Vec
We build a Vec of PathBuf: we do a read_dir of the SQL directory, then a filter_map with entry.ok() and the entry path, and then we collect; this way we get a vector of the PathBufs of this directory. We could use the walkdir library or whatever, but since, again, this is very simple, we don't have to be that fancy. One thing we don't want to forget is to do a paths.sort(), and the reason is that read_dir doesn't always come back well sorted, and we have all of the file names sorted by design precisely so that we can execute them sequentially. So, by doing a paths.sort(), making sure everything is well sorted, we can then execute each of the SQL files. We get our app_db from our app db URL, and we do a for path in paths; then, if we can get the path as a string, we replace the backslashes here. This is a little hack for Windows: the good news is that Rust supports forward slashes, but when you get the path as a string it will give you the slashes the Windows way, so this way it works on both systems. Then we only want to take the .sql files and obviously skip the recreate one, so we say: if the path ends with .sql and it is not our recreate-db file, we do a pexec with app_db and our path. We could have put this condition into our filter_map up there as well; that also works. Then the other thing we need to do is return the Ok of void, and that's it, we have our init_db. So it was quite a bit of work, but now it's pretty cool because we can recreate the database all the time, and the only thing that really needs to be fixed over there is this pexec when you add SQL functions, which, again, is going to be in the next episode, where we'll make it a little bit more versatile. So now what we want is to expose init_db in the parent module, and we're going to create a function that initializes our environment for development; it will be called from main.
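The file-collection and ordering logic above can be sketched with plain strings standing in for a real read_dir, so the sort/filter behavior stands alone (the "00-recreate-db.sql" name is illustrative, not necessarily the repo's exact file name):

```rust
// Sketch of collecting, sorting, and filtering the SQL files to execute.
fn sql_files_to_run(mut paths: Vec<String>, skip: &str) -> Vec<String> {
    // read_dir does not guarantee order, so sort for sequential execution
    paths.sort();
    paths
        .into_iter()
        // normalize Windows backslashes so the matching works on any OS
        .map(|p| p.replace('\\', "/"))
        // only take .sql files, and skip the recreate-db one
        .filter(|p| p.ends_with(".sql") && !p.ends_with(skip))
        .collect()
}
```

In the real init_db, each returned path would then be passed to pexec with the app db pool.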
But again, this is only for development, obviously. So we make the function async, and it's going to be init_dev. Now, one thing we are going to use is the OnceCell from Tokio. We are not going to use the new OnceLock, because OnceLock is not for async, and because we have async functions to call we need a once cell that supports an async closure, and that is what the Tokio OnceCell does. So now that we have that, we can follow the same pattern; I don't call it INSTANCE because in this case it's just going to be a OnceCell of void, so I call it INIT, and the Tokio OnceCell has a const new, so that is the way it works; otherwise it's very similar. Then we have the get_or_init, same thing, and now it's a closure with an async block; the cool thing now is that we can have async in there. So let's collapse the module to make some room: we have our debug print, and then we call our _dev_utils init_db and await it, and again here we're going to break the rule and do an unwrap, because if it breaks early it's completely fine. Then obviously the get_or_init is awaited; that was the whole point of the Tokio OnceCell. So now that we have that, we can go back to our main, and inside our main, after our tracing (so that we can trace it), we add a "FOR DEV ONLY" marker and call our _dev_utils init_dev. Here we are not putting a question mark at the end because it will just fail if the thing cannot get initialized. Now, if you go to the README, we have a "Starting the DB" section, and if you have Docker, we have the password of the root postgres user as welcome; so you can take that and, in a terminal (usually I have a side terminal that I leave running for a while), run it, and that will run our database. So now, if we do a cargo run or cargo watch run, we are going to see our trace info that initializes our database before the server starts, which is pretty good.
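The run-once initialization pattern described above can be sketched synchronously with std's OnceLock; the video uses tokio::sync::OnceCell precisely because init_dev has to await async work, which OnceLock cannot do:

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::OnceLock;

// Counter only exists to demonstrate that the closure runs exactly once.
static RUNS: AtomicU32 = AtomicU32::new(0);
static INIT: OnceLock<()> = OnceLock::new();

fn init_dev() {
    INIT.get_or_init(|| {
        // one-time setup (recreating the dev database in the real code);
        // unwrapping there is acceptable: failing early in dev is fine
        RUNS.fetch_add(1, Ordering::SeqCst);
    });
}
```

With tokio::sync::OnceCell the shape is the same, except get_or_init takes an async closure and is awaited.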
So we have these three files that have been executed. Now, another cool little trick that is in the README: if you go to the second line, you can see that there is a docker psql command. You can take it, open another terminal, paste it and run it, and now you have a psql session on the Postgres server that we just started. You can do \c to connect to the app_db, then you can list all of the tables that we have, you can select from user (so we have our demo1), and you can describe a specific table; here we are describing task. That is pretty cool. I don't use it often, because usually my unit tests cover the whole thing, but sometimes, if I want to double-check something, I can start a psql session on my database. So now, again, it's very cool, because every time I change my server and press save, it recreates the whole database and everything if I have cargo watch running; and if I don't want to recreate every time I press save, I don't start cargo watch, I just code, and then, when I do the cargo run, everything is fresh. Okay, that concludes our chapter 4; that was a little bit more code, but now we have a very good foundation for a very iterative process, and we have completely embedded the database into our development workflow. Okay, so now let's start chapter 5, which is going to be the first part of our model layer: we're going to do our first task BMC, short for task backend model controller. First we open the model folder, which is our model module. In this module we have our own error, and we are re-exporting the Error and the Result alias as the module's Error and Result, so we are following our best practice. Then we have the model manager. In the Rust Axum course I called that the model controller, but as we mature it in this code base we're going to split it: the model manager will hold the state, and the model controllers will be the constructs that do the access to the different model entities. So the design is as follows: the model layer normalizes all application data types and access, and all
application code that accesses data must go through the model layer. The model manager holds the internal states and resources needed by the model controllers to access the data; so, for example, the model manager will hold the database pool, the S3 client, or the Redis client. The model controllers, like TaskBmc or ProjectBmc (where BMC is for backend model controller), implement CRUD and other data access methods on a given entity: task, project, and so on. Every access is organized per main entity first. And the BMC, as I've said, is short for backend model controller because, when we do a UI, we're going to have the mirror of it, which is going to be a frontend model controller; it's going to be much simpler, but it will allow exposing this to the frontend. Then, in frameworks like Axum or Tauri, the model manager will be used as the app state. So this is one model manager, multiple model controllers, and the model manager is designed to be passed as an argument to all the model controller functions; we are going to see how we can restrict the visibility of those resources to the model layer only. So now that we have this overview, we go to our model manager, and our goal will be to add our db, which will be our database pool connection. Before that we're going to have to do some work, but that is the goal; then, in new, this is where we instantiate the database pool connection and set it on the model manager. To create a new database pool, we're actually going to create a little layer, and this one is going to be very simple for now: it's going to be store, and we create it as a mod.rs, because eventually it can grow bigger. So now, in the store, since it's a mod.rs, we have our code-region modules; we have an error, and we'll create an error, a very simple one, and I have a little code snippet that creates that whole boilerplate. The design of the errors that I follow is based on enums, and they derive
Debug at least, and the reason (this is going to sound a little weird at the beginning, but it's actually a big reason) is the to_string: the Display that I implement just formats the Debug format. The reason for that is that all of the backend server errors are going to be logged as a request log line, and usually that will be a JSON format, newline JSON, so I do not need human-friendly text. I'm going to make sure that the variant names and properties, if there are any, are very self-describing, so that I don't have to reformat the message; anyway, the consumption of these errors will be by ingestion. That is why I'm following this for backend services. Now, for command-line tools, I like to use thiserror, because it's very convenient for formatting, but for Tauri or Axum backend services I usually go with this model. Then, what I do is that my Error implements Serialize, and this, again, is because I am in the backend; it's not because I'm sending this error to the client (you do not want to send these to the client), but because I want to log them in the newline JSON format, which is very flexible, and then I can push them into CloudWatch or another log service. And obviously, at the end, we don't forget to implement the Error trait for our Error, so that we can use the question mark nicely. So now that we have that, we're going to add one error; this one is going to be very simple, because we're going to use mostly sqlx, and later SQLB, so the store layer in this first episode is going to be very light. The first error will be FailToCreatePool, and in this case I decided to hold the String, which is a to_string of the sqlx error; we could also hold the whole sqlx error object there, and that would be completely fine as well. Then, to follow my pattern, I create the Result type alias on this Error, so that when I go back to the store mod.rs I can re-export this Error and this Result for the module.
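The error boilerplate described above can be sketched std-only (the variant and message are illustrative; in the store module the variant wraps the sqlx error as a String):

```rust
#[derive(Debug)]
pub enum Error {
    FailToCreatePool(String),
}

// Result alias re-exported by the module, per the pattern above.
pub type Result<T> = core::result::Result<T, Error>;

// Display just forwards to Debug: backend errors are serialized into
// structured (newline JSON) log lines, so no human-friendly text is needed.
impl core::fmt::Display for Error {
    fn fmt(&self, f: &mut core::fmt::Formatter) -> core::fmt::Result {
        write!(f, "{self:?}")
    }
}

// Implement the Error trait so `?` works nicely.
impl std::error::Error for Error {}
```

The self-describing variant name plus its property carries all the information; no message formatting needed.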
Now, on this one we're going to use our config crate, because that is the whole point: we're going to have our db_url in the config, and then we're going to use sqlx for the pool options, the Pool, and Postgres. So first we define our type alias Db for the Postgres pool. Now, in the config we do not yet have our db_url, so we bring up our cargo.toml, and we are going to organize our code as if it were on Kubernetes: we're going to have a section for secrets, which in Kubernetes would go into a secret config file. And this is an important point: the keys and passwords that we have here are just welcome-type passwords; they don't need to be encrypted. We don't have to have all of that complexity, because in a deployment that will be managed by the container, Kubernetes, KMS, or whatever on Amazon; in our case, in local development, everything can be welcome, so we don't need to encrypt with some fancy scheme here. We can just build our SERVICE_DB_URL like this one, and that would be our app user with our password. The only reason why I have dev_only_pwd rather than just welcome is that I don't want both to use the same password, so that my dev setup kind of makes sense; that's the reasoning behind it. Now that we have that, we can bring up our config; we are going to have a db_url, and, you guessed it, in load_from_env we add db_url from get_env("SERVICE_DB_URL"). Everything is becoming very simple now, and we still have our prefix, so that we know what comes from us; but sometimes, in Kubernetes or Docker scenarios, you will have other environment variables that might not have this prefix that you still want to read, so you cannot assume that all of your environment variables will have such a prefix. Okay, so now we can go back to our mod.rs and create a new_db_pool that will return a Result of Db; that will be the pool options with our max_connections and our connect, and we can use our db_url. Now, for the max
connections, we could have put that in an environment variable, and actually a good strategy is to put it in the config even if it doesn't come from an environment variable; that is a good strategy as well that I like to have, and this will be a good improvement for later. Then, obviously, that is an await, and then we map the error into our FailToCreatePool error; in this case we do a to_string because we need to serialize, but actually, as we're going to see later, sqlx implements Display and there is a macro that we can use as well. For this one we're doing it manually, and it works fine. So now we can go back to our model mod.rs, uncomment this one, import the Db from our store, and then, in new, we do a let db = new_db_pool().await?, and now we can build our ModelManager with the db. We press save, and here the question mark gives an error, because new_db_pool returns a store Error while the Result is from the model layer. To resolve that, what we're going to do is bring up our model Error and implement the variant and the From for the error. The way that I like to do it is to have a section like that which lists all of the modules I want to wrap: I put a Store variant, I import store, and I wrap its Error, so that I see clearly what type of module I'm using in this error. Then, to make sure that the Rust compiler can go from the store error to the model error, we implement our Froms. I put that in a code region where I'm going to put all my Froms, and I have a little code snippet where I put the Store module and the Store variant, and that's it. There are some macros and crates you can add that will do that for you, but in my case, because I have this little snippet, the overhead is relatively low, and I like it because I keep full control; but again, this is more of a personal preference.
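The variant-plus-From pattern just described can be sketched std-only; the module and variant names mirror the ones in the video, with the sqlx machinery mocked out:

```rust
// Stand-in for the store module's own Error.
mod store {
    #[derive(Debug)]
    pub enum Error {
        FailToCreatePool(String),
    }
}

// Model-layer Error, with a section wrapping the modules below it.
#[derive(Debug)]
pub enum Error {
    // -- Modules
    Store(store::Error),
}

// The From impl is what lets `?` convert a store error into a model error.
impl From<store::Error> for Error {
    fn from(val: store::Error) -> Self {
        Error::Store(val)
    }
}

// Mock of new_db_pool failing, to show the error bubbling up.
fn new_db_pool() -> Result<(), store::Error> {
    Err(store::Error::FailToCreatePool("bad url".into()))
}

fn model_new() -> Result<(), Error> {
    new_db_pool()?; // `?` works thanks to the From impl above
    Ok(())
}
```

This is how each module keeps its own error while still cascading cleanly up to the caller's error type.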
The important point here is to make sure that you have the right error structures, that each module has the errors it needs to have, and that they cascade back up correctly. Okay, so now that we have that, the little question mark over there is resolved, and that is basically our constructor that returns the ModelManager. So now what we want is to expose the db, the database pool, only to the model layer, while keeping new accessible to other modules such as main.rs. The way we're going to do that is with a pub(in crate::model) function db, which returns a reference to Db; this in crate::model restricts it to the modules below model, while the function new is accessible to all of the code base that has access to ModelManager. That is the way we are doing it; it's not like a developer cannot make a mistake, obviously, but at least here we are putting the right constraints in our code. This is intent-driven: only the model layer needs access to the store, and that is the way to accomplish it, ensuring that the sqlx Db pool reference is returned only for the model layer. So if we go to main now, we have our ModelManager::new() and that still compiles, but if we were to do a let db = mm.db(), that would fail, and if we mouse over we get "db is private", which is pretty neat. We remove that, and now we are ready to create our task model controller. We create our pub mod task, and that will be a single file; that is enough for now. Usually these model controllers are among the leaves, so they don't need their own mod.rs; having just a file is enough. They will use the model errors, but obviously, if it gets more complex, you can have another level of hierarchy. So we're going to use the ModelManager, we'll use the model Result, and then the serde Serialize and Deserialize along with the FromRow, which will allow us to read the data from the database with sqlx and translate it into a struct.
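The visibility restriction described above can be sketched in a single file; note that the real code uses pub(in crate::model) on the db accessor because it lives in model/mod.rs with sibling files, while in this one-file sketch leaving it private achieves the same "model tree only" restriction (Db is a placeholder for the sqlx pool):

```rust
mod model {
    pub struct Db; // placeholder for sqlx::Pool<Postgres>

    pub struct ModelManager {
        db: Db,
    }

    impl ModelManager {
        // Constructor: accessible to the whole code base (e.g. main.rs).
        pub fn new() -> Self {
            ModelManager { db: Db }
        }

        // Returns the db pool reference, for the model layer only.
        // Real code: pub(in crate::model) fn db(&self) -> &Db { ... }
        fn db(&self) -> &Db {
            &self.db
        }
    }

    // A BMC submodule inside model can still reach db().
    pub mod task {
        use super::ModelManager;

        pub fn can_access(mm: &ModelManager) -> bool {
            let _db = mm.db(); // fine: we are below `model`
            true
        }
    }
}
```

Calling mm.db() from outside mod model would fail to compile with "db is private", exactly the mouse-over error mentioned above.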
So that is a convenient way to get the struct from a Postgres row, and eventually we're going to have more dependencies, but that's it at the beginning. Then we're going to have a code region for the task types; typically I like to have my types at the top, in their own code region. Often you write them at the beginning and then they evolve less than the behaviors, so that is the way I split the code. The first type is going to be Task, and that is what gets sent back from the model's get, list, and so on. It's going to have our pub id: i64 and a pub title: String; it's going to implement Debug, Clone, and then FromRow, because it will come from a SQL query, and then, finally, we also need to implement Serialize, because when we do the APIs we will serialize it into JSON. Then we're going to create another type, which is our TaskForCreate, and this type is for when the APIs want to create a task. Here a very important design concept: these structs are views on your database tables; they are not the exhaustive representation. Sometimes you might have things in the tables that are not the business of an API, and often you want to expose only a subset of what you have in the database. For example, with TaskForCreate, you don't want people to be able to set the id of a task, and you do not want an API to be able to set the creator id, or to modify the creation or modification time of an entity. That's why you split these things into what gets read and what gets pushed. So, in this case, for create, we are deciding, just for this example, that you must have a title, and we're just going to implement Deserialize, because that is something that comes in. Now, similarly, we're going to have a TaskForUpdate, and that is for when we are updating an entity; right now it's going to be a little bit silly, as it's just going to be the title as an Option of String. So this one also
needs to implement Deserialize. If we look a little at our three types: we have Task, and that is what gets sent back from the model layer through the API; then we have TaskForCreate and TaskForUpdate, which are things that are sent to the model layer to update the data structures. Because of that, only Task needs to implement Serialize, while TaskForCreate and TaskForUpdate implement Deserialize. Okay, so now that we have that, we can start implementing our task backend model controller, and the design pattern is relatively simple: it's going to be a struct TaskBmc, with no properties for now, and then we're just going to impl TaskBmc and implement our CRUD. We are starting with CRUD, but eventually you can have any kind of method; it can be very specific to the entity. The goal here is to have each entity's BMC be specific to what it needs to be, and then we're going to see how we can share implementations between BMCs. The create is going to be relatively simple: it returns a Result of i64. By design, we want the create to be very granular; it doesn't have to return the Task back, so we're going to make it very efficient and just return the id of what was created. But it's going to take the ctx, which is our crate's Ctx, and that is the information about the user requesting this action. The web layer creates it in the case of a web application, but the point here is that the Ctx is decoupled from the framework, from Axum, Tauri, or whatever. This way, once it's in the model layer, it doesn't have to know how the request was made; it knows it can always expect a Ctx that carries the context information needed to make sure the action can be performed. And the ultimate goal of the design, which is coming in a future episode, is that you put the security layer at the model layer, not at the web layer, so that you can reuse the model layer in different types of services.
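The three view types above can be sketched as plain structs; the real ones also derive sqlx::FromRow plus serde's Serialize/Deserialize, which are omitted here so the sketch stands alone:

```rust
// Sent back *from* the model layer (real code also derives FromRow,
// Serialize for the JSON APIs).
#[derive(Debug, Clone)]
pub struct Task {
    pub id: i64,
    pub title: String,
}

// Sent *to* the model layer to create a task: no id, no timestamps.
// The struct is a view on the table, not an exhaustive representation
// (real code derives Deserialize).
pub struct TaskForCreate {
    pub title: String,
}

// Sent *to* the model layer for updates; every field is optional
// (real code derives Deserialize).
pub struct TaskForUpdate {
    pub title: Option<String>,
}
```

Splitting "what gets read" from "what gets pushed" is what prevents an API caller from, say, resetting an id or a creation timestamp.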
Whether job services, web services, or whatever: the web layer is just about authentication and creating the Ctx, and then privileges and everything else are at the model layer. That really simplifies a lot of design down the road, and this is why we are putting it into the system early on. Then we're going to have our ModelManager argument, which we'll shorten to mm, and then we're going to have our task_c, which is our TaskForCreate. Now, the ctx and the ModelManager are going to be in all of the BMC functions, and then you have the function-specific arguments, which in this case is the TaskForCreate. So the create is going to be relatively simple: we get our db (we can do that because we are within the model layer), which returns our db pool, and then we have our let result where we call sqlx and do a query_as. That is the way it works: we use an (i64,) tuple, and we give our SQL here, insert into task (title) values ($1), which we parameterize, and we say returning id. Another best practice that I always follow: I never do raw SQL, it's always prepared statements; that is a nice protection you get almost for free from prepared statements. Our returning id maps to the i64 that we have over there; that is the way sqlx works, and there is another generic here that you can let Rust infer, which is why we are putting the underscore. So now that we have that, we can do a bind(task_c.title), and then we do a fetch_one(db); at this point Rust has enough information to infer that we are using Postgres and so on, so everything compiles nicely. And then, because it's a future, we obviously await and use the question mark, and here, you guessed it, in our model Error we don't have the sqlx variant, so we're going to add that right now: we add a section, calling it Externals, with Sqlx(sqlx::Error). And now
the trick is that we do not have Serialize for sqlx::Error. We could use the to_string technique and hold a String there, but this is actually a pretty neat one: we can use serde_with, with its serde_as annotation, and serde_with is really awesome, I strongly recommend it. So we add the serde_as annotation, which has to come before the serde derive, and then, on the Sqlx variant, you add a serde_as(as = "DisplayFromStr") annotation and pass the error. sqlx::Error implements Display and FromStr, and that allows serde to do the right thing here. So now that we have that, we can go back to the model and just implement our From, from sqlx::Error to our Sqlx variant, and that's it, well done. That solves this error, and now the cool thing is that anything that comes from sqlx will be handled by our model layer. And one more thing we can do: our result over there is actually a tuple of i64, so what we can do is destructure it into (id,), where id is just the i64 that we wanted, and then we can just do Ok(id). And that's it, we've done our create. So now let's go and write some unit tests. It's always good to start early with unit tests, and you don't have to stress too much about high coverage; doing some is much better than doing none. So at the beginning, anything that helps: just write a unit test for it, even if it's not perfect, and then clean them up over time. If you do that early, it's going to speed up your development, because it gives good clues about what works and what doesn't, and it will also make your code much stronger; and when it's time to add more unit tests, because your code is more mature, it's going to be much simpler. Same thing here: I have a code snippet that creates the test scaffold. I'm using allow(unused), but you can actually
remove it; it isn't really useful in this case, because we already have it at the module level. I always use use super::*, and then I like to use anyhow, so that I don't stress too much about errors. Now I have another snippet here to add a Tokio test, and we're going to test create. Okay, so the conventions that I like to have: I always prefix with test_; that is optional, but I usually have the name of the function that I'm testing, and then I add the _ok suffix. If I have different tests on the same function, I'm going to say _ok_simple, for example, and _ok_double_create, to make sure that I can narrow to only one when I want. Then, for errors, when the test needs to check that something failed, the convention that I follow is to put the _err_something suffix, or just _err. And right now, because we have only one create test, I'm going to do a test_create_ok. Now that we have that, we are going to do the setup and fixtures, and for that we are going to want to use our _dev_utils: it would be nice to have something that returns a ModelManager, and right now init_dev doesn't do that. If we bring up init_dev, we see that it doesn't return anything, which is what we want, because we are calling it from main.rs and we don't want to create an extra ModelManager. So what we're going to do is create another function that initializes the test environment: an async init_test that returns a ModelManager. Now, on this one we're going to follow the same pattern: we have INIT, which in this case could be called INSTANCE, and a get_or_init that calls init_dev and then our ModelManager::new(). And here again I'm breaking the rule. Why do I unwrap? Because this is just for unit tests, so if it crashes, it's better that it crashes early rather than late; it's completely fine, I don't have to over-design it. But that returns a reference to the ModelManager from the init, and what we want
is a ModelManager, so what we're going to do is an mm.clone(), and that is completely fine, because the ModelManager is built to be clonable: everything will be behind an Arc, so clone is acceptable. Now that we have that, we can go back, import our _dev_utils, and do an init_test().await; now we have our mm, and again it doesn't fail, it will just crash, which is fine, so we don't need the question mark. And here we're going to do the ctx: we're going to have a root ctx. Okay, so what is the root ctx? If we go into our Ctx, right now it's very simple: it just has the user id, and we have the constructor, which is actually constructors, and we have root_ctx, which returns a Ctx with a user id of 0. So that is going to be our root ctx. The rationale for this code is that we have the Ctx having the concept of root, which the system will use for the calls that the system has to make, and then we have new, which takes the user id; our auth layer would create the Ctx with new, and in that case we are making sure that the id cannot be zero. With this design we are intent-driven: we know that when we're asking for root_ctx it's because we don't have a user, and then, when we have a user, we know we can do new, but we cannot get the root ctx through new. So now that we have that, we have our root ctx, and we can make our call. We are going to have a fixture title, and one best practice that I follow is to prefix all of my fixtures with fx_; this way we always see very quickly what is a fixture versus a value that we're getting back. Then the next practice that I like is to have the fixture content match the test name, test_create_ok; that way it makes things a little bit more unique, and I could actually add "task" as well to make sure that it doesn't conflict with other create unit tests.
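The Ctx design just described can be sketched std-only: a root constructor for system calls, a new constructor for real users, and user id 0 reserved so root cannot be forged (the error name is illustrative):

```rust
#[derive(Debug, Clone)]
pub struct Ctx {
    user_id: i64,
}

#[derive(Debug)]
pub enum CtxError {
    CtxCannotNewRootCtx, // illustrative name for the "id cannot be 0" rule
}

impl Ctx {
    // Root context: only obtainable through this constructor, used by the
    // system for the calls the system has to make.
    pub fn root_ctx() -> Self {
        Ctx { user_id: 0 }
    }

    // User context: the auth layer creates this one; user id 0 is
    // rejected, so root cannot be forged via new.
    pub fn new(user_id: i64) -> Result<Self, CtxError> {
        if user_id == 0 {
            Err(CtxError::CtxCannotNewRootCtx)
        } else {
            Ok(Ctx { user_id })
        }
    }

    pub fn user_id(&self) -> i64 {
        self.user_id
    }
}
```

Two constructors make the intent explicit at every call site: root_ctx() means "no user yet", new(id) means "a real authenticated user".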
Now we can do our exec: we create our task_c with the fixture title (there would be an argument here that the task_c is more a fixture than a value, but for now this works fine), and then we call our TaskBmc create with our ctx and ModelManager and our task_c, and that returns the id. Then, for the check, we use sqlx: we do a select of title from task where id = $1, bind it to the id, which is what we got back, do a fetch_one with our db, and await; then we make sure that the title we get from the query matches our fixture title. We're also going to do some cleanup; there is a whole debate about who is cleaning what and when, but in Rust this way works pretty well, so we do a delete from task where id = $1 and we bind it. Even when I write unit tests, I actually bind; I do prepared statements, I don't construct raw select statements. We do an execute, await, and then in sqlx we have rows_affected, which returns the count of how many rows were affected by the statement; so we do another check to make sure that we deleted only one. That might not be necessary, but it's pretty cheap to do anyway. Now we're going to run our tests. We bring up our terminal and make sure to stop the cargo run; this is one thing I've learned before: on Windows, when you have the two running, it creates issues. If you go to the README, you have a unit test section, and we can do a cargo test watch that will give us a kind of live-reload testing; actually, it's not really reload, it's more watch. We can do it this way, where we are very specific, or we can take that and add a little filter, which is our test_create_ok; usually that's what I do, to test only one at a time, and I add the -- --nocapture to make sure I see the println output, and I'm going to show why later. Then, when I do that, we
have our test succeeding, which is pretty cool. Now, one of the reasons why I have the -- --nocapture is that typically what I do is my debug print with my little markers; I can print the title, for example, and that will run, and then, when it looks okay, I write the assert. So often the assert is commented out, or I don't even have it, and when I'm happy with the data, then I write the assert; that is a little bit my workflow. My goal, all the time, is to have a workflow where my unit tests are part of being lazy and writing code faster. Now, there's one more thing that we need to do, and it's important to do it early: this test_create is testing a database that is getting reset and everything, but tests under cargo test run in parallel, and it's very hard to synchronize them, because you are not even sure they are going to be in the same process; it gets very tricky. So one thing that I like to do, for all of these tests that use external resources, is to use a little library that we're going to add into our Cargo.toml dev-dependencies, which is serial_test. This one is pretty cool and very simple: you go to your test function and you just add the annotation, then, if you make sure to import the serial attribute, everything that is annotated with it will get executed serially. So now all our database tests can use that annotation to make sure they don't step on each other; that will obviously slow down the unit tests, but it will make things much simpler. Now that we have that, we're going to implement our get, with the same signature again: we're going to have our ctx and our ModelManager, but this time the argument will be the id; it takes an id and returns a Result of Task. For that, very simple: we get our db, and then we do our sqlx select * from task where id = $1; we're going to change that later to make it much more structured, but right now it works
very well, and we do a bind(id) for this $1, obviously, and then we do a fetch_optional, which returns an Option of Task. The good news is that we can do an await with a question mark, which will fail if there is a select error or a database error; but then, because our fetch_optional returns the Option of Task, we can transform the None into an error of our application, of our model layer, which is going to be Error::EntityNotFound, and we give the entity name. By convention, we're going to say that the entity name is the table name, plus the id; that is obviously a little implementation detail of how you want to reference your entities, but the pattern here is very simple and very powerful. So, again, the first question mark is going to fail if there's a problem with the select statement or the database, and the second one takes care of the Option::None; we're applying our own convention, saying that a get always needs to return a Task, and if it doesn't, it returns an EntityNotFound. Then we can return Ok(task). As for the error, we haven't even imported it, so we're going to use the model Error, and the model Error doesn't have EntityNotFound; for that, we bring up the model Error and, at the top, we add EntityNotFound with the entity, which is a static string (it's fine to be static in this case), and the id. All of that implements Serialize, because they are simple types, so the EntityNotFound will get serialized very nicely. Okay, so now that we have that, we're going to implement a new one, which is going to be our delete for now. Similarly, we have our db, we have our delete from task where id = $1, we bind the id, then we execute and await, and then we get the rows_affected. Again, same concept as before: the convention that we're going to have in our API is that when you delete something, it must be there, otherwise it returns an error.
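The two-question-mark pattern of get() can be sketched with the query mocked as an Option; in the real code the first `?` is on fetch_optional(db).await, and the second is the ok_or below:

```rust
#[derive(Debug, PartialEq)]
pub struct Task {
    pub id: i64,
    pub title: String,
}

#[derive(Debug, PartialEq)]
pub enum Error {
    // entity name is the table name by convention, plus the id
    EntityNotFound { entity: &'static str, id: i64 },
}

// mock_row stands in for fetch_optional's Option<Task>; db/select errors
// are elided in this mock.
fn get(mock_row: Option<Task>, id: i64) -> Result<Task, Error> {
    // the "not found" convention: a get must always return a Task
    mock_row.ok_or(Error::EntityNotFound { entity: "task", id })
}
```

The same EntityNotFound error is reused by delete when rows_affected comes back as zero.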
equals zero we're going to return the same error that we had on the get because the entity was not found and then we can return Ok(()) so now that we have the get and delete we can go to our test crate okay and then when we do the check and the clean we can actually simplify the code quite a bit we don't need to have the select anymore now if you really want to be dogmatic with unit tests you shouldn't be using the other functions under test but for all practical matters I think in this use case it's acceptable and definitely much simpler to use the model functions to do the check and the clean so what we're going to do now is we're just going to do let task and we're going to do a TaskBmc get with the id and now we're just going to do an assert on the task title and then that's it well done and then we're going to do the same thing for the delete it's going to be even simpler TaskBmc delete CTX model manager and then the id and the await and if we bring our cargo watch test press save then boom everything works fine now we're going to create another unit test to test the get not found so that's going to be underscore err which is going to follow our naming convention over there and we're going to add the suffix not found and now for our setup fixtures we are going to init the test and the context we are going to have the fixture id 100 and we know it doesn't exist because we start at 1000 then we're going to do our exec and that will do a get and in this case we don't want to have the question mark because we are going to test our result which is going to be a Result of Task or Error so for the check we're going to use an assert and then we're going to use another macro which is matches! and matches! works by comparing the result that we get from there to a pattern and so we have the entity and our id and by the way those cannot really be variables so you have to have the literals there and now we can do our Ok so if we do our
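The matches! check described above looks roughly like this, and it shows why the entity and id have to be literals, since matches! compares against a pattern, not a value:

```rust
#[derive(Debug)]
enum Error {
    EntityNotFound { entity: &'static str, id: i64 },
}

// matches! takes a *pattern* on the right side, so "task" and 100
// must be written as literals (outside variables would just be
// new bindings that match anything).
fn is_task_100_not_found(res: Result<(), Error>) -> bool {
    matches!(
        res,
        Err(Error::EntityNotFound { entity: "task", id: 100 })
    )
}

fn main() {
    let res: Result<(), Error> =
        Err(Error::EntityNotFound { entity: "task", id: 100 });
    assert!(is_task_100_not_found(res));
}
```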
cargo test and we're going to go back and then we're going to test our test get err and that will be the not found very simple and now if we were going to check the wrong entity name and we do a save then our test will fail which is what we wanted so let's make it work and we have this unit test done so again we don't have to go crazy and do everything but it's always good to do a few so we're going to do the delete as well so the fixtures we're going to exec and then do the check which is exactly the same one and then that's it now if we go back on top we're going to implement our list and for now in this episode the list won't have any filters or order by or limit or offset we are going to do that in future episodes so we are laying down the ground right now such that we can scale very nicely but for now the list is very trivial we do the db we do a select we do an order by we do a SELECT * for now we're going to fix that later and then we do a fetch_all that will return a Vec of Task and then we just do Ok(tasks) now one thing that I like is to follow the CRUD naming convention so create read which is the get and the list update which we don't have yet and the delete that is the way that I like to order things so now we're going to go down and we're going to create our test at the right place for list so now the thing with the list here is the fixtures will have to create many tasks and that could be kind of cumbersome all the time so what we're going to do is we're going to go to our dev_utils and next to the unit test helpers we're going to create a seed_tasks so that is a utility that we're going to use in our unit tests and so it's going to return and here we're going to be explicit a model Result of Vec of Task and it's going to take the CTX the model manager and the titles can just be an array of str so we're going to do the import and so in this case our dev_utils will have a dependency on the model layer which is okay the other way around is not but
this way is okay and now we're going to have our mut tasks and then for every title we'll create the task with the title and then we get the task with the id that we got from our create and then we push it to the vector and now we're going to return it so very simple now you need to make sure to import everything and now that we have that our unit test is going to be much simpler we edit our test we have our CTX the fx_titles is going to be an array of static strings which is fine while following our convention of duplicating the test name over there and then we're going to call our dev_utils seed_tasks and give the array of titles and we're going to exec and that means that now the tasks will be in this list over there now today because we don't have this filter kind of business this unit test is not very strong it's kind of a little bit clunky because if other tests create other tasks you will see them as well so you don't know if you are the only one hitting the database at any given time so the way that we're going to do it in this case is you want to make the names as unique as possible for these tasks and for the check we are going to filter through all of the tasks and we're going to make sure that they start with our prefix eventually our list will have a filter API so we'll be able to be much more precise in what we ask our back-end model controller and then we collect to get a Vec of Task and now that we have that we're making sure that we have only two tasks and then we're going to do the clean so now we have the tasks over there so we can iterate and then we just delete like that so this one is not a very robust unit test because we are doing this prefix stuff but it will work fine for now and again some is better than none because when this one breaks then you just fix it and make it better it's always a good way to start and the important point is to know what needs to be improved not to always aim for perfection now one little addition that
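The seed-then-filter-by-prefix check described above can be sketched in memory like this (no database here, the Vec store is mine, standing in for the model layer):

```rust
struct Task {
    id: i64,
    title: String,
}

// Sketch of the seed_tasks helper: create one task per title and
// return the ids, like the real helper returns the created tasks.
fn seed_tasks(store: &mut Vec<Task>, titles: &[&str]) -> Vec<i64> {
    titles
        .iter()
        .map(|t| {
            let id = store.len() as i64 + 1000; // ids start at 1000
            store.push(Task { id, title: t.to_string() });
            id
        })
        .collect()
}

// The check: since list has no filter yet, count only the tasks
// whose title starts with this test's unique prefix.
fn count_with_prefix(store: &[Task], prefix: &str) -> usize {
    store.iter().filter(|t| t.title.starts_with(prefix)).count()
}

fn main() {
    let mut store = Vec::new();
    seed_tasks(&mut store, &["other task"]);
    seed_tasks(&mut store, &["test_list_ok-task 01", "test_list_ok-task 02"]);
    assert_eq!(count_with_prefix(&store, "test_list_ok"), 2);
}
```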
we can do on this one since we don't have a filter is to actually make the names a little bit more unique so if we go back up we could make it dash task such that if we have another test list for another entity at least they won't step on each other so that is a very cheap tuning that you can do at the beginning and then eventually once you have the filter and everything you can get much more precise and now we can bring our terminal and we're going to stop it and we're going to test the list we could do a test list like that actually what we can do is a test model and we go by package now so we're going to test all of the task tests and now if you press enter then you get everything run like that that is again very nice the way you can filter things okay so that was quite a bit but we already have a very good start with some very good unit testing already now again usually I use that not only after I code my function but often I use that during the implementation where I do something like that where I don't have the assert and I just do a println! with the debug format and then I can see it in my terminal I fix the code I make it exactly the way I want it and then I do the assert that is the thing that I was saying before okay so that will conclude this first part of the model layer eventually we are going to do the update function and that will be part of the next chapter so now we're going to do the chapter 6 which is going to be our second part of the model layer and that will be the shared implementation so when we have our TaskBmc what we can see is all of this code that we are doing for the entity task we're going to have to do it for other entities as well so everything here in the insert in the title and so on we're going to have to repeat it for other types and obviously that is going to be true for all of the other functions the basic code will generally have a lot of common behaviors across entities so what we want is to be able to share implementation
between all of those BMCs so typically in most languages you will do that with inheritance but in Rust for better or worse we do not have inheritance I think it's for the better but that can be arguable regardless we do not have inheritance and what we don't want is to try to mimic inheritance in Rust because that will be a misfit but in Rust we have many constructs that can be extremely useful to share implementation and we have three big categories the first one is the traits you can have blanket and default implementations or just the traits alone then we have the generics that bring monomorphization and dynamic dispatch and finally we have the macros the declarative macros which is macro_rules! and the procedural macros now to share the implementation of the BMCs we are going to use a trait we're not going to actually use blanket or default implementations just the trait with the generics and that will give us monomorphization and then we'll be using the macros later in chapter 9 where we will use declarative macros for the RPCs routing okay so with that in mind let's go back to the code so we're going to open our model mod.rs and we are going to create a sub-module which is going to be base and we can just create a single file and it's important to note that all of those are kind of private to the model layer and only the task is going to be public and then obviously we are re-exporting the model Error and Result from our error module so now we can drill down into our base we're going to use the CTX the model manager and the model Error and Result we'll be using also the FromRow in this case and now we're going to declare our trait and we're going to call it DbBmc which is for the back-end model controllers that are db related in a way and the only property that it has and this is a little trick here is that we can use const so that will allow us to define a const property for the DbBmc trait and it's going to be a &'static str and that will be
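The const-in-trait trick, together with the turbofish call that comes up a bit later, can be sketched in a few lines; the SQL string here is only for illustration:

```rust
// Minimal sketch of the DbBmc trait: one associated const per
// implementor, which is the table name.
trait DbBmc {
    const TABLE: &'static str;
}

struct TaskBmc;
impl DbBmc for TaskBmc {
    const TABLE: &'static str = "task";
}

// A shared function can read the const through the generic parameter.
// Callers pick the implementor with the turbofish: select_sql::<TaskBmc>().
fn select_sql<MC: DbBmc>() -> String {
    format!("SELECT * FROM {}", MC::TABLE)
}

fn main() {
    assert_eq!(select_sql::<TaskBmc>(), "SELECT * FROM task");
}
```

This is how one generic base function can serve many entities with zero runtime cost, since monomorphization bakes the const in at compile time.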
our table name so now that we have that we can create our async function which is going to be just a get and note that in our case we're not going to create a struct and try to mimic inheritance a function is completely fine as a shared implementation in fact we are going to see that we can use generics to pass type information into functions so here is how it works we are going to add a generic to our get and we're going to call it MC for model controller and that will map to the concrete type that will implement our DbBmc and then we're going to have E which is going to be for the entity the entity that it returns and now we can have a function that takes our CTX we do underscore because at this point we are not using it and then our model manager which we're going to use and then it's going to have the same signature as the specific get which is going to be the id i64 now it's going to return a Result of E now why we have this little yellow line is because we're not using MC yet we're going to use it later now we are going to define our trait bounds with where and we're going to say that MC needs to implement DbBmc so we will have the TABLE const and then for E we're going to say that it is a FromRow because we are going to use sqlx so we want to make sure that we can get the data back as a normal struct that implements the FromRow now the FromRow has a little trick if you press save you are going to have an error and you're going to see that it takes two things a lifetime and a generic so it's FromRow with the 'r lifetime and then the R generic for the row so one thing that we might be tempted to do would be to add a lifetime over there and then we do a lifetime 'r here and then we're going to do the PgRow so that is the type we need to have I'm going to import it and that will actually compile so we would think that we would be happy but this actually doesn't really make sense because the lifetime 'r is not mapping to really
anything useful over there so the compiler won't have what it needs to trace back who owns the data so the right way of doing it and long story short is don't put it there and we are going to use a higher-ranked trait bound which is a special notation with for and a lifetime 'r and now that we have that we have the E which is a for<'r> FromRow which fits well into our get and our E is now kind of independent and has its own lifetime in a way now there's one more thing we need to do on our E since E is part of an async function we need to add the auto traits Unpin and Send so that is important such that it can work well with the async and the higher-ranked trait bound could be the subject of another video but this is a way to define our E for our application and now that we have that we can go to our TaskBmc get we take all of that and we are going to paste it there now Task is obviously not Task anymore but E so we need also to change it in the select statement and in our Error EntityNotFound so for that we're going to create our SQL with a format! we're going to take that put it there and then we're going to replace task with our MC::TABLE so now we can remove the SQL literal and just put it there now our Task is not Task anymore it's E we're going to rename the variable as well to entity to make sure that everything is consistent and now the last one is our error and you guessed it it's going to be MC::TABLE and then that's it we have our function it's very generic and we can use it in many places for many entities we're going to see later how we can even make it better for the SELECT * but this is already a very good start now if we go to our TaskBmc what we need to do is implement DbBmc from our base for TaskBmc and now we're going to put our const TABLE &'static str as task now we can go to our get take that function and replace it with base::get and the way we are going to pass TaskBmc Self is with the turbofish notation and we're going to put Self and then
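The higher-ranked trait bound shape can be shown without sqlx; here the row type is just a &str stand-in for sqlx's PgRow, and the trait is mine, mimicking the shape of sqlx's FromRow:

```rust
// Stand-in for sqlx::FromRow<'r, PgRow>: a trait parameterized by the
// lifetime of the borrowed row.
trait FromRowLike<'r>: Sized {
    fn from_row(row: &'r str) -> Self;
}

struct Task {
    title: String,
}

impl<'r> FromRowLike<'r> for Task {
    fn from_row(row: &'r str) -> Self {
        Task { title: row.to_string() }
    }
}

// The for<'r> bound says: E must implement FromRowLike for *every*
// possible row lifetime, so the function itself needs no lifetime
// parameter and E stays independent of any particular row.
fn get<E>(raw: &str) -> E
where
    E: for<'r> FromRowLike<'r>,
{
    E::from_row(raw)
}

fn main() {
    let task: Task = get("task 01");
    assert_eq!(task.title, "task 01");
}
```

In the real base::get, the bound additionally carries Unpin + Send so the async function composes cleanly.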
Task that is the way that we are defining those two generics that this function needs to take then we're going to pass our CTX our model manager and our id and that is async await make sure to import the base we can remove the underscore on this one because this one is actually using it because it's passing it down and now the Task that we have there which maps to that Result of Task the compiler can infer so what we can do is just put an underscore so that's why sometimes if you look at code you see the underscores because we let the compiler infer things when it can and now if we bring our cargo watch test press save everything is fine so that's why it's good to have the tests because at least we know that we didn't break anything so this pattern is going to work very well for the get because our E that we have for get is going to implement FromRow so that's great so for get and list that is going to work perfectly fine now what about update for example if we had an update over there and we're giving our id still and then we have our data but what would be the type of data because the FromRow is for reading rows not for writing them so if we want to have it on the generic side we need to be able to pass something that can be turned into fields and that is where we're going to use a little library that I've created which is sqlb a SQL builder on top of sqlx so the goal of sqlb is not to replace sqlx you still use sqlx in many places but sometimes it's a very convenient way to be able to build your SQL with a higher-level kind of builder pattern sqlx has a lower-level builder as well that you can use but this one is going to help us a lot to solve this problem so now that we have that what we can do is the following so to start we can even change the get so for the get we can change our sqlx query_as for sqlb and we are just going to do a select so that will return a select SQL builder from sqlb and once you have that you specify the table so in this
case it's going to be our MC::TABLE and after we're going to do an and_where id equal and we're going to pass the id like that so this is kind of a builder where you can add multiple and_where and so on so for complex requests use sqlx but for simple ones this can be a very good fit and eventually it will support more complex cases and underneath it just uses sqlx so that follows the builder pattern so we have a select builder and then we return it over there and then we have the same API as sqlx which is fetch_optional and we give our sqlx database pool and then we do the await and then we still have the same logic and we're going to see later how this can also help the case of create and update so now we are going to bring our cargo test terminal press save and then everything works so we didn't break anything now there's one more optimization that we can do on the get alone the sqlb select by default does a SELECT * like we had before but sometimes and moreover in our design the type is just a subset of your table so you don't want to do a SELECT * that would be inefficient and there is data that you might not want to get back anyway so what we would like to do instead of having the star is to have only the field names that the type has and that is where sqlb is also useful so if we go to our task types we can add a macro which is Fields and we import it from sqlb and that has a couple of methods that are going to be very useful to compose your SQL like that so this is not about being a big ORM it's more about being a SQL builder so in this case for example sqlb doesn't know what types match to what table it's more a way to build your queries and compose them so now that we have that and we have E annotated with Fields the E will implement a trait which is HasFields now in Rust you can do your trait bound like this you can duplicate it to add it up or you can also put it there in one line with a plus sign I
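The builder idea behind that select can be sketched in a few lines; the names below are mine and only mimic the shape of sqlb, not its actual API (and a real builder would bind values, not inline them):

```rust
// Tiny chained builder: each method takes self by value and returns it,
// so calls compose like select().table(..).and_where(..).
struct Select {
    table: String,
    wheres: Vec<String>,
}

fn select() -> Select {
    Select { table: String::new(), wheres: Vec::new() }
}

impl Select {
    fn table(mut self, t: &str) -> Self {
        self.table = t.to_string();
        self
    }
    fn and_where(mut self, cond: &str) -> Self {
        self.wheres.push(cond.to_string());
        self
    }
    fn sql(&self) -> String {
        let mut s = format!("SELECT * FROM \"{}\"", self.table);
        if !self.wheres.is_empty() {
            s.push_str(&format!(" WHERE {}", self.wheres.join(" AND ")));
        }
        s
    }
}

fn main() {
    let sql = select().table("task").and_where("id = $1").sql();
    assert_eq!(sql, "SELECT * FROM \"task\" WHERE id = $1");
}
```

The value of the pattern is that the shared base function only needs MC::TABLE and a couple of chained calls, rather than a hand-written SQL string per entity.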
like to have it below in this case because the one on top is already complex enough so I like to split them in this particular case but that is a personal preference now that we have that we can do our columns which is part of the sqlb SQL builder and then we are going to have an E and the E has an associated function on the type not even on the instance which is field_names and then voilà what we have is all of these names id and title as if we had them in the select like that and it does the double quotes and all of the fancy postgres stuff and then if you have other properties for example something else that you don't want to map into the database you can annotate it with field skip and so in this case it won't be a field it will skip it and if you want to rename it for example you have desc and in the database it's description you will put name description and if you do that it will match it up there like select id title description and now if you do that do not forget that the Fields in fact is just for the field names and the to-row but for the FromRow to be able to have the fetch_optional and fetch_all work you still need the sqlx rename annotation eventually I might have sqlb understand the sqlx rename I don't know if I want to get there but at this point you can do both annotations and it works very nicely so in short the sqlx rename is for the FromRow and the field name is for the Fields so that is the way it works so now we can remove our SQL and we are all set on our get now we can copy it and we're going to do our list and that is going to be very trivial and the reason why it's trivial is because we do not yet have the filter the order by the limit and so on and that is something that we are going to do later now the result is going to be a Vec of E the bounds are going to be exactly the same so we've done all of the hard work and now in our select instead of doing and_where we are not going
to put any filter and it's going to return a Vec of E and that is going to be entities and we are going to remove the ok_or EntityNotFound because it doesn't make sense here and if there's nothing it will be an empty array and now instead of doing fetch_optional we're going to do a fetch_all so sqlb really matches the sqlx API and now that we have that we can actually add one more thing which is the order by id so that will allow us to do a select and be kind of deterministic in the order bys so by default order by id and then you can enhance your APIs to allow the user to order by other things which is very important obviously now in sqlb if you put an exclamation mark that will be a descending order by and without the exclamation mark it's the default which is ascending that is a little trick here so if we close that and we're going to go on top we are going to create our create so our create will have the MC model controller generic which will be DbBmc it will return a Result of i64 so that is the API contract that we have so we're going to normalize that across everybody and then it will take the CTX and the model manager and then it will take a data of E and this E the only thing it has to implement right now is HasFields and in sqlb HasFields will allow you to extract the field names and values such that we can inject them without knowing the concrete type so that is a little trick that sqlb is doing so now that we have that we can get our db and we are going to get the fields that are not none and that will return a vector of Field and we're going to see later what the not-none means so now we're going to do our SQL insert and we will be returning the id then we have our table and then our fields so the strategy that we are going to take right now is that our fields which is going to be what we put in our data is going to be the not-none fields which means that any value which is either not an Option or an Option that is not
None will be set and otherwise if it's None it will be ignored now this is just a strategy that we are taking in this particular case but if you want you can take all fields in which case you assume that your E will have all of its data set to the right values and it might set null in the database so you have either strategy and both have their points for now we're going to go with the not-none fields which means that if we want to set null in the database we'll have to have another way of doing it perhaps other APIs or perhaps a flag or something like that and now what we're going to do is we're going to do a returning id and that takes an array of strings so that is what will allow us to get the id over there which is what we want to return and now we do a fetch_one to be able to fetch the id and we do our await and then that's it we do our Ok(id) and we're all set and so our insert is creating an insert builder and then we have our builder pattern over there so now that we have that we can do our update and I'm keeping the CRUD order over there so this is why I move a little bit up and down so now for the update we are going to have the same thing here the model controller and the E we are going to return void so our BMC is going to be relatively granular and then our web API the way we are going to design it is going to be a little bit more convenient but obviously you can have the web APIs map one to one to your BMC API we have our data E but our convention is to have our id so we don't have the id as part of E in our case the id is outside because it's not something you are modifying it is something that you are using as a reference and then data is the data that you want to modify whatever you are updating so now that we have that we're going to press save it's going to do some reformatting we have all our arguments over there and now we can do an Ok(()) and instead of doing an insert we're going to do an update and that will be our update SQL
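The not-none strategy described above, where only present values become columns and a None is skipped rather than written as null, can be sketched like this (the struct and helper are mine, mimicking what sqlb's not-none fields extraction does):

```rust
// A typical "ForUpdate" type: every column is an Option so the caller
// only sets what they want to change.
struct TaskForUpdate {
    title: Option<String>,
    done: Option<bool>,
}

// Collect (column, value) pairs for the fields that are Some; a None
// is ignored, so it never overwrites the column with null.
fn not_none_fields(data: &TaskForUpdate) -> Vec<(&'static str, String)> {
    let mut fields = Vec::new();
    if let Some(title) = &data.title {
        fields.push(("title", title.clone()));
    }
    if let Some(done) = &data.done {
        fields.push(("done", done.to_string()));
    }
    fields
}

fn main() {
    let data = TaskForUpdate { title: Some("new title".to_string()), done: None };
    assert_eq!(not_none_fields(&data).len(), 1);
}
```

The trade-off is exactly the one mentioned in the passage: with this strategy you cannot set a column back to null through this path, so that would need a separate API or flag.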
Builder and now we don't need the returning ID so we're going to remove the first one and just do an exec and that will return count and obviously we do not want to forget the end where ID equal ID now that we have that well I'm just going to check that the count is not zero and if it is we're going to return our entity.net found that is going to be part of the contract that we want obviously that is up to you when you develop your apis and then if it's okay we're going to move the OK over there and then that's it that is as simple as that and now we're just going to do the delete which is probably going to be the simplest one so the only thing it will take is the ID then we have the DB I'll delete the table then where the exec and then we're also going to check that count is not zero and then that's it and this if we go back up that will conclude our shared implementation for the base at least the beginning of it we have our dbbmc with a table constant and then our crude shared implementation that we can use in our specific back-end model controllers so now we can go back to task and what we're going to do is we don't forget to add fields to the test for create and test for update so the form rules is to read and the field is to write in a way except that we are also using fields to optimize the fields names that we want to get from the database as well so now that we have that we can go back for create we're going to replace that by the create function and it's going to have the same signature so now that we have that the get we already have done it and then the list are going to do the same thing base ourself underscore and then the same signature and that will return our result of vector task and then we can add the update and it's going to be trivial now and it's going to take the CTX the model manager the ID and the task for update and then we're going to do the update itself underscore ID and task queue you are going to do the same thing for did it and 
that's it so the nice thing about this design is that all of the application code is going to call the specialized BMC TaskBmc ProjectBmc or whatever entities you have and then you can share implementation behind the scenes and those implementations that you share don't have to be structs or within a trait or all these kinds of things so this is where we have to unwind a little bit from the Java days not everything has to be in a class or in a struct or in a trait and now if we bring our cargo watch test press save rebuild everything is okay okay so now we can build some unit tests and we're going to build our update and the reason why it's important to build that is as you refactor you want to make sure that the basics don't break the strategy is always to have good coverage of the 20 or even the 10 percent where things tend to break and you always want to have that as early as possible and then after you can climb your way up so we're going to have our dev_utils in the test our CTX our fixture title we're going to do just one task on this one and then our new title we're just going to suffix it we're going to seed our tasks such that we don't have to create them by hand and we are doing the remove(0) to pop it out basically so we have our task variable that way it is a little bit ugly but that works fine and now that we have that we're going to do the exec and that will be an update on CTX model manager our fixture task id and then we're going to create our TaskForUpdate which is just going to have our new title and then we do the await and then for the check we get it with our fixture task id and then we're going to do an assert on the title against the fixture title new so now if we bring our terminal with our cargo watch test we already pressed save so we can see that we have it there and then if we press save again we're going to have it there and by the way the order even if we have it serial is not guaranteed so that's why we have it in
different places so now if we go back up we're going to review what we have done so what we have done in the last two chapters is we have created our model task module which is responsible for all of the types and behaviors that relate to a task entity we have our Task type TaskForCreate and TaskForUpdate then we have our struct TaskBmc that implements DbBmc such that it can use the shared implementation of our base BMCs and then we have our implementation for the TaskBmc and now the implementations today are just a one-to-one map with the base BMCs but the TaskBmc could have specific implementations for example something that makes sense only in the task context and that would go there as well and in fact later in the UserBmc we are going to see that we are going to have behavior that only belongs to the UserBmc struct we also have created our tests such that we have the basic test model task and then in this chapter what we have created is our base BMC with our trait that has for now one property a const which is the table name and then the shared implementation for the basic CRUD functions and in this case just functions using generics was completely enough we didn't have to put them into a struct or other kinds of traits and that gives us a simple but scalable model to scale our BMCs so now let's do chapter 7 which is going to be password encryption and user login so let's look at our request flow so api login is going to be over https so during development it's going to be http but in production it's going to be over https the payload is going to be a username and a password in clear when we get into our handler the handler will use UserBmc and will do a first by username from the database that will return a UserForLogin struct which will have the username the password and we're going to have this multi-scheme here and then the second part will be encrypted obviously one-way encryption and then we're also going to have
the pwd salt per user as well as a token salt per user so we are going to have a secret at the application level in the application config and then on top of that we're going to have a salt per user so it's going to be relatively secure here so then the login will do an encrypt and a validate so the encrypt will generate our pwd that then we can compare to the user pwd that we got from our UserForLogin and then in the next chapter we'll do a create token where we create a token with this format and we'll put it in the cookie that will be the next chapter so let's go to our model mod.rs and we're going to add a module user I'm just going to create a single file that is enough for now and then if we get into it we're going to bring our create schema SQL make sure to add the comma and now we're going to add our auth properties which is a password that will be a varchar 256 should be enough and then we have a password salt we are going to use uuid for that and we are going to actually have the database create it for us so we could create it on the Rust side but actually it's good to have the database doing it and we are going to follow the same pattern for the token salt that we are going to use in the next chapter now that we have done that we're going to use the CTX the base and the model manager the model Error and Result and then we'll use serde the sqlb and the sqlx and then we don't forget to use the uuid I like to create my types under a code region and we're going to have our User type it's going to be Serialize because that is going to be what we're going to send back we're going to have the FromRow and the Fields like we had on Task and it's just going to have id and username the very important point is that the User type since we're going to serialize it you do not want to have the password we're going to have a UserForCreate which will have the Deserialize and that will have username and the password in clear so that is what is being sent from the client or even from
some server APIs that will create a new user with a clear password and that is only a Deserialize when it comes in we're not going to have an API to register so we're not going to really use this one but it's always good to have it anyway from the start and we're going to do the UserForInsert so the UserForInsert is when we are inserting a new user and that is more kind of an implementation detail of the user module so we don't really need to have it pub so we can actually remove the pub and then we're going to have our UserForLogin that will be read-only so that is to have the information necessary to validate the login so it will have the FromRow and that will be our id our username and then we have the password and token information which will be the password which is an Option of String and that will be encrypted with the scheme id in front in the next episode we are going to do the multi-scheme but if you start by encoding your scheme as part of your password then afterwards you are going to be future-proof when you want to change the keys or the algorithm of your passwords without locking the users out and then obviously the rest is going to be fully encrypted and then we're going to have our password salt and token salt which are going to be uuid and sqlx supports the uuid so all is good and now we're going to create another struct which is going to be the UserForAuth so we're going to make a distinction between the two so the UserForLogin is for the login logic and the UserForAuth is for the authentication logic so we are going to have the same derives FromRow and everything else and it's going to be id username and then we just need the token info now so it's going to be our token salt so in a way this one is a subset of the UserForLogin okay so now we're going to use a nice pattern here which is going to be the marker trait so we're going to have a trait which is going to be UserBy and this trait will have the trait bounds
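The scheme-in-front idea, where the stored password carries its scheme id so the algorithm can be swapped later without locking users out, can be sketched like this. To be clear, DefaultHasher below is NOT a password hash and the scheme format is mine for illustration; a real implementation would use a proper key-derivation function:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// One-way "encrypt" with a per-user salt, stored as "#<scheme>#<digest>".
// DefaultHasher is only a stand-in so the format is visible; do not use
// it for real passwords.
fn encrypt_pwd(scheme: &str, pwd_clear: &str, salt: &str) -> String {
    let mut h = DefaultHasher::new();
    (pwd_clear, salt).hash(&mut h);
    format!("#{scheme}#{:016x}", h.finish())
}

// Read the scheme id back out of a stored password, e.g. "#01#..." -> "01".
fn scheme_of(stored: &str) -> Option<&str> {
    let mut parts = stored.splitn(3, '#');
    parts.next()?; // leading empty piece before the first '#'
    parts.next()
}

fn main() {
    let stored = encrypt_pwd("01", "welcome", "salt-user-1");
    assert!(stored.starts_with("#01#"));
    assert_eq!(scheme_of(&stored), Some("01"));
}
```

Validation then re-encrypts the clear password with the scheme found in the stored value and compares, which is what makes a later scheme migration possible.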
of HasFields as well as our FromRow — make sure to import HasFields from sqlb. Now that we have that, we're just going to mark our user structs with this trait: we implement UserBy for User, we do the same thing for UserForLogin, and the same thing for UserForAuth. In a way this trait didn't add any information to these three structs, but it groups them together — these three structs now share this trait, and later that is going to be very useful; we're going to see how we use it. We are also adding the trait bounds over there such that we don't have to repeat ourselves later. So now that we have that, we can close our user types. We are going to create our struct UserBmc, and then we're going to implement our DbBmc for UserBmc, with our TABLE mapped to "user". Now we create our implementation, and the first function is going to be our get. The get will not just return a Result of User — we're going to make it a little bit more generic, but not too generic. So it's not going to be Result<User> but Result<E>, with E bound by UserBy. This way, because we have our marker trait over there, only these three types — User, UserForLogin, and UserForAuth — can be returned. If we were not going to do that, and application code would call UserBmc::get with other types, we would probably get a SQL error, but that would be the wrong intent. So this way get is still generic, so it's flexible, but it's limited to the typical returns. Now that we have that, we can just do our base::get, and we can even use the underscore — obviously we could put E, that would be fine too, but the underscore will be fine. Now, in the case of user we do have a specific get, in a way, which is going to be first_by_username, and it's going to return an Option of E; it takes the ctx, the ModelManager, and the username. The API convention I like to follow is: when we do a first, we return an Option of E, so None is an acceptable answer for a first request.
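To make the marker-trait idea concrete, here is a minimal std-only sketch. The real code also bounds on sqlx's FromRow and sqlb's HasFields and fetches from the database; both are stubbed out here, and the `From<(i64, String)>` helper is purely illustrative:

```rust
// Marker trait: carries no behavior, only groups the allowed return types.
trait UserBy {}

#[derive(Debug)]
struct User { id: i64, username: String }
#[derive(Debug)]
struct UserForLogin { id: i64, username: String }
#[derive(Debug)]
#[allow(dead_code)]
struct UserForAuth { id: i64, username: String }

impl UserBy for User {}
impl UserBy for UserForLogin {}
impl UserBy for UserForAuth {}

// Illustrative constructors standing in for sqlx's row deserialization.
impl From<(i64, String)> for User {
    fn from((id, username): (i64, String)) -> Self { Self { id, username } }
}
impl From<(i64, String)> for UserForLogin {
    fn from((id, username): (i64, String)) -> Self { Self { id, username } }
}

// Generic, but only over types marked UserBy (monomorphized per type).
fn get<E: UserBy + From<(i64, String)>>(id: i64) -> E {
    // stand-in for the real DB fetch
    E::from((id, "demo1".to_string()))
}

fn main() {
    let user: User = get(1000);
    let ufl: UserForLogin = get(1000);
    println!("{:?} / {:?}", user, ufl);
}
```

Calling `get::<SomeOtherType>` simply will not compile, which is exactly the "flexible but limited to the typical returns" property described above.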
However, when we do a get, None is not acceptable — it has to be found. Now that we have that, it's going to be relatively easy. We're not going to be able to reuse the base shared implementation; in this case we have something specific to user. We're still going to use sqlb, but you could obviously use sqlx directly or whatever, it doesn't really matter. We do a select on the table, we add the and-where username equals username, and we do a fetch_optional, which returns an Option of E — where E, obviously, is of trait UserBy. Now that we have that, we can replace our todo with Ok(user), and that's it. In fact, we can do our little unit test. I have a snippet, and then we add our use statements for the crate dev-utils and serial, and then we add test_first_ok_demo1 — in this case I add the suffix _demo1 so it's a little bit more specific, and eventually I might have an ok-something-else or whatever. That would be a tokio::test, because it's async, and serial because it touches the DB. Now I do my setup and the fixture username demo1, and then we do the exec with the first_by_username. We do the question mark, which will fail if there is a select issue or a database issue, but it returns an Option of user. What we want to make sure of is that there is a user, so we're going to use context from anyhow with a message, and so we are making sure that we now have a user. Then we can check that the username matches our fixture username, and we can end with Ok(()). We need to import the anyhow Context trait. If we bring up our terminal and do a cargo test, that works completely fine, and then, just to make sure we didn't break anything else, we can do a cargo test of everything, and we get our user test, so everything is good. Now, the cool thing is we could have done a UserForLogin and that would have worked exactly the same.
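The first-vs-get convention described above can be sketched without a database — the in-memory store and the EntityNotFound variant below are stand-ins, not the real model-layer code:

```rust
#[derive(Debug)]
enum Error {
    EntityNotFound { entity: &'static str, id: i64 },
}
type Result<T> = core::result::Result<T, Error>;

#[derive(Debug, Clone)]
struct User { id: i64, username: String }

// Stand-in for the database.
fn store() -> Vec<User> {
    vec![User { id: 1000, username: "demo1".into() }]
}

// `first_*`: None is an acceptable answer.
fn first_by_username(username: &str) -> Result<Option<User>> {
    Ok(store().into_iter().find(|u| u.username == username))
}

// `get`: the entity must be found; not found is an error.
fn get(id: i64) -> Result<User> {
    store()
        .into_iter()
        .find(|u| u.id == id)
        .ok_or(Error::EntityNotFound { entity: "user", id })
}

fn main() {
    assert!(first_by_username("no_such_user").unwrap().is_none());
    assert!(get(999).is_err());
    println!("{:?}", get(1000).unwrap());
}
```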
Press save, and that will work — and we could have done UserForAuth as well, because these three types implement UserBy, and Rust will generate all of these functions, one for User, one for UserForLogin, one for UserForAuth. That is what is happening behind the scenes, which is monomorphization: it creates more code, but it's very productive to use and it's a good code design to have. So now that we have that done, we can start doing our login handler. Let's step back first and look at our login flow. What we have so far is the first_by_username, which can return UserForLogin or UserForAuth; now what we need to do is the encrypt and the validate, and in the next chapter we'll do the token. For that, we go to our main.rs and we create a module, crypt. We make it a crypt/mod.rs because it's going to have some submodules, one for password and one for token. Then we get into our crypt module, and for our encryption and our encoding we're going to use a few crates. First, rand, for the random numbers to create the keys. Then hmac and sha2, for SHA-512, to encrypt our password as well as our token — actually you can use any encryption algorithm that you want, this blueprint doesn't dictate one or the other, but this one is a good one to start with. And then finally we're going to use base64-url. The goal here is the opposite of encryption: the goal is to have a relatively efficient way of encoding and decoding bytes into a safe character set. And then we have the uuid that we already had from the baseline. So now we create our module's code regions, and we're going to have our error — that is what we create right now. I'm going to use my snippets to have my code template, with the body to be coded later; I start with that right away so things are simpler later. And now, because this error might cascade back up into the web
layer, where it will need to be serialized for the request log line, we are going to make it serde Serialize. Okay, now that we have that done, we re-export our Error and our Result. Then we're going to use, from hmac and sha2, the Sha512 — I just normalize everything to 512. Then we create our EncryptContent struct, and that is going to have two properties: the content, which is the clear content, and the salt, which is a String. The reason why both are String is because we want to keep it relatively flexible, such that when we change our encryption scheme we are not tied to any specific types — everything is always kind of a to_string anyway, and you can go from a string to a byte array. Now we're going to do our base encryption — right now everything is going to use this one, but again, eventually you can have different strategies and different encryption for different things. What it takes is the key, and then the EncryptContent, and it returns a Result of String, which is our b64u — base64url. The encrypt here always encrypts into a b64u, and the reason we encode into b64u has nothing to do with security — obviously base64 has nothing to do with security — it's just the fact that you have a safe character set that can be shared, stored, and passed around in a very simple way. That's why we normalize everything into this base64url. The first line will be to destructure our EncryptContent into content and salt. Then we create our HmacSha512 with the key that we got there, and we map it into our error — for that we bring our crypt Error, and we add our first variant, which is for the key, and that would be KeyFailHmac. We don't really need to capture the why for now — typically, if I have a doubt, I don't capture too
much information; if I need it, then I make sure that I capture it. The way you add the content and the salt is this: we do an update with content.as_bytes(), and then we do an update with salt.as_bytes() as well. Then we finalize and b64u-encode: we take the hmac result, we finalize, we get the bytes, and once we have the bytes we do a base64url encode of the result bytes. Now we can return our result, and that's it — that's the basics. Obviously you can use any encryption scheme you want here, so you could use argon2 or whatever. Now that we have that, we create our unit test. We're just going to use super and anyhow, and we're going to use our rand, because we want to create a key. The check is really going to be basic — it's more to make sure that everything works fine, and in this case, for me, it's more to look at what gets generated. So we do a test_encrypt_into_b64u_ok. For that we just put the test attribute — we don't need tokio::test, it's not async. Our setup will just be our key, 64 bytes for 512 bits, and we fill the bytes with random. We create our EncryptContent, and then we call our function. Now, what we would really need for a full test is a fixed key and precomputed results to check against, but right now the only thing we're going to do is a little bit of a fake one: we encrypt it again, and then we check that the two match. So it's not really a real test, but it's better than nothing, and we're making sure that the very basics of this function work well. Now we can bring up our terminal, we do our cargo test with the --nocapture — that works fine — and typically what I've done while developing, to make sure that things kind of make sense, is to just do
a println!, and I can see that it does make sense and the character set looks to be b64u. So again, this is kind of a weak test — you can beef it up later — but at least it's a start and a placeholder. Now what we want to do is create a key that we can store in our config. For that we need a little utility, and the cool thing is we can copy-paste those two lines and use the examples infrastructure of cargo to have our little custom script — that will be a gen_key. That wouldn't work for everything, but for this one it works perfectly fine. So we go to the gen_key, we use anyhow — this is not application code — and we use rand. Then we paste our code to generate the random 64 bytes, and we're just going to print it. But again, because we want a relatively safe character set, what we're going to do is a base64url encode, and that will return a base64url String — and that is what we store as config. We could use plain base64 as well, that would be completely fine, but I'm normalizing everything with base64url. Now that we have that, we can open our terminal and do our cargo run --example gen_key. We have the array here, which we cannot really do anything with, but then we have our base64url-encoded string. So now we go to our config.toml and we add our SERVICE_PWD_KEY and just paste it. Now, it's going to be a little bit confusing in Kubernetes, because in Kubernetes Secrets all the values are base64 encoded as well, but confusion apart, everything will work very nicely. Since we are here, we do the same thing for the token key — I just ran it again to have another one — and we're going to have a duration in seconds, which is going to be 1800, which is obviously 30 minutes. Now, every time, I'm debating: this is not really a secret, should it go to the config map? But I like to have these keys
close together, so that is an internal debate I always have with myself — follow the best practice that you think is right; it could go to the config map section. One thing is very important, though; my strong recommendation is: every time you store a duration in a string or in primitive types like numbers, always suffix it with a unit, because the last thing you want is to think that the duration is in milliseconds when it is actually in seconds — in that case it would never expire. That is why I always suffix it with a unit, such that there is no confusion about the scale of the duration. Now that we have that, we can go to our config, and we add a section for crypt: the key for password, which is going to be a Vec<u8>. The goal is to have the config relatively well typed, so it's simpler to use, and also so we get the validation early — it will fail early if the format doesn't match a type. Then we add the same thing for the token key and duration — and again, even in the property name we put the duration unit — and the type will be f64. Now in our config we need to create those, and ideally what we would like is a get_env that will parse a b64u into an array of u8; we would like the same function for the token key. And in fact, for the token duration we want a get_env_parse that will parse the value into whatever type we want. So one carries the information that it's a base64 into a u8 array, and the other one is just a parse, because we know the landing type here, which will be f64. Let's create these functions. First, our get_env_b64u_as_u8s: we take the name and return a Result of Vec<u8>, and then we can just do a b64-decode of our get_env with the name, such that if there is a missing-env error, that takes care of it. Obviously our decode has its own error, which we don't really need to pass through, so what we do is map the error into our Error::ConfigWrongFormat — so that
is what we're going to use, and we don't really care to know the exact cause; it's usually relatively obvious. We add this error to the crate error — the application crate error. Now we can also do our get_env_parse, which takes the name and returns a Result of T, such that we don't have to do get_env_parse_f64, get_env_parse_i32, and so on, but rather have a T that implements FromStr — that is part of the standard library, and that allows us to parse. Now we get the value with our get_env, which gives us a String, and we can just do a val.parse::<T>(), since T implements FromStr. Same thing here: we don't really want to pass through the error, so we map our error into our Error::ConfigWrongFormat — in this case we don't really care about the cause, most of the time it speaks for itself, and eventually we could add another property to our wrong-format variant saying the expected type; for now we keep it simple like that. Now we need to make sure we import our FromStr from the standard library, and with that, if we go back up, everything compiles nicely, and we get all of these properties from our config — and best of all, they have the right type from the get-go. If they have the wrong format or are missing, the application won't even start, which is what you want. Now we can go to our crypt, go to our mod.rs, and add a pub mod pwd — we're going to create it as a single file, that's fine. We use the crypt Error and Result, the config, and the EncryptContent that we just created, and now we're going to encrypt with the default scheme. We're not going to implement the multi-scheme right now, but we are going to encrypt our password in a future-proof way, such that we can implement the multi-scheme later. The point is, you could go to production with that, and because we will already have the scheme 01, we will be able to add the multi-scheme later.
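Here is a rough, std-only sketch of the get_env / get_env_parse idea from above — the env-variable and error-variant names are approximations of what the video builds, and the b64u decoding is left out since it needs the base64-url crate:

```rust
use std::str::FromStr;

#[derive(Debug)]
enum Error {
    ConfigMissingEnv(&'static str),
    ConfigWrongFormat(&'static str),
}
type Result<T> = core::result::Result<T, Error>;

fn get_env(name: &'static str) -> Result<String> {
    std::env::var(name).map_err(|_| Error::ConfigMissingEnv(name))
}

// Parsing helper, split out so the demo does not need to mutate the env.
fn parse_val<T: FromStr>(name: &'static str, val: &str) -> Result<T> {
    // We don't pass the parse error through; the property name usually suffices.
    val.parse::<T>().map_err(|_| Error::ConfigWrongFormat(name))
}

// One generic accessor instead of get_env_f64, get_env_i32, ...
fn get_env_parse<T: FromStr>(name: &'static str) -> Result<T> {
    let val = get_env(name)?;
    parse_val(name, &val)
}

fn main() {
    // Hypothetical property name, with the unit suffixed as recommended.
    let dur: f64 = parse_val("SERVICE_TOKEN_DURATION_SEC", "1800").unwrap();
    println!("token duration: {} sec", dur);
    // A missing variable fails early, before the app really starts.
    assert!(get_env_parse::<f64>("SURELY_MISSING_ENV_123").is_err());
}
```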
Everything can then be gracefully upgraded. So now we're going to have our encrypt_pwd, which takes the EncryptContent and returns a Result of String. We get the key — for the encrypt_pwd function we know the key is the pwd key; that is part of the responsibility of this function — and now the only thing it does is: let encrypted equal our encrypt method with our key and our EncryptContent. Now that we have that, which is a String, we could do Ok(encrypted). The problem with doing that is that we wouldn't be multi-scheme ready, because the encryption would be completely opaque. We want a little marker such that we know which encryption scheme has been used. For that we do a format!, and we just name the first scheme that we have, 01, like that, with our encrypted part after. So we didn't change anything about the encryption, but we added a little marker that says this is our first scheme. We don't have to say which key it is, but at least we now know with which scheme this password has been encrypted — which means later, if there are multiple schemes, we can match on all schemes when we validate passwords. Okay, now that we have that, we create our validate_pwd, which validates whether an encrypted content matches. It takes the EncryptContent, which is what we have over there, and the encrypted reference password, and it returns, for now, a Result of (), which means that if it doesn't fail, it succeeded. First we encrypt the password as if we didn't have the password reference — we are just calling the same function — and then we obviously compare: if it's equal, it's okay, and if it's not, we return an error, which is PwdNotMatching.
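The scheme marker might look like this — note that the exact `#01#` delimiter shape is an assumption on my part; the transcript only says the first scheme is named 01:

```rust
// Prefix the encrypted content with a scheme marker so passwords can be
// re-validated (and lazily re-encrypted) when a new scheme is introduced.
fn with_scheme(scheme: &str, encrypted_b64u: &str) -> String {
    format!("#{scheme}#{encrypted_b64u}")
}

// Split "#01#xxxx" back into ("01", "xxxx"); None if the marker is absent.
fn split_scheme(pwd_ref: &str) -> Option<(&str, &str)> {
    let rest = pwd_ref.strip_prefix('#')?;
    let idx = rest.find('#')?;
    Some((&rest[..idx], &rest[idx + 1..]))
}

fn main() {
    let stored = with_scheme("01", "AbC123_-");
    assert_eq!(stored, "#01#AbC123_-");
    let (scheme, content) = split_scheme(&stored).unwrap();
    println!("scheme={scheme} content={content}");
}
```

With the marker in place, a later validate can match on the scheme and pick the right algorithm, exactly the graceful-upgrade path described above.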
So let's create that: we bring our crypt Error, add a section for our password, and add this PwdNotMatching. Now that we have that, we've fixed our error, and we have the validation working with our password and our EncryptContent. So we take our encrypt_pwd, and now we can go to the user and add a user-specific method, which is going to be update_pwd. It returns a Result of (), and it takes the ctx as always, the ModelManager, the id of the user, and then the password in clear. In our function we get the db, and we get our UserForLogin by id — this method assumes we already have the id of the user, and we just set the password. Now we build the pwd, which is our encrypt_pwd with an EncryptContent: the content is the password-in-clear to_string, and the salt is the pwd_salt, which is a Uuid, to_string. Since the result is a model Result, we need to go to our model Error, and we follow the same pattern: we add a Crypt variant holding our crypt Error. Now, what I like to do is to have my enum variants in the same order as my Froms — that is just me being a little bit OCD — so I move the From sqlx to the end, and we implement the From crypt Error with our little snippet. Now that we have that, it works fine, and we do a SQL update on the table itself, where the id matches. The trick is, we're actually doing a .data and creating our vector of fields with the pwd and the to_string, and then the .into() creates the Field; then we do an exec and an await, and we return Ok(()). By the way, if we hover over data, just to be clear, it takes a vector of Fields — that is what we have over there: we create a vector, and the into, from the tuples, makes each one a Field. So now that we have that, we can bring up our terminal
and we do a psql on our PG database, we connect to our app_db, and we do a select * from "user", and we see that we do not have any password, obviously, but we have the salts, since we set those in our SQL. Now what we need to do is go to our dev-utils, to our dev_db where we initialize the DB, and create a constant for the demo password, which is going to be "welcome". Within our init_dev_db we add a section: first we init our model layer with our ModelManager and our root Ctx, and then we set the demo1 password. It's going to be extremely simple: first we get the user id of demo1 — we know what it is, it's going to be 1000, but we query it to be safe. We do an unwrap on our first_by_username, and we're okay to do that in this case, because this is one of the dev utils, so if it fails here, that's fine. Then we do a UserBmc::update_pwd with the ctx, the ModelManager, the demo1 user id, and our demo password, and we do the await and the question mark. Now we add the little info trace, saying set demo1 password. So now we do a cargo run (if you have your cargo watch, that will run as well), and we see our trace over there. This means that, back in psql, we have to connect again to the same database: the database was dropped and recreated, so the connection we had above is dead — the cool thing is psql keeps going okay, and we just need to reconnect to the same database, and that will work. Now if we do our select from "user" again, we see our encrypted password with the scheme 01. Okay, so now we have everything and we are ready to do our login. We go to our web layer, in the routes_login. In the api_login_handler we are missing something: we don't have the ModelManager — the app state ModelManager — so
what we need to do is make sure that in the routes over there we are getting the ModelManager, such that we can set it as an app state. For now we just add that there, and we'll fix main later. Then for our router we add with_state of our ModelManager, and then we can add our ModelManager as a state — the Axum State, obviously, of ModelManager. So now we have the ModelManager there, which allows us to call the UserBmc. We can destructure it with State — and that is kind of optional, because there is a derive on the state, so we could get the db directly — but it works this way. Now, obviously, our routes still don't compile, because in our main, which we open right now, we are not giving the ModelManager. We can fix that quickly: in our routes call we pass a ModelManager clone, and that fixes it. Now we go to the implementation — the result still has this fake login; we select it and remove it — and we extract the username and the pwd, which we rename pwd_clear, such that we don't get confused. We get our root Ctx, which we obviously need, because we cannot use a user Ctx for login — we need the system user. Now that we have that, we can get the user: we want the UserForLogin, with our first_by_username; we do the await, and then we do an ok_or with the error LoginFailUsernameNotFound. Now, there is a very important point here: we do not want to log the username anywhere — we want to forget about it. The reason is that sometimes users enter their password as their username, so it's very important that when there is a wrong username that is not found, you don't capture it, because you don't want passwords inside your log files. In this case we just do username-not-found, and we won't be able to debug why it's not found, which is completely fine for the safety of the user. Now that we have that, we do our imports, and then the first
thing we have to fix is the question mark, and that is because it comes from our UserBmc, which is our model layer, while in our web we're using the Result from the web layer, with a web Error. For that, as usual, we bring the web Error and we add the section for modules: we add our Model variant, and then, same thing, we add our From for the Model variant, and we are done. Next we add the login errors that we have. Now everything has been fixed, and we can get our user id, because we are going to use it later. Then we use a let-else pattern to do a guard on the password: if we don't have a password in user.pwd, we return an error, which is going to be LoginFailUserHasNoPwd — and we can give the user id, because we have it and it doesn't carry any critical information. We go to our web Error and add this variant, and we also add the next one, which is going to be LoginFailPwdNotMatching — like this we have both. Now, one thing I like to do in my enums for my errors: I like to use variant structs rather than variant tuples — this way it's extremely clear what a value is. There's an exception to that, and that is when the variant's only goal is to hold the value under the name of the variant; that is true for all of the errors we are encapsulating, like the Ctx extractor error that we'll talk about later, or the module errors — in those cases a tuple variant is actually a very good fit. So now, back to our code: everything works fine and we are ready to do the validate. For the validate_pwd, we call the validate, we create our EncryptContent with the salt, which is the user pwd_salt, and the content, which is a pwd_clear clone.
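The let-else guard on the password can be sketched like this — the struct and error variant below are simplified stand-ins for the real web-layer types:

```rust
#[derive(Debug)]
enum Error {
    LoginFailUserHasNoPwd { user_id: i64 },
}

struct UserForLogin { id: i64, pwd: Option<String> }

fn check_has_pwd(user: &UserForLogin) -> Result<&str, Error> {
    // let-else: destructure, or bail out with an error, in one guard.
    let Some(pwd) = user.pwd.as_deref() else {
        return Err(Error::LoginFailUserHasNoPwd { user_id: user.id });
    };
    // Past this point, `pwd` is the plain &str reference password.
    Ok(pwd)
}

fn main() {
    let user = UserForLogin { id: 1000, pwd: Some("#01#abc".into()) };
    assert_eq!(check_has_pwd(&user).unwrap(), "#01#abc");
    let no_pwd = UserForLogin { id: 1001, pwd: None };
    assert!(check_has_pwd(&no_pwd).is_err());
    println!("let-else guard ok");
}
```

The nice property is that after the guard, the rest of the function works with a plain binding rather than an Option.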
Then, for the reference password, that is what we got from the let-else: we have it as a pwd variable, and it's a String, so we can give a reference of pwd. Now, we could put a question mark over there, but that would return a crypt error, so what we do is map it into our error, LoginFailPwdNotMatching — with our user id — and then we put the question mark. Now that we have that, we can bring up our quick_dev: on our backend terminal we do a cargo run of our server (or you can do the cargo watch), and on the quick_dev terminal we do the cargo watch of quick_dev — we comment out the do_get of index.html, we don't really care about it. We can see our response body, which is success: true; we can see that it set the fake token over there, so everything works fine — this was obviously for our api_login POST. On the server side we have our trace, which is our api_login with our request log line, and that is for the api_login_handler, so everything is working fine. Now, if we put in a wrong password and press save, we get a SERVICE_ERROR over there — which is what we want on the client, because for now we didn't map it to what we want to send; we will do that later. By default the client gets minimum information, but on the server side we still get the whole information about why it failed, and we have that in our request log line. So now, let's say we want a more correct error on the client side — not as detailed, but saying that it's a login fail, without giving the reason. For that we go to our web error and we look at these variants — that is our server error — and we see that we have a client error; that is what we did in the rust Axum course, where we split the client error, what is sent to the client, from the server error. And so, if we go there, we see that we have
this function, client_status_and_error, and we have this match that translates the server error into whatever we want to send as the client error, with the status code. Here we have one for the Ctx extractor — we'll talk about that in the next chapter — and then we have our fallback, which is our SERVICE_ERROR; that is why we always get a SERVICE_ERROR if we haven't added anything there. The point of this design is that the lazy path is a safe path: if we don't put in extra work, we don't put the system at extra risk. Now we can be explicit — we can say we want to send LOGIN_FAIL. For that we split our section between auth and login, and we add that: basically, if the error matches any of those variants — and we don't care about the values — then we return the status code FORBIDDEN and our ClientError LOGIN_FAIL. In ClientError we already have our LOGIN_FAIL, so we are all set. We press save again — make sure you restart your server — and now we get a LOGIN_FAIL. That's it: very simple and very safe. And we still have our server with all the information, and the request log line with all of the information. That is it for this chapter. So, what we did in this chapter: we did the UserForLogin, and we did the encrypt and the validate; in the next chapter we'll do the create-token, which we set into a cookie, and then we'll update our auth-resolve to validate our token from the cookie. So now let's go to chapter 8, which is our secure token and web authentication. First, another view of what we have done: so far we have the UserForLogin, the encrypt, and the validate. In this chapter we do the create-token, and that is about creating a token of this format: it has three parts — username, expiration, and signature — and all of these parts will be b64u encoded.
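Before moving on, the client_status_and_error design from the previous chapter can be sketched std-only — the StatusCode enum below stands in for http::StatusCode, and the variant set is abbreviated:

```rust
#[derive(Debug)]
enum Error {
    LoginFailUsernameNotFound,
    LoginFailUserHasNoPwd { user_id: i64 },
    LoginFailPwdNotMatching { user_id: i64 },
    // ... other server-side variants
    Other(String),
}

#[derive(Debug, PartialEq)]
enum ClientError { LoginFail, ServiceError }

// Stand-in for http::StatusCode, which axum re-exports.
#[derive(Debug, PartialEq)]
enum StatusCode { Forbidden, InternalServerError }

impl Error {
    fn client_status_and_error(&self) -> (StatusCode, ClientError) {
        match self {
            // -- Login: send a deliberately vague error to the client.
            Error::LoginFailUsernameNotFound
            | Error::LoginFailUserHasNoPwd { .. }
            | Error::LoginFailPwdNotMatching { .. } => {
                (StatusCode::Forbidden, ClientError::LoginFail)
            }
            // -- Fallback: the lazy path is the safe path.
            _ => (StatusCode::InternalServerError, ClientError::ServiceError),
        }
    }
}

fn main() {
    let (status, client) = Error::LoginFailUsernameNotFound.client_status_and_error();
    assert_eq!(status, StatusCode::Forbidden);
    assert_eq!(client, ClientError::LoginFail);
    println!("fallback: {:?}", Error::Other("db down".into()).client_status_and_error());
}
```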
The reason is that we want this token to be as portable as possible. Right now we're just going to put it in a cookie, but later it could be used for a reset password, for example, or even for signing URLs. In fact, the first part, where we say username, is more an identifier of something: in our case — the case of a web token — it's going to be the username; it could be the user id, or the user UUID, or whatever, it doesn't really matter, but that is the identifier of the entity you want to have a token for. So with a web token, the whole thing is about having a token for a user session, for user authentication, but we can use the same scheme for other things. For now, since we are talking about the web token, it will be our username, and we put that in a cookie, auth-token, and it's going to be HttpOnly, meaning the JavaScript won't have access to it. Our strategy is to completely decouple the authentication part from the authorization part, meaning the token doesn't have to carry any payload about the authorization of the user — you just have to make sure that the user has been authenticated, and that nobody can tamper with these parts, the username or the expiration. Then, where the token gets validated, on each request, is our auth-resolve middleware, the ctx-resolve; what it does is call the first_by_username and get a UserForAuth struct, which has the id, the username, and the token_salt — like our UserForLogin but without the password and the pwd_salt, because we don't need them. For any of the requests that are not statics, we have the auth-resolve middleware — you could also put it in front of the static requests, but right now it's for everything that is not static files — and the point is that even if it doesn't create the Ctx, it will not fail; it is safe to put in front of anything, api_login or whatever. So the way the auth-resolve works is: once you have the request that comes in,
the auth-resolve gets the auth token from the cookie, it extracts the three parts, and then, with the username, it gets the UserForAuth from the database, which gives us the token_salt. Now, with the token salt of the user and the token key at the application level, it can validate the signature — basically redoing the signature of username and expiration and making sure it comes back to the same one. When that passes, it means nobody modified the username or the expiration. Then it validates that the expiration is still valid — basically, that now is not greater than the expiration. Now, very important: we're going to create a new token, because otherwise the session of the user would expire after a short amount of time, or you would have to make the expiration very big, which is not very safe. So what you do is minimize the window of the expiration, and then, every time the user requests something from the server, you update the token, which updates the cookie on the browser side — that is very standard practice. When we create the new token with a new expiration, since we have the token salt, we can sign these two parts, have our signature, and send it back. Now, it's important to note that this expiration is different from the expiration of the cookie: in the cookie you will have an expiration that usually is the same or a little bit greater than the token expiration, but the cookie expiration — while it's important to make sure the browser keeps the token long enough that the user doesn't have to re-log-in all the time — doesn't really matter from a security standpoint, because the source of truth is what is in the token, this expiration. That is what determines how long this token is valid for, and nobody can change it, because otherwise the signature wouldn't match. And again, the cool thing about this pattern is that this token is quite generic.
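The three-part token shape can be sketched like this — the b64u encoding of each part and the HMAC signature are stubbed out; since every part is b64u in the real code, a '.' can never appear inside a part:

```rust
// ident.exp.sign — each part is b64u-encoded in the real code.
#[derive(Debug, PartialEq)]
struct Token { ident: String, exp: String, sign: String }

fn token_to_string(t: &Token) -> String {
    format!("{}.{}.{}", t.ident, t.exp, t.sign)
}

// Parse back into the three parts; None if the shape is wrong.
fn parse_token(s: &str) -> Option<Token> {
    let mut parts = s.splitn(3, '.');
    Some(Token {
        ident: parts.next()?.to_string(),
        exp: parts.next()?.to_string(),
        sign: parts.next()?.to_string(),
    })
}

fn main() {
    let t = Token {
        ident: "ZGVtbzE".into(),   // b64u("demo1"), illustrative
        exp: "MjAyMy0x".into(),    // b64u of an RFC 3339 expiration, illustrative
        sign: "c2lnbg".into(),     // stand-in for the HMAC-SHA512 signature
    };
    let s = token_to_string(&t);
    assert_eq!(parse_token(&s), Some(t));
    println!("token string: {s}");
}
```

Validation then only needs the ident and exp parts plus the user's token_salt and the application token key to recompute and compare the sign part.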
authentication but it can be used for anything now the first thing that we're going to have to do is manage all of the time formatting and parsing and so on and the base64 as well so those are good candidates to be part of our application utils module so we're going to start with that now we are going to go to our main and create our utils module and as your application code grows that will grow quite a bit those are the generic utilities that don't really fall into a specific module so time and base64 are good candidates for that obviously but eventually you are going to have more so we're going to make it a utils/mod.rs so now we are going to drill down into it and there are a couple of libraries that we're going to use we already have base64-url and we're going to add the time library you could use chrono but standardizing on time is the way we'll go here because we are in the utils module we are going to follow our best practice we're going to have code regions for modules and we're going to have our own error so like this we have our clean utils Error we're going to re-export it as usual we're going to create the files and then with our little code template we're going to create the placeholders so that everything compiles nicely now that we have that out of the way we are going to use the time library with a well-known profile of ISO 8601 which is going to be Rfc3339 and then we're going to use format_description and OffsetDateTime so now we can create a code region for time and a code region for base64.
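Before building the utilities, the sign-and-validate scheme described above can be sketched end to end. This is a minimal std-only sketch: `DefaultHasher` stands in for the real HMAC-SHA-512 signing, unix seconds stand in for Rfc3339 dates, and all names here are illustrative, not the repo's actual API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::time::{SystemTime, UNIX_EPOCH};

// Stand-in signature: the real code signs ident + exp with the user salt
// and the application token key using HMAC-SHA-512 (NOT a plain hash).
fn sign(ident: &str, exp_sec: u64, salt: &str, key: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    (ident, exp_sec, salt, key).hash(&mut h);
    h.finish()
}

fn now_sec() -> u64 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs()
}

// Re-sign with the same salt and key, compare signatures, then check expiration.
fn validate(ident: &str, exp_sec: u64, sig: u64, salt: &str, key: &[u8]) -> Result<(), &'static str> {
    if sign(ident, exp_sec, salt, key) != sig {
        return Err("signature not matching"); // ident or exp was tampered with
    }
    if now_sec() > exp_sec {
        return Err("token expired");
    }
    Ok(())
}

fn main() {
    let (salt, key) = ("user-salt", b"app-token-key".as_slice());
    // Keep the expiration window small; the middleware refreshes it on each request.
    let exp = now_sec() + 300;
    let sig = sign("demo-user", exp, salt, key);
    assert!(validate("demo-user", exp, sig, salt, key).is_ok());
    // Tampering with the ident breaks the signature.
    assert!(validate("other-user", exp, sig, salt, key).is_err());
    println!("token flow ok");
}
```

The point of the sketch is the decoupling: validation needs only the token parts, the per-user salt, and the application key, nothing about authorization.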
eventually some of these utilities might go to their own submodules so now we're going to write a couple of small functions the first one is now_utc which will just return an OffsetDateTime and will just call OffsetDateTime::now_utc and then we're going to have a format_time this one is very simple as well but quite useful because that is where we normalize which ISO profile we use in this case Rfc3339 so the application code just calls format_time and we normalize everything with this profile now we are again breaking our rules here by doing an unwrap because we want this function to be as simple as possible and just return a String right now I think it cannot fail because this time is time that we produce anyway but I'm going to mark it with a TODO to make sure that it is safe and won't fail now on TODOs I usually have two levels I have TODO where I need to look at it at some point and then if I want to increase the priority I call it FIXME so those are the two types of priorities FIXME means the application shouldn't run if we don't fix it and TODO is something that really needs to be looked at but the application could run without fixing it obviously that is a personal practice make your own as long as you are consistent it's completely fine and now we're going to do another one which is very specific to what we need which is now_utc_plus_sec_str so we take now UTC plus some seconds and return a String it takes a sec as f64 and returns our format so for that we use the time API with now_utc plus a Duration and then we do a format_time which is obviously what we have over there such that again everything is very normalized it's starting to look pretty good and we're going to have one more which is going to be parse_utc and again we want to make the
application code as simple as possible so we are going to have parse_utc which takes a moment which is just a &str and it will return a Result of OffsetDateTime because this can fail quite a bit and obviously the Result will be the Result type of the utils module because we have our own Result and Error for utils so now we have our OffsetDateTime and we just parse with the same profile that we have over there this way everything is super consistent now obviously the parse has a different error so we're going to map the error into our utils Error which is going to be DateFailParse we give the moment as a String and we don't really care about the reason we're just capturing the string that caused the problem and that will be enough if you need more information those things can be changed so now we're going to create DateFailParse we bring in our Error we're going to have a section for everything time related and we add it there in this case I'm making it a tuple variant because it's clear enough but it could be a struct variant as well so now it's pretty cool we have all these functions that manage our time such that everything is very well normalized now there's a little temptation of adding an annotation like inline or inline always but all of the literature I've read says that the compiler knows better so except if you really know what you are doing don't add those annotations let the compiler make the right choice now let's do our base64 functions first the b64u_encode which is just a pass-through to the base64-url encode and then we are going to have the decode and the decode can fail obviously so we want to have a Result and then we are going to have our decoded string we're going to use the base64-url library to decode which returns a Result of Vec of u8 with a decode error and we don't really care about what the
error is we just want to capture that it failed so what we're going to do is call ok which turns it into an Option so now we have an Option of Vec of u8 and because we want to have a Result of String we're going to use and_then we take our Vec of u8 do a String::from_utf8 and turn that as well into an Option so that's why we use the and_then so now that we have our Option of String if we have None we want to return an error so again the strategy is that we do not care about the error we just care to say whether it failed or not so we do an ok_or and return our own error which is going to be FailToB64uDecode and then we do the question mark now obviously we need to create this variant and in this case we are not going to capture the text we could but because it's going to be used everywhere I don't want to capture these things because then they could be logged so now that we have that we get our string back and this is when I remove my todo and do an Ok of the decoded string and now if I go to my crypt module and its mod.rs we can start doing our token submodule so we're going to create it and get into it and that is going to be where we have the whole logic about the token we are going to parse it create it and validate it so we're going to use the config because we need the key we're going to use the crypt Error and Result and the first code region is going to be our token type and then we will have our web token generation and validation so that will be for the web token but in fact again the token is quite generic so we're going to have a code region which is going to be private for token generation and validation in this code region we are not going to know the specifics of a web token and all these kinds of things and in the web token region we will know which keys and salts to take to create the token so that is how
we are going to split it and eventually that can go into submodules right now it's going to be relatively simple so we can have everything in one file okay so let's create our token it's going to be a struct Token and the string format will be what we have seen before the three parts the first one will be the identifier which for the web token will be the username then the expiration as a String those two will be the decoded ones and then we have the sign_b64u String so the first one is the identifier which will be the username the second one is the expiration which will be in Rfc3339 and then we have the signature which will be in base64url so the reason why the signature stays base64 while the rest is decoded is that the signature is just a black box we're just using it to match and the reason why base64url matters is that it can be put anywhere without weird characters but outside of that we just need to match it against a previous signature so we only care about the b64u form when we do the match as long as we are comparing the same encoding so now that we have that we are going to add two FIXMEs that we are going to do later the FromStr that will do the parse and the Display that will do the to_string now if we go to our private token generation and validation we're going to have a _generate_token and in this case I'm starting the function with an underscore because it's private and because it has the same name as a public one that is one best practice that I have and it's really a personal preference I don't always put underscores on private functions except when their name matches the public one other conventions would say generate_token_inner sometimes but that might carry meaning as well so we're going to follow this best practice which is when we have the same name we use the underscore so the goal of the
function is to return a Result of Token and it will take an ident which would be the username in the case of a web token it will take a duration_sec so again make sure to always have the suffix for the unit of a duration it's going to take a salt but it doesn't have to know if it's a user salt or another salt it just takes a salt and then it will take a key and again it doesn't have to know where the key comes from it just takes the key and because we need to sign and so on we are going to do the implementation later and we're going to have another function which is going to be _token_sign_into_b64u which will return a Result of String in base64url format it will take the ident the expiration the salt and the key and for this one we're going to add a comment which is create signature for token parts and salt so now that we have that we are going to create our _validate_token_sign_and_exp which will take the origin token the salt and the key such that we can re-sign it it will first validate that the signature is okay and then it will validate that the expiration is not greater than now so now that we have these placeholders we can actually do the full implementation so generate_web_token is going to return a Result of Token it's going to take the user username and the salt then we get the config and now we can just call _generate_token with the user the duration from the config the salt and the key and then we can do the same thing for validate_web_token which is going to take the origin token and the salt the user we get from the origin token and the expiration as well so we get the config and then we call our _validate_token_sign_and_exp with the origin token the salt and the key that we get from the config so that's it that is our web token generation implementation now the reason why we delayed the implementation of the private functions is because we
need to implement the Display and FromStr so let's start with the Display which basically will give us the to_string so we're going to do an impl Display from the standard library for Token we are going to do it the lazy way and here's a little technique that I use when I do parsing or serialization in this case it's relatively simple but sometimes you want to check and have a very quick iterative way so below in the same file I'm going to create my unit test and I'm going to do a test_token_display okay and now for the fixtures I'm going to create a silly token with whatever strings I want it doesn't really matter and then I'm going to exec and I do my debug print with my little prefix over there that will print whatever the Display is implementing now that I have that I can bring up my terminal and I'm going to do a cargo watch quiet clear execute test with my test_token_display and the nocapture flag so if I run that it's going to fail because we have our todo in our implementation so that fails but now if I write hello for example we have our print and hello so now I can iterate very fast like that I can just write my code press save and I see where I'm going and then eventually I do the asserts and remove the print so in this case we're going to do the write we're going to have our three parts and we're going to do the base64 encode of the first two and just take the last one as is make sure you import the function b64u_encode and now if you press save then you get our print over there and you can check that it kind of makes sense and if we are okay with it we're going to take it copy it go to our unit test and add a fixture with our token str and we're just going to do an assert_eq of the two we press save it passes and then we can remove our println and then that's it that is what I do
a lot on these little functions that have parsing or serialization or whatever such that it helps me code faster and at the end I have a free unit test now to be exact this is not exec only it's exec and check if we really want to be OCD here okay so the next one that we're going to implement is the FromStr which is going to give us the parse this one is going to be a little bit more advanced so we're going to do impl FromStr for Token we do the implement missing members our error will be our crypt Error so we will add some and then if we go to our from_str we're going to actually add the errors right now so we're going to have an error section for token and those will be TokenInvalidFormat TokenCannotDecodeIdent and TokenCannotDecodeExp so now that we have that we're going to rename s to token_str just to be clear and we are going to first split it on the dot and if we don't have three elements we are going to say invalid format we could use a regex but in this case that works perfectly fine and then we're going to do this one-liner just to extract the three elements and then we're going to return an Ok of Self with the decode of the ident and we're going to map it to the error that we just created make sure to import the decode and then we are going to do the same thing for the expiration date and then we take the signature which is in b64u format and now we can remove the todo and the semicolon and that's it now just for completeness we are going to do the unit test I could have done that live as well but we're going to have our test_token_from_str okay for our fixtures we're going to take what we had before we know where we want to land and then we're going to exec it so we are taking the token string and parsing it now that is a nice thing with this API because we have the Token type everything works and now we are going to do our check and we could do an assert_eq like that with token and fx_token if
we press save we're going to have an error and the reason why we have an error is because Token doesn't implement PartialEq and doesn't even implement Debug which is needed when you do that so if we bring up our Token code from above we could add a derive of Debug and PartialEq and if you press save that will work but now the thing that I don't really like and that might just be me is that PartialEq is a trait and macro that you use in your normal code when you want to compare structs so it's a useful thing to have in normal code but in our case we are not using it in application code so it could be misleading because we could think that we are using it to compare tokens which we are not we are just using PartialEq for unit tests so I'm not really a big fan of that Debug is a different story because the name Debug is very clear that the intent is for debugging so I'm okay to have Debug in my code because it might be useful for prints or tests and so on but PartialEq is misleading because it's something that might be used in a normal code path so that by the way might be just me so what I like to do is keep the Debug but remove the PartialEq because I'm not using it in my normal application code and now the trick that I use and that might look very ugly to some and if it looks ugly to you don't do it use your PartialEq it's completely fine but the trick that I like is to use the Debug formatting as a comparison frame so I'm going to format the token and format the fixture token with the debug notation and that will go 99 percent of the way but this is really a personal preference do what you or your team think is right the important thing again is to write the test and now that we have this unit test we can bring up our terminal and do a cargo test on everything crypt and then that's it we have our three passing tests of
token display and token from_str okay so now that we have this Token type done we can go back down and implement our token generation and validation the first one is going to be _generate_token and we have our ident the duration the salt and the key so we have everything we need we are going to compute the first two components I'm going to take the ident which is going to be a String we're going to take the expiration date by calling our time utility to add the duration in seconds that will be a String as well and now we're going to sign the first two components by calling _token_sign_into_b64u and now we can just return our Token which will be our ident expiration and sign_b64u make sure to import the function and now this function is done so we can implement the _validate_token_sign_and_exp and for that we are going to add a couple more error variants TokenSignatureNotMatching TokenExpNotIso and TokenExpired so those will be the three types we're going to validate the signature first the way we're going to do it is we sign again with the origin token ident the origin token expiration and then the salt and the key that are passed in because those are not part of the Token struct obviously and then we compare this new signature that we just computed with the one which is in our origin token so as we can see we don't need to decode the b64u we are just matching it and if it's not equal then we return our error because it means that someone changed something there so now that we have that we're going to validate the expiration and to validate the expiration we do a parse_utc which is our utility on the expiration date because now we know that it hasn't been tampered with if the parsing fails we return our crypt error that says the token expiration is not ISO so there's a formatting problem that's why we have the
question mark and then we get the now_utc we compare the two dates and if it's not okay we say TokenExpired and if it's okay everything is good make sure to import the utils and so now the only thing that we need to do is our token sign so we have all of the arguments that we need and again we don't know about the web token in this case these are just generic arguments for the token scheme so we're going to create the content to be signed we keep the same format that we had in the design which is the two parts each base64 encoded and then we're going to call our encrypt we use the same encrypt method that we have if you want to be more performant you could also use just a SHA-512 hash without the HMAC that is possible as well and in this case I would recommend to still use a salt and even the key as a salt in a way to keep the same model regardless of the implementation details of the encryption so right now we use the same one we give the key and now the EncryptContent which is the content in clear and the salt make sure to do all the imports and now we can just return the Ok signature and by the way on the content my strategy was to still encode each part as b64url that is kind of optional it could have been just the ident and the expiration but this way if I wanted to just check without even decoding that would have been possible in our case we are decoding anyway so it's not really useful and it is important to note that this is completely unrelated to the final b64url encoding of our signature and that is what you want because again you want the signature to be as portable as possible now that we have that we're going to write some unit tests and again the goal is not to have 80 percent coverage at the beginning but to have the key unit tests to make sure that things kind of make sense so we're going to test the validate web token okay our fixtures the user
doesn't matter the salt doesn't matter the duration we're going to make it 20 milliseconds we're going to see why later and then we're going to have our token key which I'm going to take from the config then we're going to generate our token as a fixture in this case there would be an argument to say we should have a full token already generated and hardcoded there but so far this will work fine we're going to exec and what we're going to do is sleep for 10 milliseconds only so we are not past the 20 milliseconds and everything should be fine then we are going to validate our web token and the check is just a question mark so we're going to import the thread from the standard library and the Duration and this is not an async function it's just a normal test so if we look at our types over there those are all our fixtures we have our token key as a Vec of u8 and then we have our token as a Token type so everything is working very nicely and now we're going to do the expiration test and that is very important to do early because you don't want to think that your validation works when it never actually checks the expiration date for example so we're going to check that right now we're going to follow our convention we're going to do underscore err and add the suffix expired now we're going to change the duration to 10 milliseconds and then we are going to sleep for 20.
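The ok and expired test setups just described can be sketched std-only; here unix milliseconds stand in for the real Rfc3339 expirations, the validate function only checks expiration, and the sleep margins are widened a bit so the sketch isn't timing-flaky. All names are illustrative.

```rust
use std::thread;
use std::time::{Duration, SystemTime, UNIX_EPOCH};

fn now_ms() -> u128 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_millis()
}

// Stand-in validate: only the expiration check, which is what this test targets.
fn validate_exp(exp_ms: u128) -> Result<(), &'static str> {
    if now_ms() > exp_ms {
        return Err("token expired");
    }
    Ok(())
}

fn main() {
    // -- Ok case: token outlives the sleep, validation passes.
    let exp = now_ms() + 500;
    thread::sleep(Duration::from_millis(50));
    assert!(validate_exp(exp).is_ok());

    // -- Err case: token expires during the sleep, validation must fail.
    let exp = now_ms() + 20;
    thread::sleep(Duration::from_millis(100));
    assert!(validate_exp(exp).is_err());
    println!("expiration tests ok");
}
```

Writing the expired case early is the point: it proves the validator really looks at the date instead of silently passing everything.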
so now that should fail so what we're going to do now in our check is we don't want to fail the test when it errors because we want it to have an error so we're going to replace the question mark with the assert macro which just asserts a boolean and within it we're going to use the matches macro and the matches macro will match the result against this enum variant and if we don't have this error we print this message so very simple so now if we do a cargo test crypt we see that everything passes sometimes things pass on the first go which happens a lot in Rust so normally I would change the code to make the unit test break and then put it back to make it pass but in this case you can trust me I've already done that okay so now that we have the token done we're going to go to our web mod.rs and add two functions set_token_cookie and remove_token_cookie so set_token_cookie is going to return a Result of unit it will take the cookies the username and the salt we generate the web token and we create our new cookie with the AUTH_TOKEN name and we set it as HTTP only so that will increase the security JavaScript won't be able to access it and then the very important part is to set the path to root otherwise by default it will take the path of the api/login folder and then you are going to spend quite a bit of time trying to debug why it doesn't work or why it doesn't log off and so on and then we do our cookies add now we're going to import so the Cookies is from tower-cookies we're going to import our generate_web_token and the Cookie type is also from tower-cookies so now the generate_web_token returns a Result which is not the same as this one because it's from the crypt module so same as before we're going to open our web Error and in the modules section we're going to add Crypt of crypt Error and then we add our From impl with the little code template Crypt crypt and done and then that's it
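Going back to the test check for a second, the assert!/matches! pattern can be shown in isolation; the error enum and function here are illustrative stand-ins for the crypt module's types.

```rust
#[derive(Debug)]
#[allow(dead_code)]
enum Error {
    TokenExpired,
    TokenSignatureNotMatching,
}

// Stand-in for validate_web_token in the expired scenario.
fn validate(expired: bool) -> Result<(), Error> {
    if expired {
        Err(Error::TokenExpired)
    } else {
        Ok(())
    }
}

fn main() {
    let res = validate(true);
    // assert! takes a boolean; matches! checks the Result against one specific
    // variant, and the message tells us what we actually got if it's another error.
    assert!(
        matches!(res, Err(Error::TokenExpired)),
        "should have matched `Err(Error::TokenExpired)` but was {res:?}"
    );
    println!("matches! check ok");
}
```

Compared with a plain `?`, this keeps the test alive on the expected error and fails loudly only when the wrong variant (or an Ok) shows up.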
we have that fixed and now we can go to our routes_login and in our api_login_handler we're going to go down to where we have the FIXME and we're just going to replace that line with our web set_token_cookie with the cookies the username and the salt and now we can change the comment to set the web token so let's go back to our request flow to know where we are what we just did is the login handler this one so now we set the real token over there the fully signed one and now what we need to do is make sure that in the auth resolve we take it and validate it so that is what we're going to do now and then eventually we'll be able to do the API RPC with auth require and that won't really need to be changed because we already have the logic over there so let's go back to our auth resolve and for that we're going to go into the mw_auth.rs so we have the ctx_require then the ctx_resolve and then the Ctx extractor so the ctx_resolve and the Ctx extractor work as a pair in a way the ctx_resolve will resolve the Ctx meaning it will create it after it validates the auth token and then it will put it into the request extensions and then the Ctx extractor is going to pluck it out so we're going to see that later and we even split out the Ctx extractor result and error so that the main web Error doesn't have to have Clone and all of these things that are specific to the extractor only so that's a nice way of splitting out the extractor error and we're going to see that later so now if we look at our ctx_require it is extremely simple you just take the ctx and do a question mark and the result is a Result of the web layer a web Result of Ctx and obviously the content is the Ctx so this middleware just checks that there is no error in the ctx and otherwise it will return early and then we have our ctx_resolve and that is where we're going to do the heavy lifting of the token validation so right now the implementation is
a fake one but we're going to change that this is the expensive part where we are going to use the ModelManager to access the database to get the UserForAuth then obviously we have the tower Cookies so that we can set the cookies and then we have our request as mutable because we are going to use it to set the CtxExtResult so now to better understand how it works let's look at the Ctx extractor it's just an Axum extractor for Ctx and inside what we're doing is we take the parts extensions and do a get on the CtxExtResult so that is what the ctx_resolve puts into the request and that is the same pattern that the Cookies extractor from tower-cookies uses you do the expensive part once then you put it into the request as an extension and then you have the extractors that just pluck it out so when we do the get of CtxExtResult that returns an Option and if there is no CtxExtResult we want to fail early and return an error so we're going to do an ok_or and then we have our web Error with a CtxExt variant and then we have this error CtxNotInRequestExt so this way we are capturing the error in case there is no CtxExtResult in the request extensions and we do our question mark to return early with the web Result and then if we pass we clone which clones our CtxExtResult which is what we want because we want to return a Result of Ctx and then we map the CtxExtError into a web Error so that is what we're doing over there so like this everything is clean it's returning a web Error for this extractor and the web Error has a CtxExt variant and if we look at it we can see that it's just a wrapper on top of our CtxExtError so it's a little bit of a mouthful but it's actually relatively flexible and we split away all of the extractor Ctx information while keeping all of the errors allowing the ctx_resolve to
not fail but still capture all of the possible errors that we might have so now that we have that we can look at our CtxExtResult and our CtxExtError those are just the variants of this enum that is the error and this is not even a full error with the whole boilerplate we just have the Clone and we make sure to have the Serialize because those will need to be serialized for the server request log line and then on top we just have a type alias for the Result of Ctx with this error so now that we have that we can implement our ctx_resolve the way that we're going to do it is we are going to have a private inner function following the pattern that we have which is _ctx_resolve and the goal of this function will be to take the ModelManager because we will have to get the UserForAuth then the Cookies from tower-cookies and it will return a CtxExtResult which the mw_ctx_resolve middleware will put into the request this way we can use the question mark in this function and the parent one the ctx_resolve will do the rest of the logic so here's what we're going to do in this function we are going to get the token string parse the token get the UserForAuth from the database validate the token and then update the token in the cookies again we want to update it such that the session doesn't expire and we can keep the expiration window relatively small and then we create the CtxExtResult such that our ctx_resolve can put it into the request extension so it will not fail it will not break but it will capture all of the errors so to get the token it's extremely easy we get the cookies then we do a get of AUTH_TOKEN we map it to a String and that returns an Option of String and if we have None we return TokenNotInCookie with a question mark to return early so far so good so that gives us the token as a String
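The _ctx_resolve steps just listed, get the token string, parse it, fetch the user, validate, then build the result, can be sketched as a std-only pipeline; the CtxExtError variants mirror the ones described in the video but the in-memory user lookup and the u64 "ctx" are illustrative stand-ins for the real cookie and database code.

```rust
#[derive(Debug, Clone, PartialEq)]
enum CtxExtError {
    TokenNotInCookie,
    TokenWrongFormat,
    UserNotFound,
}

// Ok carries a stand-in user id instead of a real Ctx.
type CtxExtResult = Result<u64, CtxExtError>;

// Each step maps its failure into a CtxExtError, so the middleware never panics
// and the caller can still serialize whatever went wrong into the request log.
fn ctx_resolve(cookie_token: Option<&str>, users: &[(&str, u64)]) -> CtxExtResult {
    // 1) Get the token string from the cookie.
    let token_str = cookie_token.ok_or(CtxExtError::TokenNotInCookie)?;
    // 2) Parse it (three dot-separated parts in the real scheme).
    let mut parts = token_str.split('.');
    let (Some(ident), Some(_exp), Some(_sig), None) =
        (parts.next(), parts.next(), parts.next(), parts.next())
    else {
        return Err(CtxExtError::TokenWrongFormat);
    };
    // 3) Get the user (stand-in for UserBmc::first_by_username).
    let (_, user_id) = users
        .iter()
        .find(|(name, _)| *name == ident)
        .ok_or(CtxExtError::UserNotFound)?;
    // 4) Signature/expiration validation and the cookie refresh would go here.
    Ok(*user_id)
}

fn main() {
    let users = [("demo1", 1000u64)];
    assert_eq!(ctx_resolve(Some("demo1.2024.sig"), &users), Ok(1000));
    assert_eq!(ctx_resolve(None, &users), Err(CtxExtError::TokenNotInCookie));
    assert_eq!(ctx_resolve(Some("bad"), &users), Err(CtxExtError::TokenWrongFormat));
    println!("ctx_resolve sketch ok");
}
```

The shape is the point: the inner function uses `?` freely, and whatever it returns, Ok or Err, gets stored in the request extensions rather than failing the request.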
now we are going to parse the token so we are going to shadow the token variable and we're going to declare the Token type and we are just going to do a token parse for now with a temporary unwrap which we obviously do not want we're going to fix that shortly but I just wanted to add a little note so in this case we start from a string and because our Token type has a FromStr we can use parse and the compiler will know that the parse is supposed to return a Token now there's another way of doing that which is to use the turbofish where we don't put the type over there on the left but we put it over here and that is completely fine so that is really a preference at this point in my case sometimes I like to have the type on the left but I'm not too dogmatic about it so now obviously we do not want the unwrap that would go against everything we are trying to do so we could put the question mark but in this case because that is a crypt Error and we need a CtxExtError for the CtxExtResult that wouldn't work and we don't really want to have Crypt as a CtxExtError variant so what we're going to do is a map_err with a TokenWrongFormat that will be enough for us and again in this case we don't really want to capture what the token is because we want to be extremely safe there would be an argument that it should be okay to capture it but right now I'm erring towards safety so I'm not going to capture it so obviously we have to create this variant so if we open the enum below we're just putting it there and that will be TokenWrongFormat and in fact we're going to add a bit more variants that we're going to need later which are UserNotFound and again here I'm not capturing the name and then we have our ModelAccessError if something happens when we do the database access and in this case I'm making a little exception I'm using a String rather
than the model error because I don't want to have the full modular over there but that could be agreeable and eventually if you need to have this type of information just have the model error over there so then after we're going to have the fair validate and then the cannot set token cookie okay so now we should be all set with our CTX extension error so that compiles so we're all good and now we are ready to do our get user for us so we're going to use our first by username with root CTX because obviously we don't have a user and then with our model manager and our token indent which is the username that will return our user for us we do the await if there is an error a SQL access error or something like that we are going to map it into a CTX exe error and in this case as I said before we're just going to do a two string on the moderator we can change this strategy later that will be an easy refactor to do if we need to have the full type version of it so now because our first by username is going to return an option and we want to make sure that we have a user we're going to do we're going to turn our option into an error with OKO and then we're going to do our CTX EXT error user not found and then we are going to do the question mark to return early and again in this case I'm not capturing the name just in case but because it's a token the probability is that it comes from a user input is extremely low but anyway I'm being Ultra safe here making sure to import the user for Earth and the user BMC and now we can validate our token so the validate web token is that simple we're just doing a validate web token and we are doing our map error we make sure to import it just take the token and the two consoles that we got from the database to string and now if we are at this point we have passed the validation and we need to update the token to make sure that the session can continue and so we're going to do a set cookie we give the username and the same token sold and 
that will set the new cookie; in case of an error we map it to CannotSetTokenCookie, making sure to import it. Then we are all set to create our CtxExtResult, which is independent of the web layer — that's why there is no cookie or token in it; it doesn't matter anymore, the validation has passed. So we create our Ctx::new with the user id. That can fail, because if the user id is zero it will fail, and sometimes it might fail for other reasons once we have a more advanced Ctx::new. So we do a map_err to CtxCreateFail, do a to_string again, and just keep a String. Okay, now we have done the hardest part, so we can go back up. The mw_ctx_require is all done; nothing changes there. The other thing we change is our mw_ctx_resolve: now we use the model manager, so we remove the underscore, and we select all of the old code and delete it — we don't need it anymore. We get our CtxExtResult, which is a Result of Ctx and CtxExtError. Now, the important point is that mw_ctx_resolve should not fail if the ctx resolution fails, because that is the responsibility of mw_ctx_require or other things downstream. So it's very important that we do not put a question mark over there. One production optimization we want in this function: if we get an error validating the token that came from the cookie, we want to remove it from the cookies, because we don't want to keep validating the same invalid token over and over — if it's invalid once, it's invalid all the time. So we write: if the CtxExtResult is an error, and it's not matching TokenNotInCookie (if the token is not in the cookie, there's nothing to remove), we remove it. It's very simple, but this way you give a little bit of slack to your server, which doesn't have to revalidate invalid requests all the time. Now that we have done that, we can store the CtxExtResult into the request extensions: we take the request and insert our CtxExtResult. That insert is by type — it's unique by type — which is completely fine in our case because we have our own dedicated type for our CtxExtResult. So now this function doesn't fail, but captures all of the information about the authentication: whether it failed, whether it passed, and everything else. Now that we have that, we can test it quickly. We're not going to do a unit test — we could — but right now we'll do it the quick_dev way. We go to our main and add back our route hello, a route /hello that just returns hello world, and then we add a route_layer with our mw_ctx_require middleware, making sure to import everything we need. We still have the red squiggle because we need to add it to our routes_all, so we add it like that. Our mw_ctx_resolve middleware already had the model manager state, so we're all good. Now we bring our quick_dev, and we do a client do_get on /hello after the login. We bring the server and the quick_dev terminals, do a cargo watch on the server side and a cargo watch on the quick_dev side, and everything works nicely: we have our hello world response, and we can see that our auth-token cookie has been set — because for every request we set it again, refreshing the cookie. If we scroll back up, we see that our /api/login set the initial token, and it has a success result, so everything is looking pretty good. On the server side we have the trace for the hello world, and everything is working nicely: mw_ctx_resolve has been called, mw_ctx_require has been called as well, and everything is good. So that's it, and if we check the
first one, we have the api_login handler, which only has the ctx_resolve — even for login we run it anyway, because this way, in the log line, we can capture which user is calling what, even for login or logoff. And since it never fails, you can still log in even when you are not logged in. Now we can have a little fun: we put our do_get hello before the login and press save. The login, which is the second request, succeeded, but the hello came before, so we got the no-auth error on the hello request. On the server side we can see the api_login was fine (it was the second request); for the first one we get all the traces, and in the request log line on the server side we get the client error — which is what we send to the client — but we also get the full error of why it failed. And by the way, as we saw in the Rust Axum course, everything is linked by UUIDs: you have the req_uuid which ties everything together, so afterwards you can debug things very nicely. Most of this magic is implemented in the web error and in the mw_response_map; we'll see that later, but that is what we did in the Rust Axum course if you want to look at it. Okay, so now we can implement our logoff, and that is going to be relatively trivial. We go to our routes_login — it will serve for both — and in our routes we add another route for logoff. Let's implement the handler first: we put the login in its own code region, and then we create a code region for logoff. The first thing we need is a LogoffPayload, a deserializable struct with just one flag, logoff, which will probably always be true. The reason we do it this way is that we want the logoff to be a POST, and we want it to take JSON as well. Long story short — and I don't want to go too deep into this — application/json POSTs are preflighted by the browser, meaning they carry some protection against cross-site scripting. So by making our POST an application/json POST, by taking a Json in, we're making sure it requires the content type application/json. That is a little trick; there are probably other ways to do it, but this is a nice one. Now there's another method we need: remove_cookie. To set it up, we go back to our web mod, and for symmetry we put it in the same place as our set cookie. It's very simple: remove_cookie takes the cookies — we don't need anything else — and returns nothing. We create our tower-cookies Cookie with just the name, since we don't need a value, and we do not forget to set the path to slash — if you forget that, you're in for a fun ride trying to debug why logoff doesn't work in some cases, and it can take quite a bit of time. Then we do our cookies remove, and that's it. This is the kind of symmetry I like: even if remove_token_cookie is only used for logoff, I put it next to its pair. Now that we have that, we write our api_logoff_handler. We take our cookies and our Json payload (which is a little bit silly but useful), we add our debug trace, and we get our logoff flag — which again is kind of a weird thing, because what's the point of posting to logoff while saying logoff: false? — but we're getting the value anyway, so I'm respecting the flag. Then, if should_logoff, we remove the web token, and we return a success body in this format. Now that we have that, we don't forget to do our Ok(body) down there, then we go back up, and in our routes we just add this one, which is a POST to api_logoff_handler. Now we bring our quick_dev, put the hello after, and check that it works; if we run the quick_dev terminal, we see that everything works. So far so good. Then, if we do a logoff just before the hello but after the login — not forgetting to execute the request — and press save, we get the same no-auth error we had when we weren't logged in. So that's it; we comment this one out to make sure everything still works, and we're back to normal. Okay, that was a pretty big chapter, but a pretty cool one: we did our token creation, the full auth resolve with the web token validation, and then the full login and logoff. We're starting to have a very strong base for a real production application. Okay, so now that we have everything set up, we can do our chapter 9, which is going to be our JSON-based RPC for the task CRUD API. What we've done so far is the login and logoff with the auth resolve, and now we're going to do the /api/rpc handlers, which will use the ctx_require we've already built. For the API RPC we're going to use JSON-RPC — but what is JSON-RPC? If we step back, today there are three popular protocols for a client — which could be a browser, a mobile app, or another web service — talking to your service. You have the REST protocol, which is probably the best known and the most used; you have GraphQL; and you have gRPC, the Google RPC, which generally uses protobuf. And then you have a lesser-known one, which is just a variant of REST with an RPC layer on top of it; it's actually extremely simple, and it solves quite a few problems. So today we're not going to talk about GraphQL and gRPC; we're going to implement our API with JSON-RPC. But first, let's have a quick introduction to REST, so we can better understand JSON-RPC and how close it is to REST. In REST, when you want to implement your CRUD API — create, read, update, and delete — you generally have your create method, for example, as an HTTP POST on an endpoint, where typically you have a prefix, for
example /api, and then the name of the entity, pluralized. You send your data in the JSON format, and you receive data back in the JSON format. The API can send the whole data back, or partial data, or sometimes the choice might be to just send back the ID — that is an implementation detail — but the point is that you POST on the endpoint of the entity, you send your data, and you receive at least the ID. Then, when you do a request for one, you do an HTTP GET; the path is the same one with the prefix, then a slash, and then the ID. Whether the ID is a number, a string, or a UUID doesn't really matter — it's the ID of the item — and you receive the JSON back. If you do a request for many, you do an HTTP GET on the same endpoint, tasks with an s, and you have your own query-parameter format for filters, limits, and so on, and that returns an array of data. For the update, depending on your strategy you use PUT or PATCH, on the same path with tasks-with-an-s and the ID, and you send either the full data or the partial data — that is a strategy you have to agree on for your API — and eventually you receive the data back, or maybe just an OK signal or just the ID, depending on what you want to expose. For the delete, very similar: you expose an HTTP DELETE on the endpoint with the ID. So that is REST in a nutshell. Now let's look at JSON-RPC. JSON-RPC is a standard — you can find more information at jsonrpc.org — and it is very close to REST, but you have only one endpoint. It's an HTTP POST on some URI — for us it's going to be /api/rpc, but it can be anything — and then you send a JSON body, where JSON-RPC just defines the first-level properties. The first one is the id, which is the RPC request id, given by the client; it has nothing to do with the entity ID, and it can be a string, a number, or even null. The contract is that the server has to send it back in the response to that request; this way there's also a way to batch commands, and the client can correlate the responses. But it's very flexible: you can pass 1, or an empty string, or null, and that works fine if you don't want to use this feature. Then you have the method, and that is whatever name you want. In our case we'll follow verb-first, snake_case, then the entity, with some suffixes in some cases — but JSON-RPC doesn't dictate any kind of format, so you can follow whatever convention you want, camelCase or otherwise; that's up to you. Then you have params, which is just a map of the parameters you want to pass to the function — so it really maps to a function call. JSON-RPC 1.0 only allowed an array of parameters; JSON-RPC 2.0 added the object, which is what we want in our case, and what's inside is up to us, the application developers. As a response, when there is a success, the server has to return this format: the first property is the id, whatever the client sent (the server doesn't check it, it just has to echo it back), and then a result, whose format belongs to the application — so it belongs to us. If there is an error, we receive a similar shape with the same id the client sent, but now instead of result we have error; the specification says it's either/or — on success you have result, on error you have error. In the error format, the spec defines one more level: you have a code, which must be an integer (there's a reserved namespace for some standard errors, and the rest belongs to the application); you have the message, which can be anything you want — in our case it will be the enum variant name of our client error — and then the data, which is optional and can be whatever format the application wants; in our case it will contain the request uuid and the detail, which will be a property of our client error if there is one. Now, the great thing about this approach is that the method name maps to a function call. For example, if we have list tasks, we just have list and task with an s, and that's it. And sometimes you might have something a little more specific that doesn't really fit into CRUD — like a lock task, for locking a task, which we don't want to be merely a property but a specific call — and that's as simple as adding a method. You will often end up needing this type of method eventually; they don't really fit into CRUD, and in fact CRUD is often just the beginning, so you'll probably need custom methods as well. In this way, HTTP is just a transport protocol, and you have a clear format for the calls and the responses — that's basically JSON-RPC in a nutshell, and you can find more information at jsonrpc.org. With that, we can go back to our code. We go to our web mod and add a pub mod rpc; we make it a module directory because we're going to have some submodules. We go into the file, create our module, and use the web Error and Result for this module — that's completely fine. Eventually we'll create our mod task_rpc over there, but for now we do some prep work. First we create our RPC types: the RpcRequest body, which has our id, method, and params — that's what we described above — and for now, at this level, we just want params to be an Option of Value, a generic JSON object; we'll do the final parsing at the RPC routing level, as we'll see later. We import the alias and the Value from serde_json. So now that we have our params, we're going to have our params types, and for that we're going to have different types of params,
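To make the JSON-RPC envelope described above concrete, here is roughly what the messages look like on the wire. The method name, params shape, and result values are illustrative for this app; the id/method/params and result/error envelope comes from the JSON-RPC spec (note that the official 2.0 spec also includes a `"jsonrpc": "2.0"` member, which the transcript's envelope does not mention):

A request:

```json
{ "id": 1, "method": "create_task", "params": { "data": { "title": "task AA" } } }
```

A success response (result format belongs to the application):

```json
{ "id": 1, "result": { "id": 1000, "title": "task AA" } }
```

An error response (either result or error, never both; code is an integer, message here is the client-error variant name, data is optional):

```json
{ "id": 1, "error": { "code": -32601, "message": "RPC_METHOD_UNKNOWN", "data": { "req_uuid": "…", "detail": null } } }
```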
and we are going to categorize them — and that can grow over time, obviously. The first one, at the high level, is the ParamsForCreate: that will serve all of the APIs that want to create something. It's generic over D, and D will be our data property, so we're not specifying the data type at all at this point. Then we have a ParamsForUpdate, which takes an id and then the data. And then we also have a ParamsIded, for all of the APIs that just pass the id — for example the delete and the get; for that we have a struct, ParamsIded, and we don't need any data type on this one. With this design we normalize the params structure while keeping some of the properties, like data, flexible per function. That is already a good step. Now we can go back up and uncomment our mod task_rpc. In task_rpc we use the Ctx, the ModelManager, the Result, and more that we'll add later. We create our create_task, and that will return a Result of Task. In our design, we are deciding right now that the model controllers will be very granular and only return the id on create, but we want our API to the outside world to return the data back. That is obviously a design choice — either way works, I'm just showing one way; the only important thing, again, is to be consistent. We will be returning Ok(task), which we'll compute in a moment. The function takes our ctx, our model manager, and our params. Since this function will be the end of the line for the ctx and the model manager, we consume them rather than taking references; this could be refactored later if it doesn't work out. Then we have our params, of type ParamsForCreate, and we can destructure it to extract the data, so we get a TaskForCreate into data. Now, we have done all of the hard work in the model layer — that was the goal of the model layer — so we just do TaskBmc::create with the ctx, the model manager, and our data (our TaskForCreate), which returns the id. Then we get the task that was created with the id we just got, and we return it. We do the imports, and that's it — the create is done. The list is going to be just as simple; we don't need any params on this one for now. We do TaskBmc::list, which returns the Vec of Task, and we just return it. In future episodes we'll see how to add filtering, limit, offset, and order-by over there, but for now this will work very well. Then we have update_task, which is the same thing: it takes the ctx, the model manager, and the params — which is now ParamsForUpdate, with our data type being TaskForUpdate. Right now we're making the call that our RPC API will return the task again; that's completely optional, you can go either way. We destructure the id and the data from the params argument, then do a TaskBmc::update followed by a TaskBmc::get. So this one is done, and now the delete, which is very similar: we have the ctx and model manager, and our params is now the ParamsIded; we destructure it, do the get, do the delete, and return what has been deleted. Again, this is the type of API we want to expose, but it's fine if you don't want to return what has been deleted — as long as you are consistent. So that's it, we have the CRUD; eventually you could implement the get as well, and we can do that later, but for now this is a good starting point. Now we can go to our mod.rs, because what we need now is to do the routing, and
it's a different kind of routing: the first routing was on the URL path, and now we need to route on the body, because we need to get the method name from the body. Our rpc_handler will return the axum Response; it takes the ModelManager as state, the Ctx, and the Json of the RpcRequest. What we wrote in task_rpc are not the axum handlers — they are more like the RPC functions — and this is the axum handler, the endpoint for /api/rpc; that's why we receive the RpcRequest at this level. We do the imports: the State obviously is from axum, and then everything is kind of obvious; the Json is from axum as well (not the one from serde_json), and the Response is from axum::response::Response. Now that we have the route handler, we can create our routes, following the pattern we had before: it takes our app state, and we create a Router on /rpc — the /api prefix will be added with the nest, as we did with the login, so we don't need to specify /api here — and obviously we have a with_state, because we want the model manager as state. We do the import again, and now we're all set to do the routing. We'll use the same strategy we use in other places — it will be clearer later why — which is an async underscore _rpc_handler that takes the ctx, the model manager, and the rpc_req, and returns a Json of Value. In the rpc_handler, for now, we just do a pass-through, calling it like that; because we have all the data axum needs, everything is fine — obviously it's async, so we do the await and the into_response for axum. Now, in our _rpc_handler we destructure our RpcRequest, and we prefix everything with rpc_ so we have clean variable names. Then we do our debug trace with our rpc_method, which we now have, and the goal of this function is to route by the method name and get the result JSON to send back. For that we match on our rpc_method as a string, and we can add our task_rpc methods — create, list, update, and delete, and obviously many more as the API grows. Then we do our fallback, making sure we return an RpcMethodUnknown error if the method is not found. This way, if things match we do the job, and if they don't, we have the log and we can debug our issues. Now we need to create these errors in the web error, so we add a section for RPC with a couple of them: RpcMethodUnknown, then RpcMissingParams, and then RpcFailJsonParams. Now, in our code we have a yellow squiggle, and the reason is that the compiler doesn't have enough to infer the type of the result, so we help it a little: it's going to be Value; press save and everything is fine. That may become optional over time, but we don't want this kind of clippy warning everywhere — and clippy is great, so I'm not complaining. The first one we'll implement, because it's the simplest, is the list task, since it doesn't have any params; then we'll do the create task, and then we'll write a macro_rules! so we can do everything in a concise and consistent way. We go down and open a code block here to go step by step. A trick I use to save time when I'm chaining: I add a todo!() so I don't get any compiler errors, then I have a variable like let r (for result), and I start my chain. So right now I call list_tasks from our task_rpc, give it the ctx and the model manager, and then we do the await. Now we're going to see what we
have: a Result of a Vec of Task, which is a web Result — but what we want is a Value. So the next step is to map our result with to_value, and because Task implements Serialize, everything will be fine. Now clippy tells us we have a redundant closure — and clippy is always right — so we pass to_value directly. We now have a result which is a web Result of a serde_json Result of Value. If we put one question mark, we are left with the serde_json Result of Value, so we can put another question mark and get our Value. The problem is that the web Error doesn't yet have a From for the serde_json error, so we add it: we bring up our web error — and the way I like it, after my module variants I have the external-module variants — and I add a SerdeJson variant, holding a String for now. Again, that can be debatable, but right now it's at least a good way to start. We implement our From, very easy — from serde_json::Error to SerdeJson, doing a to_string — and that's it. Now our double question mark works perfectly fine, and once I have that, I remove the todo and the intermediate code, press save, it gets formatted, and now it's clean and neat. So now we have our result_json, which is our JSON-RPC result, and we just build our body response: a json! with the rpc id — whatever the client sent — and the result, which is whatever the result_json was. We can normalize it a little more, and we'll do that in future episodes, where the RPC handlers will return common structs so we can have .data, for example, for everything that carries data, and so on; but for now we'll put the raw data in the result property, which is a very good start. So we have Ok(Json(body_response)); we import the json! macro from serde_json, and the uppercase Json below it is from axum. Now, if we go to our main, we remove our hello — we're not in the hello business anymore — then we uncomment our routes_rpc, and we uncomment our nest of /api with routes_rpc; press save, it does some reformatting, and in this case I keep the module name rpc over there, and everything should be fine. Now that we have that, we bring our quick_dev: we remove our hello (the logoff is commented out, which is good — it's for later, and we can use it if we want to), and we do a list task. It's going to be a POST on /api/rpc, and the JSON will have the id — we just set 1; it doesn't have to be unique because we're not using it, so 1 is completely okay, or you can do null or whatever, it doesn't matter — and then our method, list_tasks, and for this one, for now, we don't have any params. We make sure to execute it, and assuming you do a cargo run or a cargo watch on your server, you can bring your quick_dev terminal and do a cargo watch on the quick_dev. We get the result — obviously an empty array, because there is nothing yet — but everything is working pretty nicely, and that is already pretty cool. Now we do the same thing for create, and create is going to be a little bit bigger, because now we have the params. What we want to call is create_task with the ctx, our model manager, and the params — but it must be the fully typed params, not the RPC Value params, so we need to do some work just before. For that, we take our rpc params, and since we want to make sure there is a params, we turn the None case into an error with our RpcMissingParams — the error we created before — and we give it our create_task method name as a string. So now we've made sure we have a params, which is a JSON Value, but we want it to be a TaskForCreate. So we do a from_value on our JSON params, and if it fails, we return our RpcFailJsonParams error, and we give it the method name; later we might want to capture the reason as well, but right now this is fine. We import the from_value and the create_task, and that's it: now params is a TaskForCreate — and the reason it is a TaskForCreate is that create_task requires it, so the compiler can trace it back. Now if we bring our quick_dev, we do a request for create: for example, a POST to /api/rpc with the id, the method create_task, and then in our params we have our data with our title, task AA. Assuming you have your cargo watch on the server, we rerun our quick_dev — we could have just pressed save — and voilà: the last call was a list, and it lists this new task we created. If we scroll back up, we see our create, and if we look at the backend, we see our list task and, before it, our create task, and everything was successful. Okay, that is pretty cool already, but now we have an issue: all of this boilerplate is heavy to carry around — we'd have to repeat it for update, and so on — and in fact it's even worse, because you'll probably have more entities, so carrying that for all of them is a lot of duplication. There are many ways to solve this, and the strategy we're choosing for now is declarative macros: relatively simple, very convenient, and we can get fancier later if we want, but this will go a long way toward simplifying and scaling our code. We have two kinds of APIs right now: the ones that take params and the ones that do not, like list tasks. The list task is a bit of an outlier for now, but we want a model that is
flexible and doesn't require us parents so for that we are going to first do the simple case which is going to be our list task so one thing that I like to do when I know that I'm going to learn in two macro walls territory is first I make the one-off work fine in my normal code which is what we did for this task and create task then once I have my one-off I copy them into my macro rules that is a nice way here to get started because if you start right away from the macro rules it might get a little bit hard to debug sometime so what we're going to do we're going to create our macro rules we're going to call it exec for execute RPC function and the first arm will be the without parents and for that is going to take three Expressions the function which is list task in this case the CTX which is exactly that and then the model manager which is also the model manager and now we can take that paste it there and we're going to replace this task by dollar sign rpcfn and the dollar sign to CTX add the dollar sign to mm and then we are going to take the name of our macro and we're going to replace that with our macro list task CTX mm we press save and everything compiles so right now we didn't save much we exchanged one line for another line it was not really a big deal however for the create that is where we have all of this bullet plate but we don't want to repeat over and over again so for that it's going to be another arm so we're going to take that and we're going to create an arm which is going to be with param we're going to add the fourth expression which is going to be our RPC params and so our goal is to be able to call our RPC function with the params so we're going to take the Border plate code that we had before in the create paste it there now that will be dollar sign RPC params and now for that name which is a great task obviously we cannot obviously we cannot hardcode it there's a nice little trick which is let RPC function name and we're going to use 
stringify!, to which we give our $rpc_fn. Now that we have that, we can use it wherever the name was hardcoded, paste it in, and everything works nicely. One reason we're going with macro_rules for this kind of routing is the await: you could build some sort of structure with async function pointers, but it gets kind of heavy and very dynamic in nature. Right now everything is statically typed, and it's actually very concise in our code. Later we can refactor if this doesn't scale anymore, but often you can go a very long way with macro_rules. Okay, so the next important detail is that we want to select the code block and add another set of braces, because we want the macro to generate the braces as well: since this is not a one-liner, the match arm expects braces. Now that we have that, we can test it out. We replace the whole create code block with the macro call with the RPC params, make it create_task, press save, and it compiles. Now we can replace our update_task as well (importing the function), then the delete (importing that function too), and that's it: we have our CRUD. So now we go to our quick_dev and do an update_task with the id 1000 (hardcoded for now), renaming it to "task BB", and we also do a delete, hardcoded to delete the second one, 1001. It will fail on the first run, work on the second, and then keep failing every time we run the quick_dev again. Now that we have that, we activate our delete, bring up the terminal for the quick_dev, and run our cargo watch over there. First, we get the task 1000 with "task BB", which makes sense because it has been renamed. We press save a second time: we no longer get task 1001, because it has been deleted. Then we press a third time
and we get our 1002, "task AA", which hasn't been renamed. So everything is working pretty nicely. Now, if we scroll back up, we see that we get a SERVICE_ERROR on the client side, and if we bring up the server-side terminal, we can see that this error maps to the EntityNotFound model error. So everything is working pretty nicely, and one thing we're going to add right now: what if we wanted to carry this error over to the client, so that the client knows the entity was not found? Let's do that. We bring up our web error, and in our client error section we scroll down and open the implementation where we map the server error to the client status and the client error. That is where we'll have to add our mapping. First, we add a variant to the ClientError list: ENTITY_NOT_FOUND, with the entity and then the id. As I said before, for my client errors I like UPPER_SNAKE_CASE, which is why we have the #[allow(non_camel_case_types)] up there; that is obviously a personal preference. Now that we have that, we can go back up into our client_status_and_error and add a section after our auth one, for the model. The nice aspect of Rust matching is that we can cherry-pick what we match: in this case we only care about the Model variant of the web error, and inside it we match on EntityNotFound; everything else goes to the fallback. Now we can return our tuple of StatusCode::BAD_REQUEST and ClientError::ENTITY_NOT_FOUND, giving it the entity and then the id, and in this case we need to deref the id because it's a reference to the number. So now we are creating the right client error. But there's one thing we didn't do in the Rust Axum course, and that was to serialize the client error as JSON; we just
had the variant name in the JSON. So what we're going to add is Serialize, and we specify that the tag (which is the variant name) will be "message", and the content (which is the content of each variant) will be "detail". Now we can go to our middleware response mapper, where we do this transformation, and at this level we're going to update it to be more JSON-RPC-like. We get our client error, do a to_value, extract the message, extract the detail (making sure to import to_value), and then in our error body we output the message, which is the variant name. In the data, we still want to have our uuid, so we put the req_uuid there; in JSON-RPC, the error data is optional and its format is our own schema, so that is what we decide. And as the detail, we put the detail, which is the content of the variant. So again, just to be clear, the message will be the variant name. Now that we have that, we run our cargo watch on the quick_dev (assuming we have the cargo watch on the server as well), and we have the first one, "task BB"; the second one, same thing; and on the third one, we have the 1002. Now, if we go back up to our delete, we have our error with the message, which is the variant name; the data, which has the request uuid; and the detail, which is a serialization of the content of the variant. And that's it: we now have all the information on the client side that we want to pass from the server, and we are following JSON-RPC. Well, actually, we're missing one thing from JSON-RPC, and that is the id: we have the error, but we don't have the id. So that is what we want to do next. And in fact, in the request log line we have the same kind of issue, because on the server we have the HTTP path, but we don't really know which method has been called in the request log line.
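Stepping back for a moment, the macro_rules routing built earlier in this chapter can be sketched in a std-only way. This is a simplified illustration, not the video's actual code: the real handlers are async (hence the await point discussed above), take the ctx and model manager, and deserialize params with serde's from_value; here everything is synchronous, the ctx/mm expressions are dropped, and a plain Option<String> stands in for the JSON params, so the sketch compiles on its own. The two arms (without and with params) and the stringify! trick for the function name are the parts that mirror the video.

```rust
#[derive(Debug, PartialEq)]
enum Error {
    MissingParams,
    UnknownMethod,
}

type Result<T> = std::result::Result<T, Error>;

// Synchronous stand-ins for the real async model functions.
fn list_tasks() -> Result<Vec<&'static str>> {
    Ok(vec!["task AA"])
}
fn create_task(params: String) -> Result<String> {
    Ok(params)
}

macro_rules! exec_rpc_fn {
    // Arm without params: just call the function.
    ($rpc_fn:expr) => {
        $rpc_fn()
    };
    // Arm with params: the extra braces make the macro generate a block,
    // since this expansion is more than a one-liner. stringify! captures
    // the function name for error reporting, as in the video.
    ($rpc_fn:expr, $rpc_params:expr) => {{
        let rpc_fn_name = stringify!($rpc_fn);
        match $rpc_params {
            Some(p) => $rpc_fn(p),
            None => {
                eprintln!("missing params for {rpc_fn_name}");
                Err(Error::MissingParams)
            }
        }
    }};
}

// The statically-typed dispatch: one macro call per RPC method.
fn route(method: &str, params: Option<String>) -> Result<String> {
    match method {
        "list_tasks" => exec_rpc_fn!(list_tasks).map(|v| v.join(",")),
        "create_task" => exec_rpc_fn!(create_task, params),
        _ => Err(Error::UnknownMethod),
    }
}
```

The payoff is the same as in the video: each new RPC method is one line in the match, everything stays statically typed, and no async-function-pointer machinery is needed.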
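Likewise, the server-to-client error mapping just described can be sketched with std-only stand-ins. The type names follow the video's vocabulary, but the shapes are assumed: a u16 replaces the axum StatusCode, and the serde Serialize derive is omitted. The parts that mirror the video are the UPPER_SNAKE_CASE client variant, the cherry-picked nested match with a fallback, and the deref of the id.

```rust
#[derive(Debug)]
enum ModelError {
    EntityNotFound { entity: &'static str, id: i64 },
}

#[derive(Debug)]
enum WebError {
    Model(ModelError),
    CtxExt, // any other server error falls through to the fallback
}

// UPPER_SNAKE_CASE variant names, hence the allow attribute.
#[allow(non_camel_case_types)]
#[derive(Debug, PartialEq)]
enum ClientError {
    ENTITY_NOT_FOUND { entity: &'static str, id: i64 },
    SERVICE_ERROR,
}

// In the real code this returns (StatusCode, ClientError);
// a bare u16 status stands in here.
fn client_status_and_error(err: &WebError) -> (u16, ClientError) {
    match err {
        // Cherry-pick only the model variant we care about...
        WebError::Model(ModelError::EntityNotFound { entity, id }) => {
            // entity and id are references here (we matched on &WebError),
            // hence the derefs.
            (400, ClientError::ENTITY_NOT_FOUND { entity: *entity, id: *id })
        }
        // ...everything else goes to the generic fallback.
        _ => (500, ClientError::SERVICE_ERROR),
    }
}
```

With serde's tag = "message" / content = "detail" enum representation on top of this, ENTITY_NOT_FOUND would serialize with the variant name under "message" and the entity/id under "detail", which is the JSON shape described above.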
This request-log visibility is something you want to solve relatively early for your production application, because otherwise it's going to be kind of a black box. For that, we create an RpcInfo struct, which will have the id and the method; the id we keep as an optional value. This is the main reason we had the RPC handler wrapper earlier: now we create the RpcInfo with the id and the method that we get from the RPC request (making sure we clone them), and then we execute our RPC handler. But we need to store the RpcInfo into the response. In this case, we can't store it in the request, because we don't have it anymore; it's been passed on, it's gone. So we need to store it in the response. Still in our _rpc_handler, we get the response into a variable, which we will return; that is our Axum response, and we make it mut, because before returning it we insert an extension, which is our RpcInfo. By doing that, we'll now have the RpcInfo in the map response, so we can use it for server logging (the request log line) and for the eventual error we need to send to the client. Now we go to our middleware response mapper and extract our RpcInfo (making sure to import it); that gives us an Option of a reference to RpcInfo, which is what we want. Then, in our log_request function, when we call it, we pass the RpcInfo as an argument. Now we go to the log, and in our RequestLogLine we add the RPC info: rpc_id as an Option of String and rpc_method as an Option of String. We keep it very simple and very flat in our request log line, because that makes it much simpler later to run analytics or queries on it: the flatter your request log line, the better. So now we go back to log_request,
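The RpcInfo and flat-log-line idea described above can be sketched std-only. The names follow the video, but the details are assumed: the real id is a serde_json Value stored via an axum response extension, while a plain Option<String> and direct function arguments stand in here, so the sketch compiles on its own.

```rust
// The JSON-RPC id and method captured by the RPC handler wrapper.
#[derive(Clone, Debug)]
struct RpcInfo {
    id: Option<String>, // JSON-RPC allows the id to be absent
    method: String,
}

// Deliberately flat: every field is a simple scalar or Option,
// which keeps the log easy to query later.
#[derive(Debug)]
struct RequestLogLine {
    uuid: String,      // per-request uuid
    http_path: String, // the HTTP path alone doesn't say which RPC method ran
    rpc_id: Option<String>,
    rpc_method: Option<String>,
}

// Building the log line from an optional RpcInfo reference,
// as in the middleware response mapper.
fn log_request(uuid: String, http_path: String, rpc_info: Option<&RpcInfo>) -> RequestLogLine {
    RequestLogLine {
        uuid,
        http_path,
        rpc_id: rpc_info.and_then(|info| info.id.clone()),
        rpc_method: rpc_info.map(|info| info.method.clone()),
    }
}
```

The point of keeping rpc_id and rpc_method as flat optional fields, rather than nesting the whole RpcInfo, is exactly the queryability argument above: flat columns are trivial to filter and aggregate in log tooling.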
and back in log_request we make sure we are receiving this argument, the Option of a reference to RpcInfo, and that when we create our RequestLogLine we populate those fields. It's a little bit longer on the id side, because the id is a Value while the method is a String. Now that we have that, we have a very clean request log line with the rpc_id and the rpc_method as well, and, as I was saying before, everything is relatively flat except the error data, though even that could be a string as well. So now we have solved our server-side request log; what we still need to solve is the client side: making sure we send the id back to the client to respect the JSON-RPC spec. Here we were just sending the error, so we add the id property; we take a reference because we use it below as well, and we do the clone. Okay, so now, assuming we have our cargo watch on the server side, we run our example: press save once, press save twice, press save a third time, and we get our 1002, which makes sense. Now, if we go back up, we see our error with the RPC id that we gave, and if we bring up the server side, we see that our request log line has the rpc_method. That is very important, so that you can query your log files and know which function was called. We also have the rpc_id, which on the server side is not as useful, because it's an artifact of the client side. And that's it! This concludes chapter 9, where we coded our JSON-based RPC API, and it also concludes this episode. Congratulations: we have coded about 1200 lines of code and a robust foundation for a production web-app backend or web service. Make sure to check out the rust-web-app page mentioned at the beginning for more information. Until next one, happy coding!
Info
Channel: Jeremy Chone
Views: 67,945
Keywords: Rust Programming, Rust Web Development, Rust Axum
Id: 3cA_mk4vdWY
Length: 233min 1sec (13981 seconds)
Published: Fri Aug 25 2023