At the end of this video, you will be able to
create your very own backend CRUD application with Rust. We will be using the Axum framework
and Postgres for our database. The repository for the source code of this project will
be linked in the pinned comment below. In your terminal, create a new Rust project by typing “cargo new” followed by your project name. Next, cd into your project directory and open it with your favorite editor. [Use vim and switch to code] From the command line or pgAdmin, create a Postgres user and database
for this project. I will name both the user and database axum_postgres and the
password of the user will also be the same. Now, log into your newly created database and create a tasks table, which will contain a task_id column, a name column, and a priority column.
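As a sketch, the table definition might look like this; the exact types and constraints are assumptions based on how the columns are used later in the video (a serial integer key, a non-null name, and a nullable integer priority):

```sql
-- A minimal sketch of the tasks table; constraints are assumptions
-- inferred from how the columns are used later in the video.
CREATE TABLE tasks (
  task_id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  priority INT  -- nullable on purpose: a task may have no priority
);
```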
If you use Linux or macOS and you have trouble setting up or logging into your Postgres database, confirm that in your pg_hba.conf file the authentication method is set to md5 for all users; you can leave it at peer for the postgres user. By default, rustfmt indents with 4 spaces. To switch to 2 spaces, we create a rustfmt.toml file and set the tab spacing to 2.
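The file only needs one line (tab_spaces is the rustfmt option that controls indentation width):

```toml
# rustfmt.toml — use 2-space indentation instead of the default 4
tab_spaces = 2
```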
The Cargo.toml file in your project directory contains the list of dependencies we’ll make use of in the project. To run our backend service, we need two main dependencies: Axum, and Tokio for the async runtime. SQLx is a crate used for validating and running our SQL queries. Since we are using SQLx with Postgres, we add postgres to its features array; we also enable the tokio runtime and native TLS features. We want our SQL queries to be checked at compile time, and the macros that do this aren’t enabled by default, so we add macros to the features array as well. Another important pair of crates is serde and serde_json, which help in serializing and deserializing Rust structs. Whenever we wish to send and receive JSON, we need the derive feature from serde, so we specify it in the features array. Our last dependency is dotenvy, which loads the variables in the .env file we will create in the project’s root into the program’s environment variables.
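A minimal sketch of the [dependencies] section; the version numbers are assumptions, and the sqlx feature names shown here follow the 0.7 release (older releases bundle the runtime and TLS into a single feature such as runtime-tokio-native-tls):

```toml
[dependencies]
axum = "0.7"
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
dotenvy = "0.15"
sqlx = { version = "0.7", features = [
  "postgres",       # Postgres driver
  "runtime-tokio",  # use Tokio as the async runtime
  "tls-native-tls", # native TLS backend
  "macros",         # compile-time checked query macros
] }
```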
The .env file we create will contain two variables: DATABASE_URL, which is the PostgreSQL connection string for our database, and SERVER_ADDRESS, which is the host and port we wish to run the server on. The database connection string is formed as follows: first we write postgres, followed by a colon and two slashes; next we put the user, a colon, and then the password, followed by an at sign, the address of our Postgres database, a slash, and the database name.
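Putting that together, the .env file might look like the sketch below. The credentials follow the axum_postgres user and database we created earlier; the host and port values are example assumptions, so adjust them to your own setup:

```
# Assumed values: user, password, and database are all axum_postgres.
# The server port here is an arbitrary example.
DATABASE_URL=postgres://axum_postgres:axum_postgres@127.0.0.1:5432/axum_postgres
SERVER_ADDRESS=127.0.0.1:7878
```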
We can now get into writing our Rust code. In the main.rs file, we first bring the structs, traits, functions, and macros we will make use of into our program’s scope. So, with the “use” keyword: from axum we get the Path and State structs from the extract module, the StatusCode enum from the http module, the get and patch functions from the routing module, and lastly the Json and Router structs. From serde we get the Serialize and Deserialize traits, and from serde_json we expose the json macro, which will be used to create the JSON we send to a user. From SQLx we expose PgPool and PgPoolOptions, which are type aliases for the Pool and PoolOptions structs specialized for Postgres, and lastly we get the TcpListener struct from tokio.
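Collected in one place, the imports look something like this sketch of the items just listed:

```rust
use axum::{
    extract::{Path, State},
    http::StatusCode,
    routing::{get, patch},
    Json, Router,
};
use serde::{Deserialize, Serialize};
use serde_json::json;
use sqlx::postgres::{PgPool, PgPoolOptions};
use tokio::net::TcpListener;
```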
The entry point for a Rust binary is the main function. Here, our main function will be async, annotated with the main attribute macro from tokio. The six major steps we will take to create our service are: expose the environment variables from the .env file, create program variables from those environment variables, create a database pool, create a TCP listener, compose the routes, and serve the application.
To expose our environment variables from the .env file to the program, we call the dotenv function from the dotenvy crate. This returns a Result, which errors if it can’t access the .env file. When that happens, we want to stop the application and print the error message, so we call the expect method on the Result. The environment variables can now be read with the var function from std::env, passing the variable name as the argument.
We can now create server address and database URL variables in our program, with their values taken from the respective environment variables. If there is no server address, we assign a fallback value; but as for the database URL, if it doesn’t exist, we stop the program with an error message.
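A sketch of those first two steps, assuming the variables are named DATABASE_URL and SERVER_ADDRESS and that 127.0.0.1:7878 is our example fallback address:

```rust
// Load .env into the process environment; crash with a message if it fails.
dotenvy::dotenv().expect("Unable to access .env file");

// SERVER_ADDRESS falls back to a default; DATABASE_URL is mandatory.
let server_address = std::env::var("SERVER_ADDRESS")
    .unwrap_or("127.0.0.1:7878".to_owned());
let database_url = std::env::var("DATABASE_URL")
    .expect("DATABASE_URL not found in environment variables");
```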
To create our database pool, we initialize a PgPoolOptions struct, specify the maximum number of connections, and connect to the database. The connection is asynchronous and returns a Result, so we need to await it, and if it errors, we stop the application and print an error message.
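A sketch of the pool creation; the maximum of 16 connections is an arbitrary example value:

```rust
// Build a Postgres connection pool; bail out early if the database is unreachable.
let db_pool = PgPoolOptions::new()
    .max_connections(16)
    .connect(&database_url)
    .await
    .expect("Can't connect to database");
```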
Next, we create our TCP listener by calling the bind method on TcpListener with the server address, await it, and if it errors, we close and print the error message. If successful, we print a message to the console with the address our API is running on.
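A sketch of the listener setup, printing the bound address on success:

```rust
// Bind a TCP listener on the configured address.
let listener = TcpListener::bind(&server_address)
    .await
    .expect("Could not create TCP listener");

println!("Listening on {}", listener.local_addr().unwrap());
```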
The routes for our REST API can now be composed. First, we create a Router struct and add a route for the home path. To start, we accept a GET method and return a hello world string when we receive a GET request on the home route. We assign the Router struct to a variable, then go ahead and serve the application to test whether the setup is successful. We call the serve function from axum, pass in the TCP listener and the Router struct, await it, and if there is an error, we close the application with a message.
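A sketch of the initial router and the serve call (axum::serve takes the listener and the router):

```rust
// A single test route that answers GET / with a static string.
let app = Router::new().route("/", get(|| async { "Hello world" }));

// Hand the listener and router to axum and run until an error occurs.
axum::serve(listener, app)
    .await
    .expect("Error serving application");
```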
When we test the API in Postman, we get a hello world response, so our setup is successful. Now we create the routes for getting, creating, updating, and deleting tasks. We get and create tasks on the “/tasks” route, but to update and delete we need the ID of the specific task, so we add a task_id parameter to the path. Each of the handlers needs access to the Postgres pool, so we add the db_pool to the router’s state, and each handler will be able to access it.
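Extending the router, the full route table might look like this sketch; the :task_id capture syntax matches axum 0.7 (newer releases use {task_id}), and chaining the method routers registers two methods on one path:

```rust
let app = Router::new()
    .route("/", get(|| async { "Hello world" }))
    .route("/tasks", get(get_tasks).post(create_task))
    .route("/tasks/:task_id", patch(update_task).delete(delete_task))
    .with_state(db_pool);
```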
The get, create, update, and delete task handlers will all be asynchronous, take the db_pool from the state, and return a Result whose Ok and Err values are both a tuple of a status code and a string.
For the get_tasks handler, we will need to fetch tasks from the Postgres tasks table. We will do this using SQLx, but first we specify what the rows fetched by our query will look like. So, outside the get_tasks function, we create a struct called TaskRow. The TaskRow struct will have task_id as an i32 and name as a String, and since the priority column is an integer that can be null, it should be an Option<i32>. The TaskRow struct also needs to be serialized to JSON before being sent in a response, so we derive the Serialize trait from serde.
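A sketch of the row struct:

```rust
// The shape of one row returned by selecting from the tasks table.
#[derive(Serialize)]
struct TaskRow {
    task_id: i32,
    name: String,
    priority: Option<i32>, // the priority column is nullable
}
```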
To get the rows from Postgres, we call the query_as macro from SQLx, pass in the struct the rows will be cast to, followed by our SQL query. We select all rows from the tasks table and order them by their IDs. We then call the fetch_all method, pass the database pool as its argument, and await the result. If it errors, we map the error to a tuple containing the INTERNAL_SERVER_ERROR variant of the StatusCode enum and a string of a JSON value whose fields are a success field with a false value and a message field whose value is the error string. We return the mapped error early; if there is no error, we assign the task rows to a variable named rows. Lastly, we return an Ok value containing a tuple of an OK status code together with a JSON string that contains a truthy success field and a data field containing the task rows.
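Assembled, the handler might look like this sketch (query_as! is the compile-time-checked macro; it reads DATABASE_URL at build time to verify the query, and the handler name follows the route table above):

```rust
async fn get_tasks(
    State(db_pool): State<PgPool>,
) -> Result<(StatusCode, String), (StatusCode, String)> {
    // Fetch every task, mapping any database error to a 500 plus a JSON message.
    let rows = sqlx::query_as!(TaskRow, "SELECT * FROM tasks ORDER BY task_id")
        .fetch_all(&db_pool)
        .await
        .map_err(|e| {
            (
                StatusCode::INTERNAL_SERVER_ERROR,
                json!({ "success": false, "message": e.to_string() }).to_string(),
            )
        })?;

    Ok((
        StatusCode::OK,
        json!({ "success": true, "data": rows }).to_string(),
    ))
}
```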
To create a task, we will need to receive JSON in the request. This requires us to create a struct into which the JSON from the request body will be extracted. Our CreateTaskReq struct will have a name field, which is a String, and a priority field, which is an optional 32-bit integer. The JSON from the request will be deserialized into this struct, so we derive the Deserialize trait from serde to enable this.
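A sketch of the request struct:

```rust
// The JSON body we accept when creating a task.
#[derive(Deserialize)]
struct CreateTaskReq {
    name: String,
    priority: Option<i32>,
}
```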
So, in the create_task handler, we extract the JSON from the request body by adding a Json extractor to the function parameters. Since the JSON is extracted from the request body, it needs to be the last parameter: any extractor that consumes the body must be the last parameter in an axum handler. You can read more about this in the axum docs; the link can be found in the pinned comment and the description.
When we create a row in the tasks table, we will need to return the task_id of the created task and add it to the response. So, just as in get_tasks, we create a struct that describes the row returned from the SQL query. Since we only need the task_id from the row, the CreateTaskRow struct will contain just a task_id field of type i32. It also derives the Serialize trait from serde, as it will be serialized to JSON and added to our response.
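As a sketch:

```rust
// The single column returned by the INSERT ... RETURNING query.
#[derive(Serialize)]
struct CreateTaskRow {
    task_id: i32,
}
```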
In the create_task function, we then call the query_as macro from SQLx, passing in the CreateTaskRow struct and the SQL query for inserting a task. In our SQL query, we add the name and priority of the task and return the task_id of the created task. Looking at the query, we can see a $1 and $2: this is where the task name and priority we pass in the next arguments are substituted when SQLx prepares the query. The “task dot name” and “task dot priority” values come from the task extracted from the JSON request body. Since the query returns only one row, we call the fetch_one method, pass the database pool as its argument, and await the result. If it errors, just as in get_tasks, we map the error to a tuple containing the INTERNAL_SERVER_ERROR status code and a string of a JSON value whose fields are a success field with a false value and a message field whose value is the error string. We return the mapped error early; if there is no error, we assign the returned row to a variable named row. Lastly, we return an Ok value containing a tuple of a CREATED status code together with a JSON string that contains a truthy success field and a data field containing the created row.
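A sketch of the full handler under the same conventions:

```rust
async fn create_task(
    State(db_pool): State<PgPool>,
    Json(task): Json<CreateTaskReq>, // body extractor must come last
) -> Result<(StatusCode, String), (StatusCode, String)> {
    let row = sqlx::query_as!(
        CreateTaskRow,
        "INSERT INTO tasks (name, priority) VALUES ($1, $2) RETURNING task_id",
        task.name,
        task.priority
    )
    .fetch_one(&db_pool)
    .await
    .map_err(|e| {
        (
            StatusCode::INTERNAL_SERVER_ERROR,
            json!({ "success": false, "message": e.to_string() }).to_string(),
        )
    })?;

    Ok((
        StatusCode::CREATED,
        json!({ "success": true, "data": row }).to_string(),
    ))
}
```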
Recall that in the update and delete task routes, we specified a task_id parameter in the path. We will need to extract this path parameter from the request, so we make use of the Path extractor from axum, and since the task_ids are integers, we use an i32 as the path type in the extractor.
For updating a task, we will need to send JSON in the request containing the fields we want to update. Therefore, we create a struct that describes that JSON: a name field, which is an optional String, and a priority field, which is an optional 32-bit integer. They are both optional, since we may need to update only one of the fields and leave the other unspecified. Just as with the CreateTaskReq struct, we derive the Deserialize trait for it so we can extract it from the request. So, in the last parameter of the update_task handler, we extract the JSON as the UpdateTaskReq struct.
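A sketch of the struct:

```rust
// Both fields optional: the client may update the name, the priority, or both.
#[derive(Deserialize)]
struct UpdateTaskReq {
    name: Option<String>,
    priority: Option<i32>,
}
```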
For updating a task, we don’t need to return any row from Postgres, so we can just use the query macro from SQLx and pass in the SQL query we will use for updating the task. In the update query, we have the Postgres parameters $2, $3, and $1 for the name, priority, and task_id values. Therefore, in the next arguments we pass the task_id from the path first, followed by the task name and priority from the JSON. Since we aren’t returning any row, we call the execute method, pass the database pool as its argument, and await the result; whenever it errors, we map the error as usual and return early. If our query is successful, we return an Ok value with the usual tuple, but the JSON string contains just the success field.
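A sketch of this first version of the handler:

```rust
async fn update_task(
    State(db_pool): State<PgPool>,
    Path(task_id): Path<i32>,
    Json(task): Json<UpdateTaskReq>,
) -> Result<(StatusCode, String), (StatusCode, String)> {
    // Note: binding a None here sends NULL to Postgres, which is the flaw
    // discussed next.
    sqlx::query!(
        "UPDATE tasks SET name = $2, priority = $3 WHERE task_id = $1",
        task_id,
        task.name,
        task.priority
    )
    .execute(&db_pool)
    .await
    .map_err(|e| {
        (
            StatusCode::INTERNAL_SERVER_ERROR,
            json!({ "success": false, "message": e.to_string() }).to_string(),
        )
    })?;

    Ok((StatusCode::OK, json!({ "success": true }).to_string()))
}
```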
The update query we just used isn’t ideal, for reasons we shall see later on, but let’s move over to the delete_task handler for now. Just as with updating tasks, when deleting tasks we don’t need to return any data from the query.
Therefore, we call the query macro from SQLx and pass in our delete query. Our delete query uses only one parameter, the task_id, so we pass the task_id from the path as the next argument. We then follow the same procedure as in the update_task handler: call the execute method, await the result, map the error, and return early if it errors. If it is successful, we return an Ok value with the usual tuple, and the JSON string contains just the success field with a truthy value.
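A sketch of the delete handler:

```rust
async fn delete_task(
    State(db_pool): State<PgPool>,
    Path(task_id): Path<i32>,
) -> Result<(StatusCode, String), (StatusCode, String)> {
    sqlx::query!("DELETE FROM tasks WHERE task_id = $1", task_id)
        .execute(&db_pool)
        .await
        .map_err(|e| {
            (
                StatusCode::INTERNAL_SERVER_ERROR,
                json!({ "success": false, "message": e.to_string() }).to_string(),
            )
        })?;

    Ok((StatusCode::OK, json!({ "success": true }).to_string()))
}
```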
Now, to test our API in Postman, we send three POST requests to create tasks, and they are all successful. We then test the GET request and can see all three tasks. We delete one through the API, and when we test again we have just two tasks left. Let’s try to update the task with task_id 1.
We update the name to study and the priority to 4, and when we fetch the tasks we can see it’s updated. If we try to update just the name field of the task, it succeeds, but when we fetch the task we see that the priority has become null, since SQLx sends a None option as null to Postgres. And when we try to update just the priority, we get an error, since the name column cannot be null according to our schema. To resolve this, we build our SQL query dynamically at runtime, so it updates only the fields specified in the JSON request body. So if we want to update just the name, our SQL query will look like this; for the priority, it will look like this; and for both, it will look like this.
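Reconstructing those three shapes as a sketch (the placeholder numbering is one plausible choice):

```sql
-- Update only the name:
UPDATE tasks SET name = $2 WHERE task_id = $1;

-- Update only the priority:
UPDATE tasks SET priority = $2 WHERE task_id = $1;

-- Update both:
UPDATE tasks SET name = $2, priority = $3 WHERE task_id = $1;
```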
While examining these three queries, we can see that our dynamic query will always begin with “UPDATE tasks SET” and end with “WHERE task_id = $1”. One challenge is the commas between the fields we need to update; another is that if no fields are specified in the JSON, the query has nothing to update, which will throw an error when it runs. To solve both, we set the task_id field to itself, so the query always contains at least one assignment, and we can then prepend a comma to each additional field we wish to update. The code to build our query in this form will look like this.
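A sketch of the string-building fragment inside the rewritten update_task handler; the counter variable i is a hypothetical name for tracking the next placeholder number:

```rust
// Start with a no-op assignment so the SET clause is never empty,
// then append one assignment (and placeholder) per provided field.
let mut query = "UPDATE tasks SET task_id = $1".to_owned();

// The next Postgres placeholder number ($2, $3, ...).
let mut i = 2;

if task.name.is_some() {
    query.push_str(&format!(", name = ${i}"));
    i += 1;
}
if task.priority.is_some() {
    query.push_str(&format!(", priority = ${i}"));
}

query.push_str(" WHERE task_id = $1");
```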
Next, we dynamically add the parameters of the Postgres query. From SQLx we call the query function, not the macro this time, since compile-time checking isn’t suitable for a query string built at runtime. To add the task_id parameter, we call the bind method and pass it as the argument, and we assign this to a mutable variable ‘s’. If the task name is a Some value, we add that parameter by calling the bind method on ‘s’ and assigning the result back to it, and we do the same for the priority. Now we can call the execute method on ‘s’, passing the database pool as its argument, await the result, map the error value as usual, and return early if it errors. If the query is successful, we return an Ok value with our usual tuple.
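Continuing inside the same rewritten handler, a sketch of the binding and execution (‘s’ follows the variable name used in the video):

```rust
// Runtime-built query: use sqlx::query (unchecked) and bind values in order.
let mut s = sqlx::query(&query).bind(task_id);

if let Some(name) = task.name {
    s = s.bind(name);
}
if let Some(priority) = task.priority {
    s = s.bind(priority);
}

s.execute(&db_pool)
    .await
    .map_err(|e| {
        (
            StatusCode::INTERNAL_SERVER_ERROR,
            json!({ "success": false, "message": e.to_string() }).to_string(),
        )
    })?;

Ok((StatusCode::OK, json!({ "success": true }).to_string()))
```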
We can now update just a single field in a task record without wiping out the other columns, and we are done for this video. If you enjoyed this video, please support me by giving it a like and subscribing for more videos.
If you have any questions, let me know in the comments, and feel free to suggest a video you’d like to see. Thanks for watching till the end, and have a nice day.