Hey everyone, welcome to this video where I'll be sharing with you how to test a REST API endpoint in Python using the pytest framework. REST APIs are the backbone of just about every internet-connected service we have today. If you don't believe me, check out how many API calls your browser has already made just by clicking to watch this video. It's probably dozens. And if you're here watching this, then I'm guessing you already know what a REST API is. You might even have one ready that you want to learn how to test. If that's the case, perfect. For those of you who don't know what a REST API is, it's basically an HTTP endpoint that, when you make a request to it, does something and sends a response back. And if that sounds really simple, don't be fooled just yet. Whether it's buying a book off Amazon, posting a photo to Instagram, or searching for something on Google, REST APIs are running the world. That's why any tech company or team that wants to grow in this environment needs a scalable, automated way to test them. If you have a hundred different API endpoints, you can't just sit there clicking them manually each time. It's going to sap away at your productivity and your general will to live. So in this video, I'm going to show you how to write an automated test suite in Python using the pytest framework. I'm also going to provide the API endpoint that we'll use today. It's a simple to-do list application where you can create, update, and delete task items for a user. And although the API endpoint is actually implemented in Python itself, what you'll learn here today can be applied to pretty much any other API backend: Java, Node.js, Golang, or anything at all. That's because the communication between the test suite and the API backend happens over the HTTP protocol.
By the end of this video, you'll have a test suite that you can run with a single Python command, and it will test every single API endpoint that we just talked about, using a different set of user IDs each time. That ensures the system is fully functional. It will also test that the system fails properly when we try to break it, for example, by retrieving an item that doesn't exist in the database. To follow along with this tutorial, you're going to need Python. I'm using version 3.8, but any version above 3.6 should be fine. You'll also need to install pytest and the requests library. Normally it should just work if you run pip install pytest and pip install requests, but if that doesn't work, please check the links in the comments below for how to debug them. We'll also need an API endpoint to test. I've prepared this endpoint for you at todo.pixegami.io. It's an API I've created and am hosting for you to use for the purposes of this video. Again, you can find a link for that below in the video description. If you just visit that URL, you should see a "Hello World" message. If that doesn't work, leave me a message or email me directly so I can fix it. Sometimes I forget to pay the server bills. Once you have all of that, we're ready to start coding. But before we code, let's quickly go over a testing plan. Here's the service we're going to test. It's a to-do list application, and it has API endpoints for creating an item, updating an item, getting an item, and deleting an item. Every item must belong to a user, so the API also has a way to list all the items for a particular user. I want to make sure that we test all those cases and that they work. That already gives me four different test cases. Now you might look at this and think, "Wait a minute, I can do all of that in a single test." And yes, technically you can, but you shouldn't.
Because as the app gets bigger, it will be harder to isolate and test individual functionality. So we're going to keep them as four separate tests. Okay, so now you're thinking, "Well, if I run these four tests in a particular order, then I only have to create the item once and have it reused across all the tests." And yes, that's possible, but again, you shouldn't do it. That's because you want all of your tests to be independent of each other, so you can run individual test cases on their own without having to worry about how they interact. That's important for debugging, but also for many other reasons. To make that work, we'll actually end up making multiple API calls in each test. For instance, we'll have to create a new item in every single test. We'll also have to get the item back from the server to check that it's been created or updated properly. So our testing plan is going to look something like this. Now that we have our testing plan, let's go ahead and get started with the code. I'm in a new folder in Visual Studio Code. We'll start by creating our Python test file. The name can be anything you want, but it has to start with test_ or end with _test. So I'm going to call it test_todo_api.py. Then let's create a constant for the endpoint that we're going to test. The endpoint I've prepared for you is https://todo.pixegami.io. You can type that in, or you can find the link in the description below if you want to copy it directly. And just to check that the endpoint works, you can Ctrl+click this HTTP URL. If you open it in your web browser, you should see it return a "Hello World" response from the API like this. Also, if you go to the address bar and add /docs to the URL, it'll take you to the FastAPI docs that show you all of the endpoints available with this API and what they do. So we have the root endpoint, which we just used, and which returns the "Hello World" message.
And we've also got create-task, get-task, list-tasks, update-task, and delete-task. Now, back in Python, let's actually try to call this endpoint. To do that, let's first import the requests library. The requests library lets us work with HTTP directly from Python. If you want to learn more, just type "requests Python" into Google and look for requests.readthedocs.io. We can call the endpoint by doing requests.get() with the endpoint URL, and I think that's pretty much all we need. Calling this returns a response object, so let's write that here. Now I can print the response and see what happens. I'm going to open up my integrated terminal and try running this. To run it, just go to the folder where you have this file and type python test_todo_api.py. That should run successfully, and you should see a response object like <Response [200]>. But what if I want to see the "Hello World" message that I see in the browser? The HTTP response actually contains quite a lot of information, including things like headers and the status code, and the message we saw earlier is in the response body, which is usually some kind of JSON data. To get that, we can call response.json(), and it's a function, so you have to add the parentheses. So we'll do data = response.json(), and then we'll print out the data to see what's there. The status code on its own is also pretty useful for testing, so let's turn that into a variable as well. Now I have data = response.json() and I'm printing the data, and I have status_code = response.status_code and I'm printing that too. Let's run this again, and now we can actually see the JSON data with the "Hello World" message from the browser. That's really useful, and we also have the status code as an integer, so we can use that to verify whether the API is working or not.
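To make the response handling concrete, here's a minimal sketch. Since the exact live response depends on the server, the parsing step is demonstrated on a sample body; the exact wording of the message is an assumption.

```python
import json

# A sample body like the one the root endpoint returns (exact text may differ).
sample_body = '{"message": "Hello World"}'

# response.json() essentially does this: parse the JSON body into a dict.
data = json.loads(sample_body)

# response.status_code is a plain integer; 200 means success.
status_code = 200

print(data)
print(status_code)
```

This mirrors the two variables used in the video: the parsed body and the integer status code.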
Now let's clear that and actually write our first test. To write a test for pytest, all we have to do is write a function whose name starts with test_. The first test I'm going to write checks that we can call the endpoint and that the call is successful. I'm not even going to check the data returned by the call; I'm just going to check that the call itself succeeded. I know this wasn't originally covered in our testing strategy, but that's fine. I'm just writing it to show you how things work before we get to the good stuff. We'll do the same thing here: response = requests.get(ENDPOINT), and then we're going to assert that response.status_code is equal to 200. An assertion works like this: if the statement evaluates to true, it passes, but if it evaluates to false, it throws an exception, so the test fails. By writing this, we're saying: only pass this test if the status code is 200. Now, if you're not familiar with HTTP status codes, let me quickly recap what they mean. HTTP response status codes are a kind of universal standard: an integer that communicates the status of an HTTP request. We usually get one back in our response, and they're broken into five main categories. Informational responses begin with a one. I rarely see these used, so we'll ignore them for now. Successful responses begin with a two, and the most common one is 200, which means the request was successful, which is why in this example we check if the status code is equal to 200. But there are a couple of other categories as well. If the URL should redirect somewhere, the code will begin with a three. And if it's a client error, meaning the caller of the API did something wrong, like providing invalid input or not having access to the API, it begins with a four.
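The first-digit rule can be captured in a tiny helper. This isn't part of the tutorial's test suite, just an illustration of how the leading digit determines the category:

```python
def status_category(code: int) -> str:
    """Return the general category of an HTTP status code from its first digit."""
    categories = {
        1: "informational",
        2: "success",
        3: "redirection",
        4: "client error",
        5: "server error",
    }
    return categories[code // 100]

print(status_category(200))  # success
print(status_category(404))  # client error
```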
Server errors begin with a five: those mean something is wrong with the server itself, the system or service is down, and it's not the fault of the caller. Within these ranges there are also a lot of specific codes. For example, among the client error responses we've got 400 for Bad Request, 401 for Unauthorized, 403 for Forbidden, and 404 for Not Found. So there are many sub-categories of status code, but the most important digit is the first one, which tells you what general category of status is being returned. If you want to read more about this, just type "HTTP status codes" into Google and click on any of the first links you see. This one is a guide from developer.mozilla.org. So now we know a little bit about HTTP status codes. This line, asserting that the status code is 200, means that the test will only pass if the call to the endpoint was successful. It's a good sanity check that the endpoint is working as a whole before we start doing more granular checks in the rest of the tests. I can delete all the stuff at the top as well, because we won't need it anymore. So this is my very first test case. How are we going to run it? We'll go back to our terminal, but instead of running the file with Python, we're going to type pytest. We don't even need to pass in the name of the file, because pytest looks through the directory and runs all the files with a naming convention it recognizes. By default, that includes all files named test_<anything>.py, which is why it's important that we name the file test_something. If you go ahead and run that, for some of you it might work, but for others, like me on a Windows machine, the pytest command might not be recognized.
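For reference, here's a sketch of what that very first test file looks like at this point. The endpoint URL is the one given in the video; treat this as the shape of the file, not a definitive implementation:

```python
import requests

# The API endpoint provided for this tutorial.
ENDPOINT = "https://todo.pixegami.io"

def test_can_call_endpoint():
    # Smoke test: the call itself should succeed with HTTP 200.
    response = requests.get(ENDPOINT)
    assert response.status_code == 200
```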
Even though I've installed it, it hasn't created a shortcut for the pytest command on my path. If you hit this error, you can instead run python -m pytest, which does the same thing. So I'll run that instead, and it works. We can see here that the one test has passed. If you also want to see the names of the tests, and more details in general, you can run it with the -v flag for verbose and it'll print out more information. Let's give that a try: python -m pytest -v. Now when it runs, you can see each test case mentioned in the output, and it has passed. Let's now proceed with the test plan. The first thing we wanted to test is that we can create a task. I'm going to use this PUT request right here to create the task, and we might also need to get the task that we've created, so I might need the get-task request as well. But let's start with the first one. The endpoint here is /create-task, and here is the schema we need in the request body. Back in my Python file, I'm going to create a new test case, and I'm going to call it test_can_create_task. It's going to be similar to the line above, except that we're going to use a PUT request, because this API uses the PUT method, and we're going to need to pass in the request body. So let's implement that: response = requests.put(...). Here I have requests.put because we're using the PUT method, and then I've got the endpoint, with /create-task appended, because that's the path for this endpoint. Finally, we're passing in the payload, which is the request body this API needs, as a JSON object. Now we can copy the schema of this payload and paste it in here. Let's see: we've got four keys. We've got the content, a user ID, a task ID, and whether the task is done or not.
Let's fill this in with some fake content. Similar to the case above, we'll assert that the response status code is 200, and we'll also print the data from the response just to see what it looks like. Let's go ahead and run that again. Now, when I ran it this time, it still passed, but I don't see any of my printed data. That's because, by default, pytest doesn't print anything when the test cases are passing; it only shows output when a test fails. To get pytest to show the output, we have to add another flag to our command: -s. We'll just add that there, and now you can see the test actually printing out what we wanted. Here is the response object: it's an object with a key called task, and inside it we've got the user ID and the content, which match the test user and content we passed in. We've got the is-done flag, which is false, and we've got the created time. But we have a different task ID from the one we passed in. That's because the task ID is generated server-side, so in the create call we don't actually need to send one at all. I think the task ID in the payload is just for when we decide to update a task, so the server has a way to identify it. But when we're creating a new task, the server creates the ID for us, so we can leave the task ID out here. After making that change, I'm running it again to make sure it's still passing, and it is. So that's good. Now, even though I've tested that this API endpoint created a task, and that the response is successful and returns a 200, that doesn't actually give me full assurance that the whole API system is working. That's because I don't know whether or not I can believe the response I'm getting.
The only way to know for sure is that, after I create a task, I can call an API to get the task back using its task ID, and make sure that its content is the same as what I sent up there in the first place. So how do we do that? Well, we're going to need another API call, and we're going to need the task ID of the new task we just created. First, let's get the task ID. From the output we saw earlier, we know that the task ID is under the task object in the response payload, and then task_id. If you're not sure, just run it again, look at the data that's printed out, and confirm it. Once we have the task ID, we need to call the get endpoint. Let's do that here, except this time it's get-task. If you look at the documentation, we also need to add the task ID to the path, so let's change our path and add that into the string. I'll make it a format string and include the task ID as part of it. Since it's a GET request, we'll change the call to requests.get, and we won't need any payload. Now, this also returns a response, but here I'm reusing the variable name response: the first time for create-task and then again for get-task. That's not good practice, because you can get really confused if you reassign and reuse variables. I recommend that you name and assign a variable only once, and if you want to do something else, create a new variable. So that's what we'll do here. I'm going to rename the first one to create_task_response, just so we're really clear, and similarly I'm going to rename this one to get_task_response. Then I'm going to do the same thing: assert that the status code is 200, and also access the data from this response to make sure that the content and the user ID are the same ones I supplied.
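Putting the create test together so far, here's a sketch. The field names (content, user_id, is_done), the paths (/create-task, /get-task/{task_id}), and the flat shape of the get-task response are taken from what the video's API docs and printed output show, so treat those details as assumptions:

```python
import requests

ENDPOINT = "https://todo.pixegami.io"

def test_can_create_task():
    # Field names follow the schema shown in the API docs.
    payload = {
        "content": "my test content",
        "user_id": "test_user",
        "is_done": False,
    }
    create_task_response = requests.put(f"{ENDPOINT}/create-task", json=payload)
    assert create_task_response.status_code == 200

    # The server generates the task_id, so read it back from the response.
    task_id = create_task_response.json()["task"]["task_id"]

    # Round-trip check: fetch the task and compare it to what we sent.
    get_task_response = requests.get(f"{ENDPOINT}/get-task/{task_id}")
    assert get_task_response.status_code == 200

    task_data = get_task_response.json()
    assert task_data["content"] == payload["content"]
    assert task_data["user_id"] == payload["user_id"]
```

Note the two distinct response variables, so it's always clear which call each assertion refers to.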
Now, what does the get-task response data actually look like? If you're not sure, run it and print it out. Okay, so this last object is the get-task response, and it's basically just the task object itself: it has the is-done flag, content, user ID, and task ID at the root of the JSON object. So if I want the content and the user ID, I can just use those keys directly. Let's go ahead and do that. Here I'm checking that the task data's content is equal to my payload's content, and that the user ID is also equal. Now, if I want to sanity-check that my test is working correctly, I can set this to some other value. It's always good to run your tests and make them fail as well, just to make sure that if things really do change in a way you don't want, the test can detect it. So here I'm going to make it fail intentionally and see if it does. When I run it again, it tells me the test failed and shows me why: there's an assertion error, and my test content is not equal to the "some other content" I wrote here. Let's fix that, change it back to what we had before, and run it again. Okay, now both my tests are passing. I'm going to get rid of the print statement, because we don't need it anymore, and just organize the code a little bit like this. So my first test case is done. Let's move on to the next one, where we test whether we can update an item. In this test, we need to do three things. We need to create a task, like we did in the first case. We need to update it with something different. Then we need to get the task and validate that the updates actually went through. So we need to call the API three times. Now, some of the code is starting to get reused here, so let's write some helper functions first before we write this test. I'm just going to create a stub for this test here, and I'm going to turn this requests.put and then this requests.
get into helper functions that we can pass the task ID or the payload to directly, and they'll do all the other work for us. My first helper function is create_task, and it just calls requests.put with the payload passed in, so we don't have to keep writing the whole thing out. I'm going to do the same thing for get_task over here, so I can call that directly as well. So I'm going to refactor the first test: the one at the top just becomes get_task, and this one similarly becomes create_task with the payload. I might also want to refactor the creation of the payload, since I'm going to need a payload quite often, so let's turn that into a helper function as well; it can just return the dictionary directly. Okay, so now our first test looks a little cleaner and easier to read, and we can reuse all of that logic for the second test as well. Let's go ahead and do that. We'll create a new task payload. We'll create a task. We'll get the task ID like we did last time. And now we're actually going to update the task. We'll create a new payload, except we're going to change some of the fields: we'll change the content, and we'll also change is_done to true. So we're changing two fields in the new payload. We don't have a helper function to update the task yet, so let's create that too. It's very similar to create_task; we just call it update_task, and the endpoint changes to /update-task. It's also a PUT request, and it also takes a payload. Now let's go back to the API documentation and inspect the endpoint, just to make sure we're doing the right thing. The expected request body is exactly the same; I'm guessing these two endpoints share the same input schema, which makes sense. And it's a PUT request, and its name is update task. So that's exactly what we have in our code.
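The helper functions described here can be sketched like this; again, the paths and field names follow the docs shown in the video:

```python
import requests

ENDPOINT = "https://todo.pixegami.io"

def new_task_payload():
    # Fixed values for now; later in the video these get randomized with uuid.
    return {
        "content": "my test content",
        "user_id": "test_user",
        "is_done": False,
    }

def create_task(payload):
    return requests.put(f"{ENDPOINT}/create-task", json=payload)

def update_task(payload):
    return requests.put(f"{ENDPOINT}/update-task", json=payload)

def get_task(task_id):
    return requests.get(f"{ENDPOINT}/get-task/{task_id}")
```

Each helper just wraps one HTTP call, so the tests read as a sequence of API operations instead of raw requests calls.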
So this is the new payload we want to send, but we still need to pass the user ID and the task ID. For the user ID, we can just take it from the first payload. If you're wondering why I'm doing this even though the user ID is fixed to the test user, you'll see later. For now, this is the safer way to make sure it has exactly the same user ID as the original payload. And the task ID is going to be the same one I get from my create-task response here, because, if you remember, it's generated by the server each time, so we need to read it from the response. Now I'll call update_task with this new payload, and the first thing I'm going to do is assert that it was a successful call, so its status code is 200. In fact, I should probably add the same status-code check to the create-task response as well. In the final part of the test, I'm going to get this task again and check that the content is no longer the original content from the first payload but the updated content, and that is_done is set to true. So that's what my test looks like when it's done. Let's go ahead and run it now and make sure it passes. Looking good: we have three passing tests, including the create and update endpoints. The next thing we want to test is whether we can list the tasks for a certain user ID. So let's go ahead and implement this as well. This is also a GET request, and the endpoint is /list-tasks followed by the user ID. We have all that information, so we can go ahead and call it. Oops, as I'm editing this, I just noticed that I made a small mistake in this section: this function is supposed to be called test_can_list_tasks and not test_can_list_users, sorry. Now, my strategy for this test case is to create a number of tasks, say three, for a user, and then list that user's tasks and make sure that I get three items back.
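Before implementing the list test, here's a sketch of the finished update test we just walked through, with the helpers repeated so the example is self-contained. The field names and the task/task_id response layout are assumptions based on the output shown in the video:

```python
import requests

ENDPOINT = "https://todo.pixegami.io"

def new_task_payload():
    return {"content": "my test content", "user_id": "test_user", "is_done": False}

def create_task(payload):
    return requests.put(f"{ENDPOINT}/create-task", json=payload)

def update_task(payload):
    return requests.put(f"{ENDPOINT}/update-task", json=payload)

def get_task(task_id):
    return requests.get(f"{ENDPOINT}/get-task/{task_id}")

def test_can_update_task():
    # Create a task first.
    payload = new_task_payload()
    create_task_response = create_task(payload)
    assert create_task_response.status_code == 200
    task_id = create_task_response.json()["task"]["task_id"]

    # Update it: same user and task IDs, new content, flipped is_done.
    new_payload = {
        "content": "my updated content",
        "user_id": payload["user_id"],
        "task_id": task_id,
        "is_done": True,
    }
    update_task_response = update_task(new_payload)
    assert update_task_response.status_code == 200

    # Fetch it again and confirm the update went through.
    get_task_response = get_task(task_id)
    assert get_task_response.status_code == 200
    task_data = get_task_response.json()
    assert task_data["content"] == new_payload["content"]
    assert task_data["is_done"] is True
```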
And I'm not going to check the contents of the tasks, because in our first test, where we create a task and check its content, we already confirmed it stores the right content. So I'm just going to check that the number of tasks I get from listing them is the same as the number of tasks I've created for that user. Here I'm looping three times: each time I'm creating a new payload and creating a new task, and then asserting that the response code is 200. By the time this is done, I should have three tasks created for this user. Now, I don't have my list-tasks helper function yet, so let's create that first. It's most similar to get_task, so I'm just going to copy the get_task helper and change it: instead of taking a task ID, it takes a user ID, and the endpoint is list-tasks. It's also a GET method, so that doesn't have to change. Now I'll call list_tasks and assert that the status is 200, and I'll also get the data from the response. But I don't know what the data looks like yet, so let's print it out first. When we run the test suite again, we can see the output. Okay, so a bunch of tasks were printed out here. This is the list-tasks response, and it's pretty long, but the key thing to notice is that the root node has a tasks key containing a big list of tasks. So I'm guessing this is my list of tasks. Let's read that key to get our list of tasks from the response, and now I can assert that the length of the task list equals three. In fact, let's put that number in a helper variable, in case we decide to change it, so we only have to change it in one place. Okay, so I have that. I'm going to keep the print statement here, but I'm going to delete the pass statement and save the file. Now let's run that test again. But this time, instead of running the whole test suite, since I'm starting to have a lot of tests, I don't want to run everything every time.
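Here's the list test as it stands at this point. The /list-tasks/{user_id} path and the tasks response key are as I read them from the video, so treat both as assumptions; note that this version still reuses the fixed test user:

```python
import requests

ENDPOINT = "https://todo.pixegami.io"

def new_task_payload():
    return {"content": "my test content", "user_id": "test_user", "is_done": False}

def create_task(payload):
    return requests.put(f"{ENDPOINT}/create-task", json=payload)

def list_tasks(user_id):
    return requests.get(f"{ENDPOINT}/list-tasks/{user_id}")

def test_can_list_tasks():
    # Create N tasks for one user, then list them and count.
    n = 3
    for _ in range(n):
        create_task_response = create_task(new_task_payload())
        assert create_task_response.status_code == 200

    list_task_response = list_tasks("test_user")
    assert list_task_response.status_code == 200
    tasks = list_task_response.json()["tasks"]
    assert len(tasks) == n
```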
That creates a lot of API requests and responses, and it takes longer. So I just want to run that one test function. I can do that by typing the name of the file, test_todo_api.py, then two colons (::), and then the name of the function: test_can_list_tasks. Now if I run that, pytest should only run that one test case. And it did; it ran only one test, but the test failed. The reason for the failure is that 10 isn't equal to three: we're actually getting 10 tasks back from this API. Can you guess why? Well, it's because we've been running the tests all day throughout this tutorial, and we've been using the same test user. If we've already run the tests a few times, for example for create task and update task, then this user has far more than just the three tasks we're creating in this case. Now, I've set up the API so that all the tasks we create eventually get deleted after 24 hours, so this test would work eventually, if we ran it in isolation and ran it first. But that's not good practice, because we want all our tests to be able to run in isolation at any time. Also, we've probably created far more than 10 tasks; the reason we got 10 back is that there's a server-side limit that returns a maximum of 10 tasks, which is why we see 10 here instead of some greater number. The fix is pretty simple: we just have to make sure that every time we create a task, it uses a completely new user ID. How do we do that? A really simple way is to generate a random number and append it to our user, so we'd have something like user_9460 or user_1258 each time. But the problem is that if a lot of people are using this, or if we're running a lot of tests, numbers alone don't give us a big enough range of randomization: two users might accidentally end up with the same random number.
So we need to add letters in as well and generate a whole random string. A really good way to do that in Python is the uuid module, which is part of the standard library, so you can just import uuid. UUID stands for universally unique identifier. Here are some examples of what UUIDs look like: they're long strings that contain both letters and numbers, and the space of possible UUIDs is so large that when you generate two of them randomly, there's essentially no chance they'll collide. So we can keep running this test, and if we generate a new UUID each time, we'll practically always have different users. To use it, we import uuid, and then down here we call uuid.uuid4(). It's a function, so we call it with parentheses. There are actually five different versions, but I think version 4 is the best if you just want an opaque random ID. This gives us a UUID object, which isn't a string, and we want a string, so we call .hex on it, which returns the hex string representation of the ID. Now I'm going to use that as part of my user ID: I'll put it in a format string, prefixed with test_user_ followed by the long UUID. Let's see what that looks like. While we're at it, I might want to do the same thing for my content string, because having the same content string across all my test cases could also lead to unexpected problems. So I'm doing the same thing here: test_content_ with a UUID appended at the end. Now, every time I call new_task_payload, the test inputs are actually unique. And remember, earlier when we were writing the update-task test, the reason I reused the user ID from the payload instead of hard-coding it is exactly this: we now generate a new random user ID every time we create a payload.
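The randomized payload helper is a pure function, so it's easy to sketch and check: every call produces a unique user and content string. The prefixes and field names are those used in the video:

```python
import uuid

def new_task_payload():
    # uuid4().hex gives a random 32-character hex string, so every payload
    # gets a practically unique user ID and content string.
    user_id = f"test_user_{uuid.uuid4().hex}"
    content = f"test_content_{uuid.uuid4().hex}"
    return {"content": content, "user_id": user_id, "is_done": False}

payload_a = new_task_payload()
payload_b = new_task_payload()
print(payload_a["user_id"])  # test_user_ followed by 32 hex characters
```

Because each test builds its own payload this way, the tests stay independent even when run repeatedly or concurrently.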
So I want to make sure the update uses the user ID we generated before, rather than generating a new one of its own. Just to make sure that's working, I'm going to print out my newly generated user ID and content as well. Let's go ahead and run that again and see what happens. Well, the test still failed, and we're still getting 10 tasks returned. Can you guess why? It's because we haven't actually updated this test yet to use the new user ID we're generating. I want to read the new user ID from the payload, but I have a problem here: each time through the loop, I'm generating a new payload. That might be fine if I wanted to test creating tasks with different content, but that's not what I'm testing here. So I'm going to move the payload out of the loop, so I only generate it once and reuse it. That way I create three tasks with the exact same user ID and the exact same content. The task ID will still be different each time, because, remember, the server generates the task ID. Now that the payload is created once, I can read the user ID from it and then list the tasks for that new user. If I run the test again this time, it should pass. And the test passes as expected. If you look at the logs here, you can see the test user and the content being created with those very long UUIDs, so they're always unique. Okay, now we can move on to the next test, but first I'm going to clean up my print statements. We're finally on to our last test case, which checks that we can delete a task. For that one, we'll first create a task, then delete it, and then, when we try to get it, the server should return an error saying the task doesn't exist. So let's go ahead and implement that. I'll create my test case here, and I'm calling it test_can_delete_task.
I'll also add a helper function, just like for the other endpoints. For this one, the request uses the DELETE method, the endpoint is /delete-task, and it also takes the task ID in the path to identify the task to delete. By the way, if you've noticed, when we run pytest on this file, none of these helper functions get called on their own; only the functions that start with the word test are run as test cases. That's because pytest only considers functions whose names begin with test to be test cases, which is why the naming convention is so important when you're writing a pytest file. Okay, so we first have to create a task, then delete the task, and then get the task and check that it's not found. Let's go ahead with this. For creating a task, I'm going to do the same thing as in the earlier test cases: create a new payload, call create_task, check that the response is 200, and extract the task ID from the response. Then I'm going to call delete_task and check that its response is also 200. Now, what do we do for the last part? Well, we have a function to get the task, so let's use that. But how do we check that the task is not found? If you guessed that we should use the status code, you're right. But how do we know which status code to expect? We can check the documentation, or check the universal standard; you might know it already, or you can pause right now and try to figure it out before I tell you. For this case, let's just print out the response first and see what status code it gives us, and then we'll look that code up to see if it matches our expectations. So here I'm just going to print get_task_response.status_code and see what running this as-is actually gives us. I'll open my terminal again and run this test case, using the same command as before, except that the function is now test_can_delete_task.
And the status code returned from getting the task at the end is 404. Sure enough, if you look up the 404 status code, it stands for Not Found, meaning the object we tried to look up isn't there. Anything that begins with a four is a client error, so it's the fault of the client, the one calling the API, for providing bad input. In this case, our bad input is a task ID that no longer exists in the server's database. So this looks like the right thing to assert. Here I'm asserting that if I try to get a task that has been deleted, the server returns a 404 saying the task no longer exists. If I run the same test case again, it should pass. Now I'm going to go back to the top and delete the very first test, the one that just calls the root endpoint, because I don't think we need it anymore: we have far more detailed tests down here. Let's review the test cases we've got. We test that we can create a task, that we can update a task, that we can list the tasks for a user, and that we can delete a task. We test them by asserting the status code each time we make a request, and we also check the contents of the responses. And every time we run these tests, we generate a new set of payloads, using UUIDs to create a new user and new content each time, so we don't get collisions or duplicates when running the tests repeatedly or concurrently. Now, as a final sanity check, let's open the integrated terminal and run our entire test suite again. That's python -m pytest -v, with -v for verbose because we want to see each test case as it runs. If everything works, you should see the same thing I'm seeing: all four test cases, create, update, list, and delete, passing successfully.
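For reference, the delete test we just finished can be sketched self-contained like this. The paths and response layout are as shown in the video's docs, and the 404 on a deleted task matches what we observed:

```python
import requests
import uuid

ENDPOINT = "https://todo.pixegami.io"

def new_task_payload():
    return {
        "content": f"test_content_{uuid.uuid4().hex}",
        "user_id": f"test_user_{uuid.uuid4().hex}",
        "is_done": False,
    }

def create_task(payload):
    return requests.put(f"{ENDPOINT}/create-task", json=payload)

def delete_task(task_id):
    return requests.delete(f"{ENDPOINT}/delete-task/{task_id}")

def get_task(task_id):
    return requests.get(f"{ENDPOINT}/get-task/{task_id}")

def test_can_delete_task():
    # Create a task, delete it, then confirm it can no longer be fetched.
    create_task_response = create_task(new_task_payload())
    assert create_task_response.status_code == 200
    task_id = create_task_response.json()["task"]["task_id"]

    delete_task_response = delete_task(task_id)
    assert delete_task_response.status_code == 200

    # The server should now report 404 Not Found for that task_id.
    get_task_response = get_task(task_id)
    assert get_task_response.status_code == 404
```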
Congratulations: if you've made it this far in the video, you now know how to test a REST API endpoint using Python and the pytest framework. If you've enjoyed this video and you want to learn how to build a highly scalable API backend in Python, just like the one we used in our tests today, I have a tutorial for that as well; click on this link to check it out. Otherwise, I hope you found this useful, and thank you for watching.