Automated Testing in Python with pytest, tox, and GitHub Actions

Captions
Hello and welcome, I'm James Murphy. In this video we're going to be talking about how to automate testing for your Python project. The goal is to show you what you'd have to do to take a project that might be something simple, just a GitHub repo somewhere with maybe a single file of code and maybe some tests (or maybe not), where nothing runs automatically every time you commit, and turn that very bare-bones repo into something more like this one. This repo has a specific structure to it, and as you can see I've got a little badge down here telling me that my tests are passing. It's not just a badge: the repository actually runs the tests. I can click on this check mark and see the details of the last test run, and you can see that the tests have run across multiple operating systems, Ubuntu and Windows, and across multiple versions of Python. For each of these runs we essentially have an isolated environment that checks out all of our code and runs all of our tests, similar to what someone would see if they used your project in a fresh virtual environment. Moreover, all of these tests run automatically every time I push a commit to the repository.

Here's the overview of what we're going to cover. First, how to set up or restructure your project to make it easily testable, including all of these different configuration files: what do they all mean and what are they used for? Second, how to use pytest for running your tests, mypy for checking type hints, and flake8 for linting, that is, checking code style. Third, tox, and how we can use tox to run all of your tests in multiple isolated Python environments. Finally, GitHub Actions, and how to use it to run tox, and therefore all of our tests, every time we push to the code repository.

Now, a few things I'm not going to talk about. Testing is a huge topic under the umbrella of continuous integration and continuous delivery (or continuous deployment). Those are very broad topics and I'm not going to cover them in general; I just want to focus on automated testing. I'm not going to talk about publishing your package to PyPI, automatically building documentation, automatically formatting your code on git commits, or any other pre-commit hooks. I'm also not going to talk about generating downloadable artifacts like coverage reports, or about more complicated builds, like building a C extension. I want to focus on testing alone, because I think that going from no automation to automated tests is the biggest leap you have to make.

Okay, with that in mind, let's dive in. First up: project structure. We're here in PyCharm and I've already cloned the repository; you can see we have a very flat structure. The project we're working on is called "slap that like button", and it's a model of what happens when you hit the like or dislike buttons on YouTube. We have three states (a liked video, a disliked video, or nothing, the empty like state), then some functions, and then some tests for what happens if you start empty and hit the like or dislike button some number of times.
For once, the code we've got here is actually pretty irrelevant to what we're doing. You could replace it with one of your own files and it wouldn't make much difference; most of the work is in the setup and configuration, so don't worry about pausing to study what all these functions do.

The first thing I'm going to do is take any tests you do have and separate them from the actual library or application code; our tests are going to live in a completely separate place from the rest of the code. So I've moved the test to a separate file called test_slapping.py, in the same directory for now, and imported everything I need in order to run it. Right now I'm depending on the fact that the test sits right next to the module it's testing, so I can import it just by name. It just so happens that this import works because they're next to each other, but for automated testing I don't want to depend on that behavior: my tests should run no matter where they are. The way to make that happen is to make your project an installable package.

So I created a src directory containing a package called slapping; slapping is going to be the name of this slap-that-like-button package. Here we have the slap_that_like_button file we had before, and there's now a separate tests directory where I put test_slapping.py. Now that things are separated, you can see my editor warning me that it doesn't know how to import slap_that_like_button. Our package is called slapping, so eventually the import should look like this, but currently the package isn't installable, so we need to fix that.

To make the package installable in a consistent way, we have to add a bunch of configuration files, the first of which is pyproject.toml. This is kind of a touchy subject for the Python community; there really shouldn't be any reason to need eight different configuration files to make a package, but that's the state of affairs right now. There's no reason you should be expected to know what all these files are or what goes where; basically, find a project that you already know and trust and copy what they're doing.

First up, pyproject.toml. In the olden days there was only one way to install packages: you needed a setup.py file. Nowadays there are many alternatives; you can use something like Poetry or Flit, or you can still use setuptools. Our pyproject.toml just says that we're still using the old way. Next up is setup.py. It used to be the place where you put your installation script, doing everything needed to install the Python package, and since it's a Python script you can run arbitrary code inside it. That's increasingly seen as a security risk, so more and more of the code is being stripped out of setup.py and put into the other configuration files. Even though it's basically empty, this exact file is what allows us to install our package in editable mode, which will be important later on. So where do I store all the metadata of my project, like the title and description? All of that now goes in setup.cfg, and since that's just a configuration file and not a Python script, you don't have to worry about it executing arbitrary code.
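For orientation, here's roughly what the finished repository layout and those two build files look like. This is a sketch based on the narration rather than a copy of the repo: names like requirements_dev.txt and tests.yml are assumed, and the later files (tox.ini, the workflow) are covered further down.

```
.
├── pyproject.toml
├── setup.py
├── setup.cfg
├── requirements.txt
├── requirements_dev.txt
├── tox.ini
├── .github/
│   └── workflows/
│       └── tests.yml
├── src/
│   └── slapping/
│       ├── __init__.py
│       ├── py.typed
│       └── slap_that_like_button.py
└── tests/
    └── test_slapping.py
```

```toml
# pyproject.toml: declare that we still build with setuptools (the "old way")
[build-system]
requires = ["setuptools>=42.0", "wheel"]
build-backend = "setuptools.build_meta"
```

```python
# setup.py: intentionally almost empty; it exists so `pip install -e .` works
from setuptools import setup

if __name__ == "__main__":
    setup()
```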
In setup.cfg you can see basic metadata: the name, description, author, that kind of thing. Then down here we have some information about what packages we're packaging up, in this case the slapping library, and to install it we're just going to pretend that it depends on the requests library. I don't actually use requests in this repository; I'm only showing it so you know this is where those dependencies would go. Another important option here is that we say our packages are contained in the src directory. Then I make a requirements.txt file that has all of my dependencies, in this case just requests. It's good practice to give very specific version numbers: in setup.cfg I just said requests >= 2, and in requirements.txt I pin a specific version.

At this point, all of this setup and configuration has led up to being able to install the package. Here's what I can do: pip install -e . (the current directory). It downloaded all the dependencies, namely requests, and installed the slapping library into the current virtual environment. If you're wondering about -e and what editable mode actually means: in the virtual environment it just puts a link to the actual source directory, so if I make a change to the slap_that_like_button file I don't have to reinstall the slapping package. You can see that when I go back to my test file I no longer have a red squiggly under the slapping import; PyCharm knows that slapping is an installed package and knows how to import it, regardless of the fact that the source and tests are in completely separate directories. Again, we're able to do this because we made our project into a package and installed that package into the current environment.

Okay, now we can get to the libraries that actually do the testing: pytest, mypy, and flake8. For the purposes of this video that means, you guessed it, more configuration files. First I made another requirements file, this one with just the development requirements. Those are separate from the normal requirements of the project: if you just want to use the package you only need requests, but if you want to run the tests you need all this other stuff, which is why they live in different files. Then I added some lines to setup.cfg. One section is how you indicate that a Python package has been type hinted (so the slapping package is type hinted); the py.typed file it refers to needs to be just a blank file right next to the package's __init__.py. Then we have some configuration for the flake8 program we'll be using. This is kind of annoying, but some programs want their configuration in the cfg file, some want it in the toml file, some want their own configuration file, and there's no one right answer; some allow both, some prefer one over the other, and for each program you just have to check its documentation to see where its configuration can go. I think the community is moving toward pushing more and more of the configuration into the toml file and leaving just metadata in the cfg file, but for now this is the state of affairs. So our flake8 configuration lives in setup.cfg, while our configuration for pytest and mypy goes in pyproject.toml, so I go to the toml file and paste those sections in.
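The captions describe the contents of setup.cfg, the extra tool sections in pyproject.toml, and the dev requirements file without showing them, so the snippets below are one plausible version: the version number, author field, and specific tool options are placeholders rather than the repo's exact values, and pytest-cov is assumed because of the --cov flag mentioned later.

```ini
# setup.cfg: project metadata plus static packaging options
[metadata]
name = slapping
version = 0.1.0
description = A model of slapping that like button
author = mCoding

[options]
package_dir =
    =src
packages = find:
install_requires =
    requests>=2

[options.packages.find]
where = src

[options.package_data]
slapping = py.typed

[flake8]
max-line-length = 88
```

```toml
# pyproject.toml additions: tool configuration for pytest and mypy
[tool.pytest.ini_options]
addopts = "--cov=slapping"
testpaths = ["tests"]

[tool.mypy]
mypy_path = "src"
check_untyped_defs = true
```

```
# requirements_dev.txt: tools needed only for development and testing
pytest
pytest-cov
mypy
flake8
tox
```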
I'm not going to go over all the configuration options and what they mean; I think they're pretty self-explanatory. Of course, we just added some dev dependencies to the project, like pytest, so we do need to install everything from the dev requirements file. Now that those are installed, we can run mypy on our src directory and see that we have no issues, run our linter (flake8) on the src directory and see no issues, and run pytest; we already told pytest in its configuration where the tests live.

With mypy and flake8 it's pretty self-explanatory: you just run them, they check your code, and they tell you if something's wrong. With pytest there's a bit more to it, namely you have to know how to use it. It's a library that helps you write tests, so let's go through some of the basic principles of how pytest works and what its features are. Essentially, it looks through your tests directory (which we specified in the configuration) for any module that starts with test_, and within those modules it looks for any function that starts with test_, assumes those are tests, and runs them. In this case it found only one test, but that one test function actually contains a bunch of different test cases, so I split it up into three separate functions based on the behavior they're testing.

Even though these are all, morally, testing what happens when there are many slaps, there are still a lot of different cases going on in one function, and if the first case failed, the test wouldn't run any of the rest. For these situations pytest lets you parametrize your tests. The mark.parametrize decorator lets you feed a bunch of different test cases into a single test function: I specify the argument names I'm going to pass in, in this case the input and the expected answer, and those values then appear as the arguments of the function. So I might have a test input of "ll": if you slap the like button twice, the expected answer is that you end up back in the empty state. You pass in a list of tuples with all the different pairs of test input and expected value, and each one becomes its own test. With this setup it's the same tests as before, but if one of them fails, the rest still run.

You can also use a decorator to skip a test. For instance, here I have a test for a regex slap pattern (any sequence of likes and dislikes followed by two dislikes and a like should end up in the like state), but I haven't implemented that feature yet, so I just skip the test. There's also a related skipif for skipping tests based on things like whether the operating system is Windows or Mac. And if you have a test that you know is failing and, for whatever reason, don't want it to fail the build, you can mark it with xfail (expected failure); the test still runs, but it doesn't count as a failure for the purposes of whether the build completed. However, an expected failure is not what you should use if you're expecting an exception; if I want to test that a specific exception is raised, there's a separate mechanism for that.
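Here's a compact sketch of those pytest features, including the pytest.raises pattern described next. The function name many_slaps, the import path, and the state strings are hypothetical stand-ins, since the captions don't spell out the actual slapping API.

```python
# tests/test_slapping.py (sketch): parametrize, skip, xfail, and raises
import pytest

from slapping.slap_that_like_button import many_slaps  # hypothetical import


@pytest.mark.parametrize(
    "test_input,expected",
    [
        ("ll", "empty"),   # slapping like twice ends back in the empty state
        ("l", "liked"),
        ("ld", "disliked"),
    ],
)
def test_many_slaps(test_input, expected):
    # each (test_input, expected) pair runs as its own test
    assert many_slaps(test_input) == expected


@pytest.mark.skip(reason="regex slap patterns are not implemented yet")
def test_regex_slap_pattern():
    assert many_slaps("ldlddl") == "liked"


@pytest.mark.xfail(reason="known failure that shouldn't break the build")
def test_known_broken_case():
    assert many_slaps("l") == "disliked"


def test_invalid_slap():
    # expecting an exception: use pytest.raises, not xfail
    with pytest.raises(ValueError):
        many_slaps("x")
```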
Here we have a test case where we're testing an invalid slap. The valid slaps are like and dislike (you can slap the like button or the dislike button), but here I'm passing something invalid. The way to tell pytest that you're expecting a specific exception is with a with statement, saying pytest.raises and then the name of the error you expect. This test will now fail if a ValueError is not raised.

The last extremely important feature of pytest is fixtures. Fixtures are what you use when a bunch of tests require a certain amount of setup that they all share. Imagine you have a bunch of tests that need a database connection: it wouldn't be good practice to copy and paste the connection setup code across all of them. Instead, you can use a fixture that gets passed to your tests. By convention, if you create a file named conftest.py and define a fixture there, it's available to all of your tests; you can also create fixtures right next to the tests, but if you want them available everywhere, put them in conftest.py. Fixtures are good because they let you avoid a lot of boilerplate setup and teardown code, and they can have different scopes.

Here I use the fixture decorator, which marks this db_conn function as a fixture. What that means is that if I have a test function with db_conn as an argument name, such as this one, pytest will call the fixture function and, depending on whether it returns or yields something, take that returned or yielded value and pass it in as the argument to the test. So in this case we can create a database URL, create a connection, and yield that connection to any test that wants it. You should use yield instead of return if anything requires teardown code; here the connection presumably needs to be closed after all the tests are done, and that happens at the end of the with block. So in this test file, wherever db_conn appears as an argument, pytest calls the fixture, creates the connection, and passes it as the argument to that function or any other function that has db_conn as an argument.

The default behavior is that the fixture code runs for every single test, but if you have something very expensive to create, like a database connection, you might want to share the same one among all the tests. You can do that by changing the scope: with scope="session", the fixture runs only once, the value is cached, and the same database connection is passed to all of your tests.

Fixtures can also depend on other fixtures. Here I'm making a fixture called capture_stdout that depends on a built-in fixture called monkeypatch. Suppose I want to test that something gets printed to standard out when I run a function. If I actually called the function and it printed to standard out, that output is gone; I have no handle on it. A way around this is to change what sys.stdout.write does: if, instead of actually writing to standard out, it just appended to a buffer, my test would be really easy, because I could pass the buffer around and check its value at the end of the test. monkeypatch lets me change the write attribute of the real sys.stdout to my fake write function, and what makes it so convenient is that everything is automatically undone and put back the way it was at the end of the test, so I haven't permanently messed up standard out, and pytest itself can still print, say, the test results.
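A conftest.py in the spirit of what's described might look like the following. The captions don't show the actual fixture code, so sqlite3 stands in for the real database and the buffer dictionary is an assumed shape.

```python
# tests/conftest.py (sketch): fixtures shared by every test
import sqlite3
import sys
from contextlib import closing

import pytest


@pytest.fixture(scope="session")
def db_conn():
    # expensive resource created once and shared across the whole session
    with closing(sqlite3.connect(":memory:")) as conn:  # stand-in for a real database
        yield conn  # closed when the with block exits, after the last test


@pytest.fixture
def capture_stdout(monkeypatch):
    # redirect sys.stdout.write into a buffer; monkeypatch restores the real
    # write function automatically when the test finishes
    buffer = {"stdout": "", "write_calls": 0}

    def fake_write(s):
        buffer["stdout"] += s
        buffer["write_calls"] += 1

    monkeypatch.setattr(sys.stdout, "write", fake_write)
    return buffer
```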
To use my capture_stdout fixture, I make an argument with that name to my test function, so all the code to monkeypatch standard out runs; then I just call something that I know prints to standard out, like the print function, and check what ended up in the buffer. So fixtures can be extremely useful for setting up before a test and tearing down after it, tests can depend on fixtures, fixtures can depend on other fixtures, and a lot of really useful ones are already built in.

At this point I can run pytest and see all of my passing tests, one skipped, and two expected failures. One thing I haven't mentioned yet is this report down at the bottom: the coverage report. It's printed because we put --cov in the configuration, and you can see that my tests cover 100% of the source code, meaning all of my tests combined touched every single line of executable code in my source. If this number is less than 100%, some part of your code is untested. Of course, that doesn't make it wrong, and it doesn't mean there's a bug; likewise, 100% coverage doesn't mean there isn't a bug. It's just an indicator. If you look into the pytest coverage options you can even get an HTML report that shows exactly which lines are hit and which aren't.

So suppose at this point everything is good: mypy passes, pytest passes, flake8 passes. But everything is only good in our current virtual environment; that's all that running those commands directly tells us. How do we know everything will still work in a fresh environment, or with a different version of Python? To do that we need yet another configuration file, this one called tox.ini. tox lets you create a bunch of completely fresh virtual environments, install your package into them, and then automatically run your tests, or mypy, or whatever else you want, in each fresh environment. The first four environments are built-in versions of Python that tox already knows about; their configuration goes in the [testenv] block, where you can see I install the dev requirements and then run a pytest command. The last two environments are not built into tox: you can define your own, so here I define my own flake8 and mypy environments, and their configuration goes in their own blocks. Since flake8 isn't a version of Python, just a command I want to run, I still have to specify which version of Python it runs on as the base environment; let's say I'll run mypy and flake8 in a Python 3.6 environment. Then I just install whatever dependencies are needed and run the command I want to run.

Once I have this configuration and have pip installed tox, it's as simple as running the tox command with no arguments. You can see it creating the package, installing setuptools and wheel, creating the py36 environment, installing my project's dependencies into that environment, and then running the tests. I'll speed through this portion. Okay, it finally finished, and everything passed: 3.6, 3.7, 3.8, 3.9, flake8, and mypy, so we get a nice little smiley face.
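For reference, a tox.ini matching that description could look roughly like this; the dev requirements filename and the exact commands are assumptions based on the narration rather than the repo's actual file.

```ini
# tox.ini: run the test suite in fresh environments for several Python versions,
# plus separate flake8 and mypy environments
[tox]
envlist = py36, py37, py38, py39, flake8, mypy

[testenv]
deps = -rrequirements_dev.txt
commands = pytest

[testenv:flake8]
basepython = python3.6
deps = flake8
commands = flake8 src

[testenv:mypy]
basepython = python3.6
deps = mypy
commands = mypy src
```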
But if there's one thing to take away from this, it's that the tox run took a long time, way longer than just running any of the tests individually. Why? When I run pytest by itself, my virtual environment is already set up; when I run tox, it creates a new virtual environment and installs everything into it, downloading from pip and installing from scratch, so it can take much, much longer than running pytest directly. In my own development cycle I run pytest all the time, because it's quick and easy; it's only when I'm getting ready to commit and push that I run tox locally on my own machine. If everything passes in all the different environments, it's probably safe to commit and push.

So tox lets me run all of my tests in a bunch of different environments, but it's all still on my own machine. How do I gain confidence that these tests will still pass on someone else's machine? This is finally where the automated part of automated testing comes in: I'm going to use GitHub Actions to automatically run tox every time I push to my main branch. The way you configure GitHub Actions is with, you guessed it, another configuration file. You create a .github folder at the top level of your project, a workflows folder inside .github, and a YAML file inside workflows. You can have many different files with many different actions; we're just going to put all of our testing stuff in one.

Give the workflow a good name; this is the name that shows up in the badge we'll put in the README. We use `on` to specify that this workflow should run on every push and pull request. The workflow has just one job, which runs the tests. We use the matrix strategy syntax to create all the combinations of environments we want: under strategy, matrix, os we have ubuntu and windows, and python-version has 3.6, 3.7, 3.8, and 3.9. All of these are pre-configured GitHub options; I can't just put in "ubuntu 1.0.0" or something like that, it has to be something GitHub specifically supports. You can find a link to all the supported options in the GitHub Actions documentation, but most likely you can just copy and paste from this project to get what you want. Note that the matrix strategy tries every combination of these values, so with two operating systems and four Python versions this creates eight test runs. Then you define the steps of the workflow: check out the repository, set up the version of Python given by the matrix, install the dependencies (upgrade pip, then pip install tox and tox-gh-actions), and run tox.

Now, I haven't said anything about tox-gh-actions yet, so let me go over that. GitHub already provides its own set of actions that sort of mimic what tox does, and that would be fine if all you wanted was to run your tests when you push to GitHub. But we also want to run our tests locally, and we can't run GitHub Actions locally, so we still want tox. Someone created the tox-gh-actions package, which makes GitHub Actions and tox work together as well as they possibly can. Basically, you just have to give GitHub a way to map its Python versions onto your tox environments, so we go back to our tox.ini and add a section specific to GitHub Actions.
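Putting those two pieces together, a workflow file and the extra tox.ini section along these lines would match what's described. The filename tests.yml and the action versions are assumptions; actions/checkout and actions/setup-python are the standard GitHub-provided actions.

```yaml
# .github/workflows/tests.yml: run tox on every push and pull request
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
        python-version: ["3.6", "3.7", "3.8", "3.9"]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install tox tox-gh-actions
      - name: Run tox
        run: tox
```

```ini
# addition to tox.ini: map GitHub's Python versions onto tox environments
[gh-actions]
python =
    3.6: py36, flake8, mypy
    3.7: py37
    3.8: py38
    3.9: py39
```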
In that section, all you have to do is list each GitHub Python version and the tox environments it corresponds to: for GitHub's 3.6 we want to run our py36 environment plus flake8 and mypy, for 3.7 we just do py37, for 3.8 py38, and for 3.9 py39. Hopefully, if I haven't made any typos, everything is set up, so let's commit our code and try a push.

Here I am at the GitHub repository, and now you'll see this little orange dot, which means our actions are running. If I click on it I see all these actions; let's go into the details. As you can see, all the tests are running, and in fact they appear to be running concurrently, which is good because it won't take as long. It's running the tests with tox, it's going through flake8, and a few of them have already started passing, so everything looks like it's running correctly. Let's examine some of the output. Going into the tox logs, we see the same thing: 14 items, 11 passed, one skipped, two expected failures. Everything passed just the same as it passed locally on my machine, so now I can have confidence that all of this works on Ubuntu and Windows and on all these different versions of Python.

Back on the main page of the repository, I now see the check mark showing that the last set of tests completed successfully. But if I look at the README, there's no badge telling everyone how cool I am, so let's see how to do that. Here I am in the README, and this is all the code you need to add in order to get a little badge with the check mark that says your build is passing. GitHub just provides everything for you; you only need to point at the correct link, which is github.com, then your username or organization, then the project name, then actions/workflows, then the name of the YAML file you used, and then badge.svg, in other words `https://github.com/<user-or-org>/<project>/actions/workflows/<workflow-file>/badge.svg`. The workflow just finished, and now you can see we have this tests-passing badge.

Just for fun, let's break a test and see what it looks like when it fails. We'll remove the expected-failure mark from the divide-by-zero test, which should definitely produce a failing case. Now everything has completed, and we can see that we had three failures; GitHub cancels the remaining runs once it notices they're all failing, so three of them failed and the rest were cancelled. We can go into the logs and see the ZeroDivisionError. On the front page of the repository we now see the red X showing that the build failed, and the badge shows the tests failing.

And that's all there is to it. Feel free to look at this repository yourself; the link is in the description. Thank you for watching to everyone who made it to the end. If you did, don't forget to slap that like button an odd number of times, and while you're at it, go ahead and slap subscribe too. I'd like to give a huge shout-out to my newest exponential-tier patron on Patreon, Dragos Crintea; I'm sorry, I don't know how to pronounce your name, that was the best I could come up with, and I hope you understand. I really appreciate the support. For everyone else, please consider becoming a patron on Patreon; it really helps me out. Thanks again for watching.
Info
Channel: mCoding
Views: 22,886
Rating: 4.9873352 out of 5
Keywords: python, testing, ci/cd
Id: DhUpxWjOhME
Length: 27min 6sec (1626 seconds)
Published: Sat Sep 11 2021