Regression Testing with Touca - Pejman Ghorbanzade

Captions
My name is Pejman, and my email address is pejman@touca.io. This is our website, and you can try Touca on GitHub, where all our repositories are. A little story here: I was a software engineer for five years, working at a medical software company on a product for visualization of medical images. We had CT scan and MRI data sets; at the end of the day there were a bunch of files of up to 20 gigabytes in size, and we wanted to visualize them in 3D and make interactive objects so that physicians could actually interact with the data set. We had six million lines of code, and when I joined I was assigned to a project to refactor half a million lines of it, together with other people, of course. Because we were building medical software, it was very important to do the refactoring in a way that did not impact the overall behavior of the product. This was very challenging to me, so I started a side project. I worked on it for three years, and because I was so obsessed with it, I founded this startup called Touca; I have now left my job and am the founder of the startup.

So what is Touca? Touca wants to give developers like us a way to actually see the side effects of their code changes as they are writing code. The idea is that instead of waiting, after you push your code, for someone else to test it and tell you "oh, this is failing this particular test case" or "you missed this corner case", feedback that usually takes two to three days to reach engineers, we want to give engineers real-time feedback. Imagine that in your editor you start writing code, and we take your code changes, find the test cases relevant to that change, run the tests for those cases, and give you the feedback right in the code editor.

What we have built so far to implement that vision is what you see on your screen. I have created a suite on the Touca cloud service.
The service, which is at app.touca.io, gives me an API key and an API URL that I can use to submit test results. Now, to implement the vision I mentioned, where you make code changes and we immediately run your code, we need to fundamentally reconsider how people test their code. Here is a simple piece of software: imagine there is a parse_profile function that takes the username of a student and looks that student up. It can be as complex as we want, and because it is production code, its implementation may change from one version to another. If you are curious, right now it is just a dictionary: we look up the username and return the student object. The returned student object has a username, a full name, a date of birth, and a set of courses.

Now, if we wanted to test this with unit testing, which is very popular and very effective, we would write something like this: we call our function with a username, get the output, test its different properties, and list the expected value for each one. It is very basic, and I am sure most of you know it, but the issue, in my opinion, is that if you want to test this function with hundreds or thousands of test cases, describing the output becomes very difficult: hard-coding the expected values for every field of every test case is just too much of a hassle.

Instead, we take a different approach. We write what we call a Touca test: I import the touca package and create a workflow, saying I am going to call this parse_profile function with whatever test cases I am given, and these are the fields I care about. I am not specifying the inputs to my test, and I am not specifying the expected values; I am just describing how I want a given version of my code to be tested.
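A minimal sketch of the function under test and the unit-testing approach described here. All names and sample data are assumptions for illustration; the talk does not show the actual implementation:

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class Student:
    username: str
    fullname: str
    dob: date
    courses: List[str]

# Hypothetical lookup table standing in for the real implementation,
# which may change from one version of the software to another.
_PROFILES = {
    "alice": Student("alice", "Alice Anderson", date(2006, 3, 1), ["math", "biology"]),
}

def parse_profile(username: str) -> Student:
    """Look up a student by username; as simple or complex as we want."""
    return _PROFILES[username]

# The unit-testing approach: call the function once and hard-code
# the expected value for every field of the output.
def test_parse_profile():
    alice = parse_profile("alice")
    assert alice.fullname == "Alice Anderson"
    assert alice.dob == date(2006, 3, 1)
    assert alice.courses == ["math", "biology"]
```

This is exactly the pattern that stops scaling: with thousands of usernames, each one needs its own block of hand-written expected values.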
So what I did was copy that API key and API URL and export them as environment variables, so that I do not have to specify them on the command line. Now I am just going to run my test: I pass a test-case file telling the framework to use these three student usernames, give it version one, and run it. You see that the test framework passes these test cases one by one to my software and submits the results to the platform. If I now click on version one, I see information about the test cases I have submitted. They are shown with a spinner icon because they are being compared right now, and once they are compared I can click on them and see the captured values for each version. If I then go and make changes to my implementation, I can come back and see the differences: the platform compares everything and shows the differences in real time. The idea is for this kind of test framework to be flexible about its test cases, so that you can run it with as many test cases as you like.

Because I have so little time, let me show you a bunch of test cases that I have already submitted for different versions of this same software. You see that they are shown with different icons, to say that version five, which I submitted, has some differences against version two, the baseline, which is shown with a star. If I click on it, I see which test case was different, and I can click on that to see all the captured variables. In addition to the variables that describe the behavior of my software, I can see the performance of the different functions of the code under test, and here the platform automatically compares that performance against version two.
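The Touca test described here might look roughly like the following sketch. The workflow decorator and the check/assume calls follow my understanding of the Touca Python SDK; the function and field names are assumptions carried over from the example, and the import falls back to a no-op stub so the sketch runs even where the SDK is not installed:

```python
from dataclasses import dataclass
from datetime import date

try:
    import touca  # Touca Python SDK: pip install touca
except ImportError:
    # No-op stand-in so this sketch runs without the SDK installed.
    from types import SimpleNamespace
    touca = SimpleNamespace(
        workflow=lambda func: func,
        assume=lambda key, value: None,
        check=lambda key, value: None,
    )

@dataclass
class Student:
    username: str
    fullname: str
    dob: date
    courses: list

# Hypothetical stand-in for the production code under test.
def parse_profile(username: str) -> Student:
    return Student(username, "Alice Anderson", date(2006, 3, 1), ["math", "biology"])

@touca.workflow
def students(username: str):
    # No inputs and no expected values are hard-coded here: the test
    # framework feeds in each test case, and we only declare which
    # fields of the output we care about.
    student = parse_profile(username)
    touca.assume("username", student.username)
    touca.check("fullname", student.fullname)
    touca.check("birth_date", student.dob)
    touca.check("courses", student.courses)

# With the real SDK, the module would end with an entry point such as:
# if __name__ == "__main__":
#     touca.run()
```

With the real SDK installed, the suite's API key and URL shown on app.touca.io would go into the TOUCA_API_KEY and TOUCA_API_URL environment variables, and running the module with a revision and a list of test-case usernames submits the captured results; the exact flags follow the SDK's test runner and are worth checking against its documentation.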
I can also change this comparison, so that I compare version five with some other version if I like. Now imagine that I have made all these inspections and determined that version five, even though it is different, actually works the way I expected, and that from now on I would like the software to behave as it does in version five. I can just go ahead and promote it, and I can even write a message to my colleagues explaining why I think this is the right choice. Once I click promote, the platform automatically starts comparing version five against the new baseline, which happens to be itself, and it sends an email to all the software engineers on my team saying that I have promoted this version. So the idea is to give you a way of tracking the changes in the behavior of your software, and of course you can integrate this with GitHub Actions or another CI system to make it completely continuous and automated. There is so much more to Touca, but I have run out of time, so thank you so much for listening to this talk. Again, if you have any questions, or criticism of the way we are approaching software testing, please reach out to me; I would love to talk to you. Thank you.
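The continuous setup mentioned at the end could be sketched as a GitHub Actions job along these lines. The file name, Python version, test module name, runner flags, and secret names are all assumptions, and the real key and URL would come from repository secrets rather than being written into the workflow:

```yaml
# .github/workflows/touca.yml (hypothetical file name)
name: touca-regression-tests
on: [push]

jobs:
  touca:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install touca
      # Submit results for this commit; the platform compares them
      # against the promoted baseline and notifies the team of any
      # differences in behavior or performance.
      - run: python students_test.py --revision "${GITHUB_SHA::7}"
        env:
          TOUCA_API_KEY: ${{ secrets.TOUCA_API_KEY }}
          TOUCA_API_URL: ${{ secrets.TOUCA_API_URL }}
```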
Info
Channel: SF Python
Views: 184
Rating: 5 out of 5
Keywords: SF Python, Pejman Ghorbanzade, Touca
Id: oBHnLIfI_MI
Length: 8min 53sec (533 seconds)
Published: Tue Sep 14 2021