Ruff vs. Flake8 vs. Pylint: SPEED TEST

Captions
In my last video I introduced you to a program called Ruff. It's designed to be a speedy alternative to the likes of Flake8 and Pylint, and I claimed in that video that it was the future of Python development. I still stand by that statement: I think it is the future of Python development in terms of code quality assurance. However, while I showed you how to set it up, how to configure it, and some of the things it can do in terms of some of the more obscure codes, one thing I didn't do was show how fast it is compared to other linters. That's because I wanted to do it in a separate video where I could take my time, actually set up these tests, and do it properly, and that's what we're doing in today's video.

We have a few tests lined up: one between Ruff and Flake8, and two separate ones between Ruff and Pylint. We're going to start with the Flake8 one. I'll go over the specifics of each test when we get to it, but one thing shared across all of them is that we are linting the TensorFlow source code, specifically the Python files. TensorFlow is one of the bigger codebases that I know of: it has 1,136,213 lines of code at time of recording, so there's quite a lot to go through. It's a very extreme example, and it will show off the linters very well.

So, starting with the Flake8 test. Because Ruff is still in development, it is not as capable as either Flake8 or Pylint, so for the sake of equality between the tools, and to make sure Flake8 and Pylint don't get a massive disadvantage because they're checking more than Ruff is, I've actually gone through and selected everything manually, so they will all be checking exactly the same things when they run. We're going to be using the time command, on zsh specifically, to measure these, and I'll have the commands themselves in the description.
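For reference, the kind of invocation being measured looks roughly like this. This is a hypothetical reconstruction, not the exact command: the precise select/ignore lists are the ones in the video description, and it assumes a local `tensorflow` clone with flake8 installed in the active virtual environment.

```shell
# Hypothetical sketch of the timed Flake8 run; exact code lists are in the
# video description. "--extend-ignore" drops checks Ruff has not implemented,
# so both tools check the same rules.
time flake8 tensorflow \
    --select=F,E4,E7,E9,B0,B904,B905 \
    --extend-ignore=F723,F704   # plus the other codes not implemented in Ruff
```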
In this case, we're selecting F, E4, E7, E9, B0, B904, and B905, and we're ignoring F723, F704, and all the others that aren't actually implemented in Ruff, so we're not asking Flake8 to do them either. If we run that... we get an error, because I copied it wrong. That's a good start, isn't it? Let's redo this. There we go, and now it will run. And there we go, it's finished. It's given us an awful lot of errors, but you can see from that bottom line down there that we have a time of 56.178 seconds to run all of that.

Now we switch over to the TensorFlow clone for Ruff. This is an entirely new clone of the repository and an entirely new virtual environment, and we run this command instead: we select F, E4, E7, E9, and B, which is the Bugbear stuff, and we ignore F842, which for some reason is implemented in Ruff but not in Flake8, which is weird. If we run that... I haven't installed Ruff, which would help, and now we've got to wait for the installer to shout at me, because that's always fun. Now we can actually run it, and it's already done. You can see the difference there. You can also see that TensorFlow has quite a number of errors in it, but we don't need to talk about that right now. Ruff found 6,347 errors in just 3.452 seconds. I will do the maths in editing and give you the exact figure, but doing some very basic maths in my head, that is very fast.

One thing I do want to note is that if we run this again, it runs significantly faster: Ruff now finishes in 0.927 seconds. If we go back to Flake8 and run that one again, we'll see that it also runs significantly faster, and that's because all of the bytecode has already been compiled, so Python doesn't need to do that again; it's all magic with caching. But even then, Flake8, as you can see, is still quite a bit slower: 19.2 seconds this time around. So Flake8 does get a lot quicker when you introduce caching.
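The Ruff side of the comparison can be sketched like this. Again this is an approximation assuming a fresh `tensorflow` clone; `ruff check` with `--select`/`--ignore` is the invocation the video uses.

```shell
# Hypothetical sketch of the timed Ruff run on a fresh clone.
# F842 is excluded because Flake8 has no equivalent check.
time ruff check tensorflow --select F,E4,E7,E9,B --ignore F842
```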
But Ruff is still so much faster. Now it's time for the other experiment: Ruff versus Pylint. According to Ruff's GitHub page, Pylint is significantly slower than Ruff, so this will be an interesting one. It's also much more of a pain to actually obtain all the codes; you'll see the commands are complete and utter nightmares. It took me maybe half an hour just to build these commands, because we're doing it properly. We're doing science, and we're doing it properly. We're starting with Pylint, for dramatic effect of course.

Pylint works a little bit differently. We do time pylint, and then we actually have to supply the module, which is why I've been cloning the repository for all of these, and then we copy and paste all of this in. First we're disabling all the checks, and then, going off a GitHub issue (I think it's 940, I could be wrong about that) which details what of Pylint is implemented in Ruff, we enable only the codes that are implemented in Ruff one way or another. If we run that, we'll get some configuration warnings, and then I will cut, like I did in the Flake8 one, until it's done, because I have a horrible feeling this is going to take some time.

As you can see, I've done it properly. One of these codes might be slightly wrong, there was a lot to go through, so if one is slightly wrong then sorry, but I've tried to be as fair as possible. One thing I will say, and I've decided I'm not going to cut this, is that there were a few ambiguous ones: a few that Ruff said were implemented but didn't actually provide the code that Ruff uses in those instances. In those cases Ruff might check for something that Pylint won't. Any ambiguity has been handled in Pylint's favour, because I feel like it kind of needs it. And there we go: we have a time of 2 minutes, 39 seconds, and 88 milliseconds. That is really quite a long time.
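The Pylint invocation described above has this general shape. The full enable list is elided here deliberately; it's the one in the video description, built from the tracking issue on Ruff's GitHub.

```shell
# Hypothetical shape of the timed Pylint run: disable everything, then
# re-enable only the checks Ruff also implements (full list in the
# video description).
time pylint tensorflow \
    --disable=all \
    --enable=<the Ruff-implemented checks>
```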
Bear in mind that Pylint does have the option to parallelize its operations, and we haven't specified that so far, unless the default is zero, in which case we have specified it and Pylint just looks even worse. I will double check that before we run the next test, because the next test is full parallelization. But first we need to go back to Ruff. I have completely re-cloned the repository, and there's a real chance I might need to reinstall Ruff; I don't think I do, because that should be saved. So we do time ruff check tensorflow and then provide all of these options. I'm not going to go through them all at once. These are purely PLR and PLW, the Pylint-derived rules; I've just activated them all, so there are a few extra ones in there. All of these other options are things that Pylint checks which are already implemented elsewhere under other codes, outside of the Pylint-derived rules. If we run that... oh, E510 doesn't exist, that was my bad. It turns out E510 is actually E501; whoops, that's been remapped. And we've finished already: 76,000 errors this time. Well, if the TensorFlow team ever decide they want to fix their stuff, they've got a lot of work to do. Either way, that took 2.5 seconds, as you can see. Pretty quick. Ruff is a lot faster than Pylint, even more so than it was against Flake8, which is kind of nuts. It's worth doing the cached run as well: when cached, Ruff does it in 1.57 seconds.

If we switch to the TensorFlow clone for Pylint and run the command again, after already having run it once, it should take a lot less time, but I guess we'll see. It's still not quick. It might be quicker, but it's not fast at all: 2 minutes 33. Is it even any quicker at all? I'm not sure it is. Oh, that's bad.

And now, on to our final test. I did a double check in the help, and it turns out Pylint is not parallelized by default, so I was just running it on one job.
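The Ruff command for the Pylint comparison can be sketched as below. The exact list of overlapping codes is the one in the video description; E501 is shown only as the example the video itself mentions.

```shell
# Hypothetical sketch: enable Ruff's Pylint-derived rules (PLR, PLW) plus
# the overlapping checks Ruff implements under other codes, such as E501.
time ruff check tensorflow --select PLR,PLW,E501   # plus the other overlapping codes
```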
Pylint does offer the ability to do more, though, so I felt it only fair to test it with that. That's the -j option, and if you pass it zero, Pylint will choose how many jobs it needs automatically, which I think is probably the best way to go for this test. Oh, I don't have Pylint installed, that would probably help, wouldn't it, because pip takes its time with these things. Right, off we go. It doesn't spam your logs as it goes, because it's splitting the work out across the different jobs; it will give you everything at the end, apparently. So we'll see how much faster this is.

While we're on the topic of parallelization, I actually don't know if Ruff parallelizes by default. If someone does know, then do let me know, because I couldn't find anything in the docs or README to say. We do get a lot of warnings here, because there is a Pylint version 3 coming out soon. Okay, and it's done, and it turns out parallelization does speed it up ever so slightly. I don't know how many cores it used, but it is quicker: look at that, a minute and 55 now. So it's only like 70 times slower than Ruff instead of, what, 85 or something? I don't know; the numbers will be on the screen properly, and I'll do a little summary at the end once I've gone through, edited, and verified all the numbers.

But yeah, I think it's plainly obvious that Ruff is a lot quicker than both Flake8 and especially Pylint, and that does matter. These cached-versus-uncached tests matter as well, because locally everything is going to be cached: the pycache is going to exist. If you're on CI, chances are you're having it check out the code every single time you run the job, meaning you'll be on the uncached version.
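The parallel Pylint run described above combines the same rule selection with the -j flag; as before, the enable list is the one from the description and is elided here.

```shell
# Parallel Pylint run: -j 0 lets Pylint pick the number of jobs
# automatically based on the machine.
time pylint tensorflow -j 0 --disable=all --enable=<the Ruff-implemented checks>
```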
You'll actually be taking the raw results rather than the cached ones, and Ruff is even faster relative to its competition in that sense. Especially if you're paying for CI minutes, it will save you quite a bit of money. Obviously, if you're doing personal projects on GitHub it doesn't necessarily matter, but it's still nice not to have to wait as long. So I will cut to me in editing to give a little summary of the results, just so we have them all in one place and can compare them a little more easily.

I've looked over all the footage (I've actually already edited it all), and this is just a little summary section to work out how much faster, relatively, Ruff is than the other libraries, because I feel as though it was quite obvious during the actual recording that it was faster, but not exactly how much faster. So, using maths and science: in our first test, we compared Ruff to Flake8. On the first run, Ruff was 16.3 times faster. Weirdly enough, once everything was cached, Ruff was even faster relatively, at 17.8 times faster. Both libraries were a lot quicker this time around, because they both cache extensively, but Ruff was just that much faster, even though Flake8 did shave a solid 40 seconds off its previous time.

Pylint was a much sorrier story. In its original test, the first-run test that is, Ruff was a very nice 69.5 times faster than Pylint, and caching just made things worse: apparently Pylint doesn't really cache at all. Pylint was slightly faster on its consecutive run, but that didn't stop Ruff being 110.1 times faster than it. To put that into context, Ruff only advertises that it is at most 100 times faster than other linters, and in our little test here we've shown it is 110 times faster, which is kind of nuts. The parallelization test for Pylint was a little bit better, about 40 seconds quicker, but Ruff was still 52.6 times faster, even with all the parallelization.
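The speed-up figures are just ratios of the measured wall-clock times. As a quick sanity check, here is that calculation for the first-run Flake8-versus-Ruff pair using the times quoted on camera; note the spoken times are rounded, so ratios computed this way for the other pairs can differ slightly from the verified on-screen summary figures.

```python
def speedup(slower_seconds: float, faster_seconds: float) -> float:
    """How many times faster the second tool is than the first."""
    return slower_seconds / faster_seconds

# First-run times quoted in the video (seconds).
flake8_first_run = 56.178
ruff_first_run = 3.452

print(round(speedup(flake8_first_run, ruff_first_run), 1))  # → 16.3
```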
Again, I don't know if Ruff does parallelization on its own; if someone does know, please let me know. But you can see these results are quite extreme. Flake8 was already one of the faster linters, and Pylint was the slowest, so you're not going to get figures more extreme than that. It's worth keeping both sets of scores in mind, because, as I said before, in CI you're going to be looking at first runs, and locally you're going to be looking at cached runs, so all of these numbers matter. The commands I used are in the description below for those of you who want to reproduce these results; feel free to use those same commands. I've been using Python 3.11.2, with Flake8 version 6.0.0, Pylint version 2.17.0, and Ruff version 0.0.254.

Of course, if you liked this video, consider liking it to let me know, and maybe consider subscribing if you want to see more videos like this. If you have any questions about what we've seen here, or any ideas about what you want me to do in the future, make sure to leave a comment below; I read them all, and the feedback is greatly appreciated. If you want to support this channel monetarily, you can do so in one of two ways: the first is by becoming a member using the join button, and the second is by becoming a Patreon supporter, link in the description, and you'll be on the screen like these people. I will see you in the next video, where we tackle metaclasses, which are one of Python's weirdest and most confusing traits. I'll see you for that.
Info
Channel: Carberra
Views: 2,818
Id: o57IWZTM6fk
Length: 15min 26sec (926 seconds)
Published: Mon Mar 13 2023