Hello, it's Duncan. This week we return to comparing the http4k and Ktor libraries, specifically looking at throughput in requests per second. I'd like to be able to say "stay tuned to see which is faster", but the truth is that, on my laptop at least, both seem able to route and process requests faster than the operating system is able to accept them. So instead, stay tuned to see some interesting benchmarking code that should come in handy next week. And the answer is 1,900 requests per second.

Well, since all the fun with Gradle, I have actually got our HTTP framework test project running with Kotlin 2 here, and with a JDK, let's have a look, a jvmToolchain of 21. One final bit of Gradle madness I'm not able to work out is that this kotlinVersion here, which is the one set in this gradle.properties, is 2.0.0, and we're also using 2.0.0 for the Kotlin plugin serialization here. So it would make an awful lot of sense to use kotlinVersion in there, I think, but that just doesn't work. The reason being, if I hover: "'val kotlinVersion: String' can't be called in this context by implicit receiver. Use the explicit one if necessary." I have no idea what that means. You can see I've even turned on the receiver hints here so I can see what it's saying, but I just can't see why I can't reference that. Anyway, I'll put up with that, and let's go and look at ThroughputTests.

In a previous episode, what seems like a long time ago, but I will link to it somewhere, we wrote these requestLotsHttp4k and runLots functions, the job of which is to throw lots and lots of concurrent requests at a server. I used that in this throughput test, where we create some orders and some customers, we build some http4k routes for those orders and customers, and then we create a request that's able to go and fetch on that route. We have a method that allows us to check the response, which basically says we should be getting back an OK, and we call this requestLotsHttp4k with a request count, which is 1,000 requests, and this handler.
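As a rough idea of what such a harness involves (this is not the project's actual code: the pool size, the makeRequest parameter, and the return shape are all assumptions for illustration), firing a fixed number of concurrent requests and collecting failures might look like this:

```kotlin
import java.util.concurrent.ConcurrentLinkedQueue
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Fire requestCount concurrent requests, returning how many succeeded
// together with any errors thrown. makeRequest is a stand-in for the
// OkHttp call the real code makes.
fun runLots(requestCount: Int, makeRequest: () -> Unit): Pair<Int, List<Throwable>> {
    val errors = ConcurrentLinkedQueue<Throwable>()
    val pool = Executors.newFixedThreadPool(64)
    repeat(requestCount) {
        pool.execute {
            try {
                makeRequest()
            } catch (t: Throwable) {
                errors.add(t)
            }
        }
    }
    pool.shutdown()
    pool.awaitTermination(1, TimeUnit.MINUTES)
    return (requestCount - errors.size) to errors.toList()
}
```

Timing the whole thing and dividing by the elapsed seconds gives requests-per-second figures like the ones quoted in the episode.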
We say we want to run under Jetty; this is the request that we want to make, and this is how we check the response. In here, requestLotsHttp4k, we create a server according to the type of server we asked for here, we start it up, and then we use OkHttp to throw a lot of requests at it. So if I run this, you can see 1,000 requests, it's quite fast, and we made 1,000 requests in 0.5 of a second, at 1,800 requests per second, and this little bit here is a summary of any errors we got, and so there were none. There's a little bit of irritation that we're starting the server here on 8080 and that has to match this 8080 here, but that's fine.

It does remind me, though, that I did have an interesting bug, which is that if we get any errors in here (let's pretend, in fact, that there was an error in here) and run this, then everything goes away, basically forever. It would be quite a time to debug this, but it was my fault. Let's have a look. It's caused, in fact, by this while (true) in here. We put this in an attempt to retry HTTP requests that failed, but obviously we should probably never have a while (true) in production code, especially in cases like this where we might always get some sort of trouble.

So what should we do about that? Well, we should either set a retry limit or a retry time. I think a retry limit might be easier, so we could say for in here, and counting down is easier, so let's say 5 downTo 0. Now, if we use that as the body of the loop, if we get an exception we can still add it, but we can say: if i == 0 then we want to break out of this altogether. If we do, we don't actually have a result to return, so it had better be an exception, and maybe we could throw TooManyRetries, giving it the last issue that we had. What would TooManyRetries look like? Well, if we create a class in here, you can see we know it needs to be a Throwable. I think probably it should be an Error, which is the kind of Throwable we shouldn't normally catch, and this is our cause that we can pass into the cause of an exception.
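A minimal sketch of the bounded retry just described, assuming a hypothetical action to wrap; the real code retries an OkHttp call, but the shape is the same:

```kotlin
// Error rather than Exception: the kind of Throwable callers shouldn't
// normally catch, carrying the last failure as its cause.
class TooManyRetries(cause: Throwable?) : Error("Too many retries", cause)

// Counting down makes the last-attempt check a simple i == 0.
fun <T> retrying(retries: Int = 5, action: () -> T): T {
    var lastFailure: Exception? = null
    for (i in retries downTo 0) {
        try {
            return action()
        } catch (e: Exception) {
            lastFailure = e
            if (i == 0) break // last attempt used up, stop retrying
        }
    }
    // We only get here once the loop has counted down to zero
    throw TooManyRetries(lastFailure)
}
```

With this shape there is no unreachable branch to worry about: either the action eventually succeeds and we return, or the countdown runs out and we throw.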
Maybe we should give it the message "Too many retries". Oh, messed that up, of course; just check what behaviour that has. Ah, and we do need a return here somehow. Now, we know that this will always count down to zero, so we know that if we were to get here, that would be an error; I'm going to say "bad retry logic". Try that, and now, with our error, we get TooManyRetries, and that's caused by this IllegalStateException, which was our "hello", which came from our, where is it, here. So if I take that out, we should now be good. If this runLots were production code rather than in our test tree, I think I might want some tests around it itself.

So let's just remind ourselves how quickly that ran, and the answer is 1,900 requests per second. Well, that's not bad, but I don't think the JVM has had time to warm up, so let's repeat it a few times. The simplest way to do that is with @RepeatedTest rather than @Test, so we say @RepeatedTest and say we want to run it 10 times, and try that. See, every time we're starting up a new server; that's okay. Let's have a look at our output. Okay, so @RepeatedTest was convenient, but it's not very easy to decipher what went on. We can see that we were getting errors here, connection reset by peer, and we can also see that we got some quite large numbers in here compared to our first one, which was 2,000-odd.

So instead of running these repeats as individual tests, let's put them into a single test. We'll return this to just plain old @Test, and now if we say reports, then we can take a range, say 1..10, map that, and we will get a list of reports. And now we've got the list of reports, we can say reports.forEach; each one of those things is the report, which means that we can do that with it. Let's have a look. So now we have a whole lump of our servers, and then when we're done here, we go down to the bottom of here. Okay, so you can see that we had 2,000 requests a second, and then 4,000, dropped back down to three, then eight, then 18,000, and then we started getting errors. So this looks like the effect
of HotSpot: because we've called the code paths a lot, it has optimised the code in those paths. And this, I think, is the effect of the Mac TCP stack really not coping with the volume of traffic we're giving it. Now remember that when we get these errors we retry, and that will drop our throughput down. Last time we were running this code, we discovered that if we backed off for 30 seconds or so, then these errors would go away. I don't honestly know what to do for the best with them. What I think I might try is just leaving them in there and seeing what the highest number we get here is before we start getting errors. I should say at this point, by the way, that we're effectively just testing the server here: we're starting with an empty list of customers, and we're seeing what happens with an empty list, so there's no serialisation going on, there's nothing at all really, just routing.

While we try and decide what to do for the best, let's reproduce this test with Ktor. So I can rename this one to testHttp4k, okay, and let's duplicate it as testKtor. Now obviously we don't want to be calling requestLotsHttp4k; we want the equivalent for Ktor, so let's duplicate this as well and think about what it would look like for Ktor. We say requestLotsKtor, okay; we would have a request count. What about an HttpHandler? Well, for http4k we take the handler and map it to a server. The equivalent thing for Ktor: if we look in our main Application here, we create an embeddedServer with a particular type of base server, or whatever you want to call it, on the outside, and this module here seems to be the equivalent of a handler. So I think I'm going to copy this, go back to ThroughputTests, and put that in there, and this module then is an Application.() -> Unit. So let's change this handler to be Application.() -> Unit, and then I think we might be able to say, if I cut that out of there (I'm sure there's a better refactor for this), that this is the handler, if that were a Ktor Application, and then that would let me put it in there.
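To make the shape difference concrete: http4k's HttpHandler is a plain function from Request to Response, while a Ktor module is an extension function that configures an Application in place. Here's a toy illustration with stand-in types (this Application is not the real Ktor class, just enough to show why the type is Application.() -> Unit):

```kotlin
// Stand-in for Ktor's Application: a module doesn't return anything,
// it mutates the application it's applied to.
class Application {
    val routes = mutableListOf<String>()
    fun get(path: String) {
        routes.add(path)
    }
}

typealias Module = Application.() -> Unit

// The "handler" we'd pass to a requestLotsKtor-style function
val module: Module = {
    get("/customers")
    get("/orders")
}
```

Applying the module to an application registers its routes by side effect, which is why the equivalent of http4k's handler parameter has the type Application.() -> Unit rather than returning anything.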
So we could say requestLotsKtor, where the handler is that thing, not that. In fact, we can make that there and call it handler, and then we need to tell this its type, which is Application taking nothing and returning, oh my goodness, remind me... Unit, because it mutates our Application. Right, not a very satisfactory refactoring so far, but let's go back down here. So that's our embeddedServer, and the http4k version we were starting, so let's just do that there, and we can start it. With http4k we were creating a server and then using it; I wonder whether we can do .use here. No, it looks like the server isn't Closeable, so let's remember the server and we'll start it, and now, instead of this use block, I think we can just basically say try, and then finally server.stop(). So this will start up our server, and then we stop it here. Our serverConfig has become unused; that's equivalent to this Netty here, so let's create a parameter for it. Yes, that's Netty. Let's move it up. Where's it gone? These windows are always the wrong size for some reason. Take that, move it up to be the same as serverConfig, and then remove the serverConfig. And I suspect that we don't really want this to be a Netty necessarily; we want it to be some other type that I'm not sure I can be bothered to find.

Okay, so we're still using http4k's Request and Response here, in order to use http4k's OkHttp client. I'm not sure I mentioned in my previous comparison that one of the joys of http4k is that the clients use the same abstractions, this Request and Response, as the server does. In contrast, our custom-route http4k tests here use the handler to process a request directly, whereas the equivalent in Ktor has a client here with a completely different API, so we say client.get and so on, rather than creating a GET Request and passing it to the client. It's a small thing, but I certainly prefer the http4k version. Anyhoo, coming back here, we can now try and run testKtor and see how we get on. The server starts up... hmm, I wonder where my output has gone. Okay. Ah!
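Since the embedded server isn't Closeable and so can't be passed to use, the start / try / finally-stop shape can be captured once. A sketch, with a hypothetical Server interface standing in for the real server type:

```kotlin
// Minimal stand-in for a server we can start and stop
interface Server {
    fun start(): Server
    fun stop()
}

// Start the server, run the block, and always stop the server afterwards,
// even if the block throws: the same guarantee use gives for Closeables.
fun <T> Server.whileRunning(block: () -> T): T {
    start()
    return try {
        block()
    } finally {
        stop()
    }
}
```
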
Hold on, kill that. I think in here we said wait = true, but I think we don't want to wait there, so that we can carry on going down to this code. Go again. Ah, that's a bit better. Okay, we're getting a lot of debug output there, and that's going to slow things down quite a bit, I should think. Let's see what our result actually was. If we scroll to the bottom, goodness me, yes, there was a lot of that, okay, here we go. So we managed 3,300 before we started getting errors; in fact it was even quicker than that, including the retries. I think, though, we should get rid of all this trace, and no doubt there's an XML file somewhere in the world that affects logging. Let's have a look. Resources... ah, logback, there we go. So I think if we just took the appender off altogether... We still seem to have quite a bit of INFO at startup, but we have lost all the logging for the routing, by the look of things. And here we are: we've managed 5,000-odd requests per second, in fact 6,000 here, before it started going wonky.

Let's pin that one and go and run the http4k version, to remind us how that was doing. So that's this one here; run that; okay, there we go. And how did it do? Well, we got up to 9,000 or so, which seems a bit higher, with the same sort of issues here with connections being reset by peer; and in fact now we also get connection reset and broken pipes. Let's put these side by side and think about how to compare them. Now, in both cases things start off slow and speed up as HotSpot gets involved. As they speed up, or maybe just as the total number of connections we've made increases, sooner or later we start getting errors, in this case only a very few, but later on the errors seem to get worse and worse. The peak here is higher for http4k. It's hard to know what to say about the average: these numbers here are higher for Ktor, but http4k was getting more retries there, which is going to lower its numbers. I don't really know how to treat this sort of thing statistically. What we really want is warmed-up runs.
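One way to encode the back-off idea mentioned earlier (after any run that reports errors, sleep for a while to let the TCP stack recover before the next run) is to make the policy a small function. The Report type, the run numbering, and the injectable sleep are all assumptions for illustration, not the project's actual code:

```kotlin
// Stand-in for the test's per-run report
data class Report(val requestsPerSecond: Int, val errors: List<String>)

// Run runOnce `runs` times; after any run that saw errors, back off
// before the next run. backoff is injectable so tests needn't sleep.
fun runWithBackoff(
    runs: Int,
    backoffMillis: Long = 60_000,
    backoff: (Long) -> Unit = Thread::sleep,
    runOnce: (Int) -> Report,
): List<Report> =
    (1..runs).map { n ->
        val report = runOnce(n)
        if (report.errors.isNotEmpty()) backoff(backoffMillis)
        report
    }
```

Making the sleep a parameter keeps the policy in one place for both the http4k and Ktor tests.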
Without errors, that is. But once we've warmed up, it seems we start getting errors. What's not clear to me, I suppose, is whether it's a cumulative effect that gets the errors, or whether we just start getting errors as soon as we get above a certain rate.

Okay, while I think, I'll just refactor. Let's first of all pull this out as common code from both of those, and the same with checkResponse, so it shows that we're doing the same sorts of things. And now, if I make this lambda into a value, then I think I can take this thing and make it into a method. So this is going to be, I'll call it doIt, and yes, there is another one, thank you. So now these both look the same: they both call doIt, where this is generating the report by actually making all the requests. So I think if we refactor this to pull that down, then this is going to be maybe doRun, or maybe just run, I don't know what; yeah, that'll do. Right, so now we can inline that, that now comes out of parentheses, and now we can work in here to decide a common policy for what to do when we start getting errors. What I think I'm going to do is make this into a lambda. Can I do that? No. Okay, but I can say that instead of that we'll say run it like that, and now we can say that this is the report that we need to yield back to the block. But now we could say that if report.errors is not empty... what I wonder is whether we should just wait around, because what we found last time was that sleeping for a bit helped clear the errors. So I'm going to give it 60 seconds of sleep, and maybe now we can run both tests and see how we get on. Let's run that, move this back over here, and, I think, yes, that is running there, so let's just wait.

Well, that was taking so long that I gave up on it. What I'm going to do, instead of reporting everything... well, I will report everything down here, but I'm also going to print that in here, where that is... oh well, I want the whole of the report. Let's do that. That will let me see how many errors I'm getting as we go along. It's not nice, but it's a thing. So here we have Ktor warming up, ooh, up to 8,000 requests a second; now we start getting errors, so now we'll back off. I'll go and make a cup of tea.

Okay then, here we are, let's have a look. This is http4k, and it's a bit confusing because we're showing them twice, but here is the test run. You can see we got up to 14,000 requests per second, but even waiting 60 seconds after we start getting errors here isn't enough to stop us getting errors here. Having a look at Ktor: well, the peak isn't so high, though we did get 11,000 there. I think, given the errors, maybe all we can say about the two frameworks in this case is that they're both too fast for the TCP stack that they're sitting on top of to keep up with. I have a sort of vague impression that http4k is faster, especially as its numbers are high despite the retries, but I'm certainly not going to claim that that is statistically significant: a phrase that's difficult to say when you have teeth like mine.

So, having established that we can go too fast, I suppose we should look at the behaviour when things bog down, which is to say when downstream systems are slow and the libraries are having to wait before they can respond. That seems, though, like a topic for next week. So if you'd like to see that, where I guess once again we'll be comparing threads with coroutines,
then please subscribe to the channel and click the thumbs-up, so that my likes-per-day rate increases. And you can increase my sales-per-month rate by buying the book that I wrote with Nat Pryce, called "Java to Kotlin: A Refactoring Guidebook", details of which are in the show notes below. Thanks for watching.