Headache-Free Reactive Programming With Spring Boot and Kotlin Coroutines

Captions
Okey dokey, yeah, you already got a short introduction, but just to recap: I'm Urs Peter, the senior software engineer with the gray hair. In my daily job I try to be as much as possible with my hands in the dirt. With gray hair that's always a bit of a challenge, because people expect you to do more managerial and lead stuff, but I fight hard to be able to code on a daily basis. You also see me often on stage, and I'm a trainer at Xebia Academy, where I foremost teach Kotlin. I'm also a JetBrains-certified Kotlin trainer, and I help people in all stages of their Kotlin adoption, mostly server-side: beginner, intermediate, but also advanced.

When you look at my career: I started with Java about 10 years ago, moved over to Scala, and for quite a few years now I've been doing Kotlin. Throughout all these years, starting back with Scala, I did stuff with reactive programming. I had a lot of headaches in Scala, and eventually in Kotlin I had this revelation of coroutines. A big part of what I'm going to share today mirrors that career, and I think with coroutines we finally hit the sweet spot.

I assume not all of you have done something with coroutines, so I think it's helpful to quickly recap why you would do something with reactive programming, and later coroutines, and where the headache is, because that's what this webinar is about. The example I will use throughout my presentation is what you see here: a very simple example where I have an endpoint in Spring Boot that stores users. I fetch data from two endpoints, a random avatar and an email verification, so some remote blocking calls, then I do some validation, and eventually I save my user in a database using, for instance, JPA. When I look at this particular example, it's very clean; there's no additional fluff about it. So you might think: why would you ever need anything else? Well, you see the answer once you put this particular piece of code under load.
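To make the latency math concrete, here is a minimal, self-contained sketch of the same sequential flow. The function names and the 200 ms sleeps are stand-ins for the real remote calls, not the code from the talk:

```kotlin
import kotlin.system.measureTimeMillis

// Stand-ins for the two remote calls; each blocks the calling
// thread for ~200 ms, like a slow HTTP endpoint would.
fun fetchRandomAvatar(): String { Thread.sleep(200); return "https://example.org/avatar.png" }
fun verifyEmail(email: String): Boolean { Thread.sleep(200); return email.contains("@") }

// The sequential flow: two blocking calls, then a (simulated) save.
fun storeUser(email: String): String {
    val avatar = fetchRandomAvatar()                 // ~200 ms, thread just waits
    require(verifyEmail(email)) { "invalid email" }  // ~200 ms more
    return "saved $email with $avatar"               // the JPA save would go here
}

fun main() {
    val elapsed = measureTimeMillis { println(storeUser("jane@example.org")) }
    // The delays add up because everything runs on one thread, one call after another.
    println("took $elapsed ms") // roughly 400 ms plus overhead
}
```

The point is only that latencies add up when every step runs on the same blocked thread.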
You then see why. What I've done is create the exact same application in Spring Boot as what you see on the slides, and in addition I can add an artificial delay to the remote calls. Using this delay I can postpone the answer for a given amount of time; we'll see why that's important later on. Then I can call my endpoint, which is what I do here, and you see I get a response back. I can also time the response, and then you see how much time it takes. When you look closely, I now have a delay of 200 milliseconds, and it takes about 400-plus milliseconds. It's important to realize why, and it's logical: we do two remote calls and a bit of overhead for storing my user in the database, adding up to about 400 milliseconds plus something. So far so good.

Now the interesting part: I put load on my system, and instead of only a 200-millisecond delay, I now use a delay of two seconds, so I increase the delay significantly. In a healthy system, what average response time would you expect? Always good to think about it. You guessed it: 4 seconds plus something for storing the user in the database; that would be the ideal situation. I do this, by the way, with 100 concurrent users and a total of 1,000 requests, so I'm really bombarding my system. Here I get the response, and this is the important section: the minimal response time, so the best one, was 4 seconds plus something, and that's fine. But the average, as you can see, was 18 seconds. 18 seconds! That's not a healthy system. To be fair, to create this dramatic effect I had to tweak things: I reduced the number of threads I'm using here. But no matter how many threads you have, under enough load you eventually reach this point.
So the question is: what happened? It's important to understand these kinds of architectures. Tomcat and the other servlet containers out there normally have a thread pool. When a request comes in, it takes a thread out of the pool and uses that thread through the whole call chain: we call a remote service, we call our database, and whenever the thread is waiting, it just sits there and waits for a reply. When the reply arrives, it continues, which eventually leads to returning the response. Now, what happens if one of these endpoints becomes slow, which is exactly what we mimicked with our two seconds? Well, under a lot of load your thread pool can exhaust: there are no threads available for new requests, because they are all waiting on something remote that takes a long time. That leads to the very inconvenient situation where we cannot accept new requests anymore, which is the best recipe for unhappy users: we're just not responsive at all. On top of that, this model is inherently sequential, so leveraging parallelism is hard.

That problem is not new. About 10 years ago there was the movement of reactive programming, which tried to come up with a solution for exactly this. What you see here is basically a rewrite of what we saw in the beginning using WebFlux, barebone WebFlux with Monos. It does more or less the same, except for one thing: it does something in parallel. Can somebody see where the parallelism comes in? Just type it in Slack if you spot the method that does it. Flat map? Actually it's one line above: it's zip. You take two Monos and zip them, meaning you run them in parallel, and then you collect both results in the flatMap. You need the flatMap because afterwards we save the user, which is another Mono, and you don't want a Mono of a Mono. You can already see that's quite an issue in itself, but more on that later; for now we just focus on what happens when you put load on this code, so we're looking at the result, not at how we get there.

So I have my program here again, and here is the Reactor example, the timed one. You see that for a single request we already get better performance: only 200 milliseconds plus something, because we can do things in parallel. That's already a win. Now of course we want to see what happens under load, with the same configuration as before: a two-second delay and 100 concurrent users. And now we see the result we want: the minimum is two seconds, and the average is two seconds. That's what we would have expected; the prior example averaged at least 4 seconds because we couldn't do parallelism, but here we can, so we expect, and get, an average of two.

Why do we get a much better result here? Again, it helps to understand how these architectures work so you can reason about this more easily. What's interesting about reactive frameworks is that they actually have fewer threads than what we saw before, so they use fewer resources: about one thread per core instead of the 200 threads from the previous example. But the way they handle requests is very different. When a request comes in, it is picked up by a thread from this pool, but whenever you hit an IO boundary, at least when it's non-blocking, the thread does not just sit there waiting for a reply; it returns to the pool.
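The zip-plus-flatMap shape described here can be sketched roughly as follows. This is a hedged sketch, not the exact code from the talk: the service functions are placeholders, and it assumes reactor-core on the classpath.

```kotlin
import reactor.core.publisher.Mono

data class User(val email: String, val avatarUrl: String)

// Placeholder services standing in for the two remote calls.
fun fetchRandomAvatar(): Mono<String> = Mono.just("https://example.org/avatar.png")
fun verifyEmail(email: String): Mono<Boolean> = Mono.just(email.contains("@"))

fun storeUser(email: String): Mono<User> =
    // zip subscribes to both Monos, so the two calls can run concurrently.
    Mono.zip(fetchRandomAvatar(), verifyEmail(email))
        // flatMap, because saving would return another Mono and we
        // don't want to end up with Mono<Mono<User>>.
        .flatMap { tuple ->
            if (!tuple.t2) Mono.error<User>(IllegalArgumentException("invalid email"))
            else Mono.just(User(email, tuple.t1)) // the reactive repository save goes here
        }
```

Notice how the business intent is already wrapped in operator plumbing; that is the accidental complexity discussed later.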
There it takes on new work and goes on. Once a reply arrives, the same thread, or another thread from the pool, continues processing, and the same happens for the database call, up to the point where eventually a thread returns the reply. So what happens now if a resource becomes slow, like the two seconds we mimicked? Well, not much: the threads that take on our requests are still available; they're not blocked waiting for something. And besides that, we can now also leverage parallelism. These two factors together give us happy users.

There is one important thing you have to realize, though: with reactive programming you really must do everything in a non-blocking way. If you do something in a blocking way, performance deteriorates even faster than in the thread-per-request example, because we now have far fewer threads available; if some of these precious threads are blocked, we have a big issue. So you have to ensure that everything you do is non-blocking. However, sometimes you have a blocking library you have to interface with. The good news is that in that case you just have to make sure you use a separate thread pool, often an IO pool, to facilitate those blocking calls. And I hope you figure that out before you're in production, because if you hit these issues in production, you have a real problem.

So from a runtime perspective we saw that reactive behaves perfectly, and that's awesome. But is it the answer to all problems? The big issue is that reactive programming ties us very, very tightly to its abstractions. The code here is about Monos; it's not about our business logic anymore. Monos dominate our whole code base, which means our business intent is lost. Besides that, we also can't use normal programming constructs anymore.
For instance, I cannot just throw an exception; I have to wrap everything in Mono.error and so on. Besides that, it's also very easy to shoot yourself in the foot. A colleague of mine wrote a good article describing ten pitfalls; it's so easy to get things wrong, like forgetting to use an additional thread pool when you do blocking stuff, or calling the block method on a Mono, which you simply can, and in a unit test that's fine, but under load you get issues. What's also important to be very conscious of is why you embrace this reactive way of working: it's because of resource efficiency, parallelism, and maybe reactive streams, if you use them. For those benefits you sell your soul, so to say, to a toolkit that will dominate your code base in a very nasty way. So we can say: reactive programming does the job, yes, but the accidental complexity is tremendously high. Too high.

We have a couple of questions; you'll probably address them in your presentation, but maybe this is a good checkpoint. There's a question by Amit: is there any thread listening for the reply from the external resource when we do this in the reactive manner? What happens when we have to wait, in a real application with real external resource communication?

So what you mean is when stuff comes back here, one of these arrows. Well, to be honest, I haven't dug that deep, but I would expect it to be a single thread, because there's a callback involved: you register a callback and offload the network work to the OS; at that layer everything is actually blocking, there is no truly asynchronous way of doing things down there. The callback is placed, and once it fires, it must be picked up by something, so there must basically be a thread that picks it up and signals to another thread: hey, there's a payload available for you, continue processing.

Exactly. Another one: would we see similar numbers with reactive style with a thread count of one, since we have a relatively low number of requests?

It depends, and that actually depends on how many cores you have. With only one core, one thread should basically be enough, but with more cores you wouldn't be leveraging your infrastructure. It also matters whether your application is highly IO-bound: the threading model of reactive gives you a lot of advantage when you're IO-bound rather than CPU-bound. CPU-bound means I spend a lot of cycles in my application using the CPU; IO-bound means I just offload, and when something comes back I do a little work and offload again. When your application is highly IO-bound, one thread can do a tremendous amount of work. There are many databases out there, for instance Redis, which is single-threaded, but because they mostly do IO it works fine. In an application where you do some processing, maybe some crypto or some business logic, a single thread for many requests probably wouldn't work that well, so you need a bit more than that. I think the heuristic is about one or two threads per core; that's a rule of thumb that has given good results.

Good. And another one by Amit: in terms of comparison, would Vert.x have the same issues as WebFlux?

Yeah, Vert.x is an actor-based framework. I've done a lot of stuff with Akka, which is also an actor-based framework, and I personally really don't advise using it, because it brings a tremendous amount of complexity based on the idea of message passing: you pass messages, you get messages back. I found that just seeing a stack trace of what's happening in your system is already a very complex endeavor.
Without a lot of correlation logging and so on, and since it's message passing rather than futures, really following a certain code path is already very challenging. So with Vert.x you get similar issues as with Reactor; not exactly the same, but similar ones.

Okay, there are so many questions, but let's continue; as we add a bit more flesh, I think many of the questions will also be answered. One of the answers is this one: coroutines can really solve this problem in a very, very neat way. So what are coroutines? If we look at a compute architecture, at the lowest level there's our CPU, and the CPU is the real thing: this is where the work happens. On top of that you have so-called kernel threads, and they fight for a time slice on the CPU. This is why you can do quite a lot of work with a single CPU: when you have multiple threads, they each get a bit of work, so on a machine with one CPU you get the illusion of parallelism, even though the CPU can really only do one thing at a time. Kernel threads are quite expensive: you can have about 4,000 per gigabyte of memory. The JVM has platform threads, the normal threads you probably know, and they are mapped one-to-one to kernel threads. The cool thing about coroutines is that they are a layer on top of platform threads and very lightweight: you can have about 2.4 million coroutines per gigabyte of memory. In the end a coroutine also needs to be executed by a thread, because whenever you want to get something done on the JVM you need a thread; there's no way around that. But a coroutine is not bound to a thread: one thread can execute many coroutines, and a coroutine can be executed by many threads.

Here I have an example of creating 100,000 threads that each sleep for two seconds, and this would be the counterpart with coroutines. What you cannot do with coroutines is simply create one the way you create a thread; there is no constructor like new Thread with a Runnable and then start. You always need a builder method, and the builders we use here are runBlocking, more on that one later, it's an important one, and launch, which in this particular example is conceptually really the same as a thread: we create an asynchronous process. Then we have the next snippet, which is delay, and this is the mind-blowing difference between threads and coroutines: delay suspends, whereas Thread.sleep blocks the thread. Here nothing will be blocked, whereas there we block; more on that in a minute. So my next question: what do you think happens in the first example, and what happens in the second? Maybe someone can type it. No one dares? Okay, then we'll just try it out. As usual the answer is "it depends", but in this case, good news, it doesn't depend much. Well, it depends on how much memory you have, so let's assume 2 gigabytes of memory. It actually depends on the Java version as well, I have checked that, but the end result is more or less the same. In the first example, let's run it: you get this quite quickly, which is an OutOfMemoryError. About 8,000 threads I can spawn with 2 gigabytes of memory, which matches the number from my
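The two snippets contrasted here look roughly like this sketch, with the counts scaled down so it runs quickly; it assumes kotlinx-coroutines-core on the classpath.

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() {
    // Thread version: each thread blocks in sleep and costs real memory
    // for its stack. With 100_000 of these you run out of memory.
    repeat(10) { Thread { Thread.sleep(100) }.start() }

    // Coroutine version: delay suspends instead of blocking, so even
    // 100_000 coroutines are a piece of cake; no thread sits idle.
    runBlocking {
        repeat(100_000) { launch { delay(100) } }
        // runBlocking waits for all child coroutines (structured concurrency).
    }
    println("done")
}
```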
slides, where I said I can have about 4,000 threads per gigabyte of memory. Then, for fun, let's run the coroutine example, and we see it's just a piece of cake, because we can have 2.4 million coroutines per gigabyte of memory. So that's the conclusion.

At the core of this coroutines universe we have suspend functions. Suspend functions are just normal methods that take arguments and have return types, but they get a special capability: they can be paused, or suspended, and resumed at a later time, without blocking anything. That's the cool thing about suspend methods. But they also have constraints. If we have a layered architecture like you see here, controller, service, repository, client, what you cannot do is start out with a normal Kotlin method and then call a suspend method; that simply won't compile. What you can do is use runBlocking for that, and we'll talk about that later, because I see this happening very often in practice, and people are not always aware of what they're actually doing; we're going to look at it in detail. The way you should do it is start with a suspend method from the very beginning, and this is something your framework needs to support; it's not something you can build in yourself. The framework that takes the bytes off the socket has to ensure that right from the start there is a context available to call a suspend method, which is a CoroutineScope. You can call normal methods from suspend methods, that's fine, and should you do something blocking, then, just like in the reactive example, you need an additional thread pool, which would be Dispatchers.IO; more on that in a minute as well.

Now we're going to look at the most important ingredients of coroutines, because once you understand them, you understand about 70 to 80 percent of coroutines, and that's great. These are the builder methods, and they have two building blocks: CoroutineScope and CoroutineContext.
Let's first look at the CoroutineScope. In this example we create threads, and you see that if I want to synchronize different asynchronous processes with threads, I have to join them manually, which is rather evil: in this example we do two evil things. First we sleep, which blocks a thread, and second we join, which blocks another thread. So with a lot of blocking we get our work done. With coroutines, because we have this CoroutineScope, it looks more or less as follows. That's a fancy signature, suspend CoroutineScope.() -> Unit, probably the fanciest you can think of in Kotlin. What it means is that when I call runBlocking, what I get implicitly, as a receiver in the right terminology, is a CoroutineScope, and this scope is my parent scope. From this parent scope I can launch another coroutine, which, once launched, has its own scope, a child scope. What's the idea behind these scopes? Every parent scope supervises the completion of its child scopes. The parent always knows whether its children are completed, no matter how deeply you nest them, no matter how many processes you spawn. This has the fancy name structured concurrency, and because of it we do not have to synchronize processes ourselves; that's part of structured concurrency, which makes concurrency so much simpler to handle. It's really a wonderful thing. So two benefits: first, we don't have to synchronize manually, and second, all of this happens in a non-blocking fashion, except for the runBlocking at the top, but let's leave that aside: within this block there is no blocking going on.

There's also another builder, which we'll use in our live-coding session, and which is very similar to launch. Launch just spawns a process, like a thread, that then does its own thing; it can't really give you a value back.
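A minimal sketch of that contrast, assuming kotlinx-coroutines-core; the names and timings are illustrative:

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() {
    // Thread style: manual synchronization, and both sleep and join block threads.
    val t1 = Thread { Thread.sleep(100); println("thread one done") }
    val t2 = Thread { Thread.sleep(100); println("thread two done") }
    t1.start(); t2.start()
    t1.join(); t2.join() // we must remember to join by hand

    // Coroutine style: runBlocking gives us a parent CoroutineScope as receiver.
    runBlocking {
        launch { delay(100); println("coroutine one done") } // child scope
        launch { delay(100); println("coroutine two done") } // child scope
        // No manual join: structured concurrency means the parent scope
        // waits for all of its children before runBlocking returns.
    }
    println("all children completed")
}
```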
If you want a value back, you should use async in combination with await, and that's what we see here. Once this delay has passed, the println is executed, and that's also very nice Kotlin: even though await sits inside another code block, the whole block is only executed once await has a value available. So without any manual synchronization we can write what reads like sequential code, while at the same time things happen in parallel.

Okay, the next building block is CoroutineContext. Probably the most important part of a context is that it gives you access to the thread pool that will eventually execute the coroutine. It is a bit more than that, and we'll see that in a minute, but I think that's the most important part. Another simple example: I have runBlocking and I launch two coroutines. By default I get this empty coroutine context. What does empty mean? It means that it uses the thread that was calling this method, so if this were the main method, it would use the main thread to launch those coroutines. Then we have these two delay calls. Question for you: how long do you think it takes to execute this method, more or less? I know it depends, but more or less: is it 500-plus milliseconds or 1-plus seconds? So, less than a second or more than a second? I see "1 second plus", I see "500 milliseconds", so the battle is on. I think 500-plus is going to win. Very good, and that's correct. Why? We only have one thread, but the thread that does stuff with coroutines is never blocked, because delay is a suspend method. So the thread launches the first coroutine, delay hits the suspension point, okay, you're suspended, and the thread goes back to see what other coroutines it has to spawn. The second one is spawned, delay is called again, something is scheduled, and once the scheduled time has passed, the thread is taken again to continue. So we get about 500-plus milliseconds.

What's really awesome about this coroutine context, and something you're probably not very aware of, is that it's propagated automatically. Once I define a coroutine context and use launch or async, I automatically inherit the coroutine context from the parent; I don't have to redefine it over and over. I can change it if I want, but by default I simply inherit it.

Now let's do a little twist: instead of using delay, we use sleep, which blocks the thread. How long does it take now? Now it should be blocking, because it's Thread.sleep; I think it takes more than one second. Exactly right. We again have only one thread to handle all the coroutines. It would launch the first one, but it hits sleep and freezes, so it cannot launch the second coroutine; it has to wait until the first has completed, then it launches the next, which again freezes. That's why it takes 1-plus seconds. Can we solve this problem?

There's a very good question by Amit again: where is the counting happening, who counts those 500 milliseconds? It's actually not counting, it's scheduling. Maybe you've used a low-level scheduler in Java: you say, after one second I want this to happen. That's basically how it works: there's a scheduler that says after 500 milliseconds you have to do something, and that's where things wake up. And indeed there is also a thread that checks the schedule for work to be done; that's more or less how it happens. Good question indeed.
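Both timing experiments can be reproduced with a sketch like this (kotlinx-coroutines-core assumed; the exact times will vary a little):

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking
import kotlin.system.measureTimeMillis

fun main() {
    // Suspending version: delay frees the single calling thread,
    // so both coroutines wait concurrently -> ~500 ms in total.
    val suspending = measureTimeMillis {
        runBlocking {
            launch { delay(500) }
            launch { delay(500) }
        }
    }

    // Blocking version: Thread.sleep freezes the one thread,
    // so the coroutines run one after the other -> ~1000 ms in total.
    val blocking = measureTimeMillis {
        runBlocking {
            launch { Thread.sleep(500) }
            launch { Thread.sleep(500) }
        }
    }
    println("delay: $suspending ms, sleep: $blocking ms")
}
```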
So the question is: with this blocking way of doing things, can I fix it and still get to 500-plus milliseconds? Well, you can, using Dispatchers.IO, and that's exactly what I showed for reactive programming: a separate thread pool for blocking operations. That's exactly this case here.

Okay, that was just a very short recap, and of course today is about Spring Boot, so let's quickly move into Spring Boot. I chose to first show you how you do coroutines the wrong way, because, as I said, I see it happening so many times; I think it's valuable to know what happens when you embark on it the wrong way. We'll do that quickly, again with the very first example we saw in the beginning. Now, your PO might come to your desk and say: hey, this endpoint is so slow, can't you speed it up? And you think: okay, these two calls I could do in parallel. Maybe you've read a blog post about coroutines, so let's try it out; that's what we're going to do now, a bit of live coding. So here is the piece of code. You've probably heard of this thing called async, and we'll also log some information about what we're doing. So this would be async, and then it says: hey, I need a CoroutineScope, and there is no CoroutineScope. Then you might also have read something about runBlocking; okay, let's use runBlocking, because then it compiles and the compiler is happy. Fine. Then with async I still need to await, because this is how coroutines work; you've just seen that in the slides, and that seems like a good idea. So now I can call these things in parallel. Interesting: I have Spring Boot auto-restart enabled, so let's call the endpoint like we did before, curl with timing, and with this delay of 200 milliseconds we should now get something just above 200 milliseconds instead of 400, right? That would be the idea. So what happened?
That's right: we see we don't get beneath the 400 milliseconds. Huh, I'm doing things in parallel, but it doesn't run in parallel. Why? Maybe you can answer that yourself with what we just saw. Each time we execute this piece of code, we use the same thread: runBlocking has this empty coroutine context, so it uses the thread that called this particular method, and here we do blocking remote calls. All these calls, like randomAvatar, are blocking, so the thread is blocked right here and cannot start the second async concurrently. That's what we get, and you can see it in the logs: I generate an MDC, so this gets nicely logged, and the MDC is everywhere the same, because everything ran on the calling thread.

Then again, you might have read a blog post that says: when your coroutines don't run in parallel, simply use Dispatchers.IO. Okay, let's do that and see what happens. Spring Boot has restarted, let's hit it again. You see, it's better! Yay, I finally got my parallelism, wow. Okay, but let's look at the log. Wait a minute: I use different threads, for one request it's five and four, but my MDC is gone. How's that? If you think about it, it's logical: the MDC was on the calling thread, but I used different threads to execute those statements, so the MDC, which lives in a thread-local, is gone. There are ways around it; Kotlin has thought about everything, it's really awesome: there's this MDCContext thing, a nice bridge for propagating thread-local state into a coroutine. That brings my MDC back: restart, do one call, and let's see the result; you see, I have my MDC back. But the real problem is that even though this now looks awesome, we haven't solved our problem, which shows if I again put load on the thing like I did in the first example.
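What this wrong way boils down to can be sketched like so; the blocking calls are simulated with Thread.sleep, kotlinx-coroutines-core is assumed, and the endpoint names are placeholders for the real demo code:

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.async
import kotlinx.coroutines.runBlocking
import kotlin.system.measureTimeMillis

// Simulated blocking remote calls, like the real REST calls in the demo.
fun randomAvatar(): String { Thread.sleep(200); return "avatar.png" }
fun verifyEmail(email: String): Boolean { Thread.sleep(200); return "@" in email }

fun main() {
    // Attempt 1: runBlocking with the empty context runs both asyncs on the
    // single calling thread; the first blocking call freezes it -> ~400 ms.
    val sequential = measureTimeMillis {
        runBlocking {
            val avatar = async { randomAvatar() }
            val valid = async { verifyEmail("jane@example.org") }
            avatar.await(); valid.await()
        }
    }

    // Attempt 2: Dispatchers.IO hands the blocking calls to a separate pool,
    // so they overlap -> ~200 ms. But under load this still just burns
    // threads: the underlying calls are as blocking as ever.
    val parallel = measureTimeMillis {
        runBlocking {
            val avatar = async(Dispatchers.IO) { randomAvatar() }
            val valid = async(Dispatchers.IO) { verifyEmail("jane@example.org") }
            avatar.await(); valid.await()
        }
    }
    println("same thread: $sequential ms, Dispatchers.IO: $parallel ms")
}
```

This "concurrency by throwing threads at it" is exactly what still falls over under load.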
I still have the same issue, because my thread pool can still exhaust, so I do not get the performance I really want; at least that's what I predict. Let's let the result speak, and the result doesn't lie: here even the minimum is 12 seconds, maybe on a retry it would be two, but the average is still way too high. This is what I call the very inefficient way of doing concurrency: simply throwing resources at it.

Okay, so that was the bad way, but there is even more evil to it, for instance transactions. You have a transaction context in a thread-local, which is also not propagated. You could do it yourself, but there are no tools for that, and the problem is that if you touch the database inside these async calls, the way these frameworks work, they are not able to handle concurrent calls on a single database session; they're just not made for that. So if you do things concurrently within a single request, which is one database session, you run into really nasty problems. The point is: I wouldn't do it, and hopefully you now know why, so you can tell your colleagues.

So how does the right way look? That's of course what we want to know, and the right way is basically Spring WebFlux, which is more or less this stack. We've seen that its complexity is mind-boggling, but with coroutines on top of it you get this: the complexity drops from really nasty-high to almost nothing. Of course you need to know things about coroutines, but from a maintainability and manageability point of view it's way, way better. How do you get this? Very simple: you add these two dependencies to your classpath, kotlinx-coroutines-core and kotlinx-coroutines-reactor. What you then get is this: Reactor uses these Monos, which is a kind of monad that models latency.
It's like a CompletableFuture: it eventually gives you back either an exception or a value, not right away. With coroutines these Monos literally disappear; they dissipate into thin air, and the only thing you keep is the T, the return type. That makes your code so much more maintainable, but you can judge that for yourself. Let's implement this in Spring Boot. The first thing we want to do is a remote call, and for that we use an asynchronous web client, an asynchronous REST client; in this particular case that's WebClient. Below I have my Reactor example: this is the service that calls the avatar endpoint, and this will be my rewrite with coroutines. I can show it right away because it's so simple, really a piece of cake: you copy-paste this and put it here. That was Java, by the way; here it's Kotlin, so we don't use pluses, come on, we can use string interpolation. And now: just remove the Mono. You might think, come on guys, you cannot just remove the Mono, there's some function to it. Well, the nice thing is that with a very simple trick you can convert a Mono into a suspend function: you make it a suspend function. You might think this is magic, but if we had more time, and I also do this in my courses, it's a very simple trick that converts any async building block, not only Monos and futures, into a coroutine. Okay, that's it; of course we want to make it a bit more expression-oriented and remove all the Mono stuff. We do the same for our next service, the enrollment service, because it's so much fun to improve it: remove the Mono, we get a Boolean back, we await the body, which in this case is a String, and then we still need to convert it to a Boolean. Of course it needs to be suspend, but IntelliJ helps us with that. And now our signature has
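The conversion described here, from a Mono-returning WebClient call to a suspend function, looks roughly like this sketch. It assumes spring-webflux and kotlinx-coroutines-reactor on the classpath, and the URL and service names are placeholders:

```kotlin
import kotlinx.coroutines.reactor.awaitSingle
import org.springframework.web.reactive.function.client.WebClient
import org.springframework.web.reactive.function.client.awaitBody
import reactor.core.publisher.Mono

class AvatarService(private val client: WebClient) {

    // Reactor style: the signature is about Mono, not about our domain.
    fun randomAvatarReactor(): Mono<String> =
        client.get().uri("/avatar/random").retrieve().bodyToMono(String::class.java)

    // Coroutine style: the same call, but the Mono disappears from the signature.
    suspend fun randomAvatar(): String =
        client.get().uri("/avatar/random").retrieve().awaitBody()

    // Any existing Mono can also be bridged with awaitSingle().
    suspend fun randomAvatarBridged(): String = randomAvatarReactor().awaitSingle()
}
```

awaitBody comes from Spring's Kotlin coroutine extensions, and awaitSingle from kotlinx-coroutines-reactor; both suspend instead of blocking.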
That's it. Next, of course, we want to do things with our database, and here there are some issues. JDBC is inherently blocking; it's a blocking driver, so when you use JDBC you get exactly the same problem as before: blocking code, and when you block threads, you have a problem. So you have to use a different driver, which is R2DBC. R2DBC is available for all major databases by now; for Oracle it only ships with the latest version. And because you use a different driver, you cannot just use all the standard frameworks out there. If you use JPA or Hibernate, you cannot use the blocking version of Hibernate, at least; there is also a reactive version of Hibernate if you want to go that way. Spring Data, I think, does a decent job; it's probably not as feature-rich as JPA, but in my experience, in the applications I've written, it's good enough. It is something you have to be aware of, though. So let's see how we implement the database logic. Here I have a Reactor repository, which uses this kind of syntax that gets translated into a SQL query. Let's copy-paste this to Kotlin. Also here: bye-bye Mono, make it suspend, and that's it. Now you actually see two benefits. First, we got rid of the Mono, and second, I introduced nullability: in the Mono case you don't know whether you'll get a user back or not, you have to check with Mono.isEmpty, whereas now the signature is richer and more type-safe. So two benefits again: simpler and richer. And when you look at the tests, that's also a wonderful thing. If I want to test this in Reactor, I have to use these StepVerifiers, which is really an ugly abstraction. Recently I used it in a slightly different setup and I got double records, and I really didn't know why. It's something you have to learn: a whole new API just to test, or to simply persist a user in the database. So let's see what you would write if you used a coroutine repository. This would be our new user; we say userRepository.save(newUser), and then we could say shouldNotBeNull. And it says: hey, I'm a suspend method, but where is my suspend context? Well, here it's fine to use runBlocking, because it's a test; we don't have concurrency in tests. For testing purposes, runBlocking is exactly where it belongs; that's why it's also so nicely named runBlocking. If you use testing frameworks like, for instance, Kotest, you don't even have to do that: they already give you a coroutine scope, so it's even easier. Here we test our findByUsername method: we pass newUser.username, and then we check that the result shouldNotBeNull and its username shouldBe newUser.username. Okay, let's run this test. And now we can also debug decently: set a breakpoint, step into the code, whereas in Reactor debugging is a nightmare. And you see it's green, yay. Okay, so finally, of course, we want to implement our controller and then see whether it performs the way it should, right? That's the end goal we want to achieve. This is the Reactor example we've seen before in the slides, so let's rewrite it using coroutines. We have our avatar service, which is now a suspend method; that gives us the avatar URL. Then we have our enrollment service, which is also a suspend method now: I pass the email, this would be isValidEmail, and then I do my check. And that's also beautiful here: now I can just throw exceptions; I don't have to wrap anything, I just throw the Spring exception and everything is fine. Otherwise, I take my coroutine repository, userRepository.save, take my user (there's a nice copy method available which sets the avatar URL), and off you go. And you see it complains, because you can't just call a suspend function from a normal method.
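The repository-and-test flow described above (nullable suspend `findByUsername`, a blocking bridge for the test) can be sketched without Spring. Everything here is hypothetical stand-in code: `User`, `InMemoryUserRepository`, and `runSuspendingBlocking` are illustrative names, and `runSuspendingBlocking` is a stdlib-only stand-in for what `runBlocking` provides in a real test.

```kotlin
import java.util.concurrent.CountDownLatch
import kotlin.coroutines.*

// Hypothetical in-memory stand-in for a Spring Data coroutine repository:
// suspend functions and a nullable return instead of Mono<User> / Mono.isEmpty().
data class User(val username: String, val email: String)

class InMemoryUserRepository {
    private val users = mutableMapOf<String, User>()

    suspend fun save(user: User): User {
        users[user.username] = user
        return user
    }

    // null when absent -- the "is it there?" question lives in the signature.
    suspend fun findByUsername(username: String): User? = users[username]
}

// Stdlib-only sketch of what runBlocking gives a test: a blocking bridge
// from a non-suspending test method into suspend code.
fun <T> runSuspendingBlocking(block: suspend () -> T): T {
    var outcome: Result<T>? = null
    val done = CountDownLatch(1)
    block.startCoroutine(Continuation(EmptyCoroutineContext) {
        outcome = it
        done.countDown()
    })
    done.await() // block the calling (test) thread until the coroutine completes
    return outcome!!.getOrThrow()
}

fun main() {
    val repo = InMemoryUserRepository()
    val found = runSuspendingBlocking {
        repo.save(User("alice", "alice@example.org"))
        repo.findByUsername("alice")
    }
    println(found?.username) // prints alice
    println(runSuspendingBlocking { repo.findByUsername("bob") }) // prints null
}
```

In a real project you would use `kotlinx.coroutines.runBlocking` (or a Kotest spec, which already provides a coroutine scope) rather than hand-rolling the bridge; the sketch only shows why the test needs one.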
That's what I showed in the beginning: that's just not possible. So how do I solve it? I simply make the controller method suspend, and this is the awesome thing about Spring Boot: they enabled REST controllers to have suspend methods, and they ensure that from the start you get a coroutine scope in the most efficient way you can think of. That's one of the really beautiful things: Spring Boot has embraced Kotlin and supports it as a first-class citizen. I can tell you Micronaut does that too; that works fine. Quarkus, unfortunately, does it but doesn't support, for instance, @Transactional; those kinds of things are not supported in Quarkus, which is really a pity. Spring Boot has really full-fledged support by now; it's quite recent, a few months, that they support every annotation you put there, so the security annotations and all that stuff are simply supported. Okay, what we don't have yet, of course, is doing things in parallel. So we have to put the async stuff here; for that I need a coroutine scope, and what I can do is use this builder method, coroutineScope, which gives me the scope that is anyway there because I'm in a suspend method. Then I await the avatar, await isValid, and that should be it. What is it complaining about? Ah, I still need to convert the result into a DTO, and then everything is fine. So now for the finale (I think time is almost up anyway): let's see how this performs under load. Okay, let's do a single request against the coroutines version and time it with curl. You see it's about two seconds, so it really does do things in parallel with the configured delays. And the ultimate load test against the coroutines version: two seconds. This was the Reactor example, and you see the response times are actually identical. If I run it again, it will be a bit faster or slower, but it's really identical to Reactor.
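The finished controller walked through above might look roughly like this. This is a hedged sketch, not the speaker's exact code: `avatarService`, `enrollmentService`, `userRepository`, the DTO types, and `toUser`/`toDto` are illustrative names; `coroutineScope`/`async`/`await` are the real kotlinx.coroutines APIs and `ResponseStatusException` is the real Spring class.

```kotlin
// Sketch: a suspend REST controller method running two remote calls concurrently.
@PostMapping("/users")
suspend fun storeUser(@RequestBody request: NewUserDto): UserDto = coroutineScope {
    // Launch both remote calls without waiting; they run concurrently.
    val avatarUrl = async { avatarService.randomAvatarUrl() }
    val emailValid = async { enrollmentService.isValidEmail(request.email) }

    // Plain exceptions work -- no Mono.error wrapping needed.
    if (!emailValid.await()) {
        throw ResponseStatusException(HttpStatus.BAD_REQUEST, "invalid email")
    }
    userRepository
        .save(request.toUser(avatarUrl = avatarUrl.await()))
        .toDto()
}
```

Because the method is `suspend`, Spring provides the coroutine context, and `coroutineScope` gives structured concurrency: if either `async` fails, the other is cancelled and the exception propagates to the caller.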
It's not slower; there's actually no overhead. 2.6 sometimes; yeah, sometimes one is faster than the other. Let's see if we can get it faster by just hitting it again; no, it doesn't get faster, but as I said, that really depends on the run. So without additional overhead you get the exact same performance characteristics as with reactive programming, but at the cost of actually knowing a bit about coroutines, and this is a programming model that really scales. Nice. We have a question about R2DBC: is it actually production-ready? Yeah, I use it in production, so yes, it's production-ready. And I checked: R2DBC is not really a single implementation, it depends on the database you use behind it, and I checked a couple of drivers; they're all above a 1.0 version, so they are stable, and I would expect them to be production-ready. As I said, Oracle was the last one to support it, and for Oracle it's a driver thing: they only officially deliver it with the latest version, I think that's 21, but I think you can also use it with other versions, because they didn't adjust the database for this; it's just a driver thing, the database remains the same. There is a slightly longer question. Here's the context: the person is asking whether there is actually a better way of doing this by now, because reactive Hibernate is apparently not really production-ready. Do you have thoughts on this issue? Yeah, that's what I also mentioned at the beginning, which might be a bit disappointing in that sense: Spring Data is not as feature-rich as JPA. So if you have these relations that you're used to navigating, as you do in an annotated JPA approach, Spring Data won't give you that.
So the complexity you get there is the price you basically pay for using the asynchronous approach. And reactive Hibernate, to my knowledge: I did a meetup at the beginning of this year and saw that it is production-ready, but even so, it's not a very convenient way of working, also because they have these low-level building blocks; they really heavily rely on these Monos. There are some conversion methods to coroutines, but it's not really an awesome experience. I have just a rough idea here: I think this is where some architectural decisions need to come into play. If you have an application that has, over the years, accumulated a certain technical debt in terms of using technologies that are not really transferable to reactive programming, then you start playing with the architecture. You keep the heavyweight, blocking interaction with the database in that application, and you scale out by using a backend-for-frontend approach: you put fast, reactive microservices in front of that heavyweight application, possibly with multiple instances of it running. The reactive microservice in front can then handle the quick work, like quickly accessing the database using R2DBC for the smaller amounts of work, while the heavyweight part is offloaded to those backend services or applications doing the actual interaction. But it's not an easy task to break apart an old application like this; it's still a lot of work, it's not something that comes out of the box just because you apply a new technology. Yeah, I agree with that; that's also how you often use it. Applications are more read-heavy than write-heavy, so for writes we even use blocking models, because that's fine enough.
We don't have that much load there, but for the read part we go reactive, because that's where the load is; it needs to scale and be highly available. We also use the CQRS pattern for that: you have a write part, you replicate to your read part, and for us that works very well. Exactly. Alright, are there any more questions? I think we still have quite a lot of people on the stream. Folks, what do you think about the content; is there anything missing that you wanted to know? We briefly touched on Project Loom at the beginning of the session, and there was a question about virtual threads. We expected that; it's just not in the scope of this session today, but we were discussing that we could run another session specifically on virtual threads and coroutines and how they relate. So folks, don't be shy, ask more questions, and if not, there is a comments section below where you can leave your questions; we will check them later and try to address them in further sessions. Okay, it seems that folks are happy and our time is up today, so thank you all for coming. Yeah, it's five o'clock, pencils down, let's go and have some tea. Excellent. Thanks a lot for giving me this opportunity, and as said, a follow-up on virtual threads and coroutines would be awesome, of course, if the audience considers it interesting. Alright folks, thanks everybody for coming, stay tuned for updates, we're going to have more sessions soon, and of course we hope to see you on our streams again. Goodbye everybody, have a nice day, and thanks for joining.
Info
Channel: Kotlin by JetBrains
Views: 14,041
Keywords: Kotlin, webinar, reactive programming, spring boot, coroutines, kotlin coroutines
Id: ahTXElHrV0c
Length: 54min 20sec (3260 seconds)
Published: Wed Nov 15 2023