Introduction to CompletableFuture in Java 8

Captions
What is a CompletableFuture and what do we use it for? CompletableFuture is used to perform possibly asynchronous computations and to trigger dependent computations, which can also be asynchronous. By asynchronous I mean non-blocking operations.

In Java, performing a non-blocking operation has always been easy: you simply create a new Runnable and run it in a separate thread, and once that Runnable's run method has completed, the thread is destroyed. As simple as that. If the task you want to run also returns a value, you can use something called a Callable: you again run the task in a new thread and get the value back in the main thread. We typically do this using an ExecutorService.

Let's look at some code for it. Here we have an ExecutorService of the fixed-thread-pool type with a thread count of 10, and we submit a new task. This new task is nothing but an instance of Callable that returns the type Integer. The method does nothing special; it just returns a random integer. So we submit this new task and ask for the result. But since it's a Callable, the result can take a while, and you do not want your main thread to block. What the service does is immediately give you a placeholder value of type Future. It is saying: OK, I will execute the task, but it might take some time, so I'm not going to give you the value immediately, and I'm not going to stop you from running the main thread either. Instead I'll just give you a placeholder; keep that placeholder with you and do something else for a while, and once I have completed the task I'll put the value into the Future. After a while, when you call future.get(), if the task has completed by then you will get the actual value in this variable called result. If by any chance the task has not completed, the future.get() call will block; that means this thread will wait at that point until the task gets completed. Once that's done you'll get the result and you can do further operations using that result; in this case we are just doing a System.out.println with it.

To visualize this, we have a thread pool on the right and the main thread on the left, which is submitting the tasks. We just call service.submit() with a new task and it immediately gives us back a placeholder. The placeholder is nothing but an empty box, which means it doesn't have its value yet; the pool hasn't executed the task, it has only accepted it. After a while, once the result is ready, the thread pool will set the value of this placeholder of type Future to whatever the result is; in this case the value is 3. But as we saw in the code earlier, if the main thread calls future.get() before that task has run, the main thread calling future.get() can get blocked. Then, once the value of that Future is set to 3, your main thread becomes runnable again and can be scheduled by the JVM immediately. So you see there is a problem here, because your main thread is being blocked.
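(A minimal sketch of the code being described, assuming the task is just a Callable<Integer> returning a random number:)

    import java.util.Random;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class CallableExample {
        public static void main(String[] args) throws Exception {
            ExecutorService service = Executors.newFixedThreadPool(10);

            // The task does nothing special, it just returns a random integer.
            Callable<Integer> newTask = () -> new Random().nextInt(100);

            // submit() returns immediately with a Future placeholder;
            // the main thread is free to do something else for a while.
            Future<Integer> future = service.submit(newTask);

            // get() returns the value if the task has already completed,
            // otherwise it blocks the calling thread until the result is ready.
            Integer result = future.get();
            System.out.println(result);

            service.shutdown();
        }
    }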
Now this problem becomes even more cumbersome when you have multiple tasks to submit. Let's say here we are submitting four tasks. In a for loop we submit these four tasks, and the thread pool immediately gives us back four Future values. If you then do a for loop over those Futures and call future.get(), you are doing the future.get() on the first one first. Since the thread pool cannot guarantee that the first task will be completed before task two, task three or task four, it is possible that task three and task four are already completed; as you can see, those placeholders are not blank anymore, they already hold the values 7 and 5. And yet, since we are doing the blocking future.get() in a for loop, which by its very nature looks at the first element first, we get blocking on the main thread. Only after the thread pool calculates the value of the first task, say the value is 1, will your main thread be able to run and complete any dependent tasks. So the problem is that even though there were many tasks which were already completed, since we used a for loop of future.get() calls we were not able to do anything with those values 7 and 5 until the first task was completed.

Let's take a more practical example. Say we have a certain set of code which is divided into multiple methods, and each method is responsible for a particular task: fetching an order, enriching that order (filling it up with some more details), making the payment for that order, dispatching the order, and sending the confirmation email for that order. So in essence, for one order flow we have five tasks: fetch, enrich, payment, dispatch and email.

If we want to convert this into code, we would write something like this. We again create an ExecutorService of the fixed-thread-pool type and submit task number one, the get-order task. This task, which is of a Callable nature, gets the order which is to be processed. The service will not give us the value, because the task has not yet executed; it will instead give us a Future. When we call future.get(), which is potentially a blocking operation, and we are doing it immediately, this thread has to wait until that order is actually fetched. Once that order is fetched, we create a new enrich task with that order and submit it again to the ExecutorService, which again gives us a Future, on which we call future1.get(). Again, since we just submitted this task and we are trying to get its value, this thread will potentially block. We repeat the same thing for performing the payment, for dispatching the order and for sending the email. Each of these steps is potentially blocking because we are using Futures, and since the whole thing is being done one step after the other, if we put a for loop over this we will not be able to do anything about order two, order three, order four or order five until we have completely processed order one. That means our main thread will not be able to scale much further; it is almost a sequential operation, where it has to complete order one before getting to order two, which is not scalable at all.
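(A sketch of that blocking, Future-based flow might look roughly like the following; getOrder, enrich, performPayment, dispatch and sendEmail are assumed helper methods used only for illustration.)

    // Assumed helper signatures (illustrative only):
    //   Order getOrder();               Order enrich(Order o);
    //   Order performPayment(Order o);  Order dispatch(Order o);
    //   void sendEmail(Order o);
    ExecutorService service = Executors.newFixedThreadPool(10);

    Future<Order> f1 = service.submit(() -> getOrder());
    Order order = f1.get();                               // blocks until the order is fetched

    Future<Order> f2 = service.submit(() -> enrich(order));
    Order enriched = f2.get();                            // blocks again

    Future<Order> f3 = service.submit(() -> performPayment(enriched));
    Order paid = f3.get();                                // blocks again

    Future<Order> f4 = service.submit(() -> dispatch(paid));
    Order dispatched = f4.get();                          // blocks again

    Future<?> f5 = service.submit(() -> sendEmail(dispatched));
    f5.get();                                             // blocks until the email is sent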
Of course, there is a dependency of, say, the enrich-order step on the fetch-order step, because enrich cannot run before fetch is completed; there is a certain dependency between the tasks for that particular order. But a thread could theoretically be doing the payment operation for this order while another thread is doing the enrich operation for another order, order three. That is completely possible, completely practical, and it is what we really need to scale up our order processing.

So what we want is, say, n threads, in this case ten threads, each doing the processing of one particular flow. This is what we mean by having independent flows: within one flow the tasks are dependent on each other, but one flow is not at all dependent on another flow. If we convert this into an algorithm, what we really want is: for n items, so n orders, run the task; once a particular task is completed, run its dependent task; once that dependent task is completed, run its dependent task, and so on and so forth, and have this done completely independently for every item that we submit. What we do not want: we do not care how the thread pool is implemented and how it executes all these tasks internally, and, most important, we do not want the main thread to be blocked. That is why we are saying: run this task, then immediately run the dependent task, but do not bother the main thread while you do it. This is exactly what CompletableFuture was designed for.

If we were to convert our existing code, the code we wrote earlier with an ExecutorService, to use CompletableFuture, we would write something like this. CompletableFuture provides a static function called supplyAsync ("async" being short for asynchronous). We provide it a task, a method; in this case it's a lambda, a way for it to fetch an order. So here we get the order, but we do it asynchronously. Once the order is received, its output goes to the next chained method call, thenApply, whose input is the variable called order. We could name this variable anything else; we could replace it with theOrder. It is just a variable name; we named it order so that it's consistent throughout. So: take whatever input we got from the previous operation and apply the enrich method, passing that input to it. The enrich method itself produces an output, which is again of type Order, which we chain to the next method; here too there is an input variable which we call order, we could have named it just o, and we perform the payment here.

If you focus on the right-hand side, you can clearly see a basic algorithm, a basic piece of business logic, that we can read straight from the code: we are getting the order, we are enriching it, we are performing the payment, we are dispatching the order and we are sending the email. CompletableFuture lets us do that. And if you observed, there is nothing here where we specify the executor service, there is no thread management, there is no future.get(), and that is why there is no blocking operation. We are doing this whole flow for a hundred orders, so all those hundred orders will have this processing done asynchronously, and each order can, at any point in time, be in any of the different stages.
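(Put together, the chain being described might look roughly like this; a sketch using the same assumed helper methods as in the earlier sketch, with thenAccept for the final void step.)

    for (int i = 0; i < 100; i++) {
        CompletableFuture.supplyAsync(() -> getOrder())
                .thenApply(order -> enrich(order))
                .thenApply(order -> performPayment(order))
                .thenApply(order -> dispatch(order))
                .thenAccept(order -> sendEmail(order));
    }
    // The main thread never blocks: the loop submits all 100 flows immediately,
    // and each stage runs as soon as the previous stage for that order completes.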
There is also an overloaded method called thenApplyAsync. The difference between thenApplyAsync and thenApply is about which thread does the work. The first operation, the supplyAsync one, is always asynchronous. After that, the thread which did this first asynchronous operation can be told: you do all the subsequent operations too. In this case it is the same thread which will then apply the enrich, then apply the performPayment, then apply the dispatch and then apply the email. But if for some reason you do not want that particular thread, the same thread-pool thread, to do it, and you want some other thread to do it, then you can say async: when you use the async variant, the first operation is done by one thread and the second operation is done by some other thread. You can replace all of the calls with the async variants, but there is no practical advantage to doing that, except when you want to specify your own thread pool.

In essence, you can specify your own thread pool by passing it as a parameter here. If I press Ctrl+P, it shows that yes, this is an overloaded method: the first variant takes only the supplier, the second takes the supplier and an Executor. Suppose I have two kinds of operations, one CPU-bound and one I/O-bound; I/O-bound meaning any file operations, any database operations, any HTTP calls. For the CPU-bound work I want to use an executor service, say cpuBound = Executors.newFixedThreadPool(4); since it's CPU-bound and my CPU has four cores, I keep the value at 4. And I have an I/O-bound executor service, in which case I can have, say, 200 threads, or you could instead use newCachedThreadPool, in which case you do not have to specify the number of threads. Getting the order, let's say, is an I/O-bound operation, so you can say: do not execute this in your own thread pool but in the thread pool that I am supplying; this operation will then be performed on the I/O-bound executor service we defined here. And say the enrichment operation is CPU-bound, it doesn't have to connect to any database or any external network, so we can run it on the CPU-bound pool.

Now IntelliJ IDEA gives us an error here, because the thread which was going to do the first operation is being asked to apply the second operation as well, while we are giving it two different executor services, and of course one thread cannot belong to two different executor services. That is why here we have to say: run it as an asynchronous operation, thenApplyAsync, and now the error is gone. That is why you have this overloaded method, which is useful if you are going to run your tasks on different thread pools. Again, performPayment, let's say, is also done async and I want it on the I/O-bound executor service, so I pass ioBound. So depending on whether your tasks are a mixture of I/O-bound and CPU-bound operations, or for some other reason you want to execute them on different executor services, these async calls help.

And just to clarify: whether you say thenApplyAsync or thenApply, the whole thing is still being done off the main thread. The main thread will never be blocked; the whole for loop runs immediately a hundred times, and then the thread pools internally take care of everything.

We skipped over one part: what if we do not use any executor service at all, what is used internally then? Internally it uses ForkJoinPool.commonPool(). Let's check that out. If we go into the JDK code, supplyAsync hands the stage to an internal async pool, and this async pool is ForkJoinPool.commonPool(). This is the default pool: if you do not provide any executor, it will use this default pool to run all your tasks.
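(A sketch of the executor-per-stage idea, again using the same illustrative order helpers; the executor sizes are just the ones mentioned in the video.)

    ExecutorService cpuBound = Executors.newFixedThreadPool(4);   // roughly the number of CPU cores
    ExecutorService ioBound  = Executors.newCachedThreadPool();   // or newFixedThreadPool(200)

    CompletableFuture.supplyAsync(() -> getOrder(), ioBound)          // I/O-bound: DB / HTTP call
            .thenApplyAsync(order -> enrich(order), cpuBound)         // CPU-bound: pure computation
            .thenApplyAsync(order -> performPayment(order), ioBound)  // back to the I/O pool
            .thenApply(order -> dispatch(order))                      // stays on the previous stage's thread
            .thenAccept(order -> sendEmail(order));

    // With no executor argument, supplyAsync and the *Async methods fall back to
    // ForkJoinPool.commonPool() to run the stages.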
Of course, this is a business logic flow, and there could be exceptions in between; there is also a way to handle your exceptions. Let's say the performPayment step failed. In case of exceptions or any problems in any of these three operations above, this method, which acts like a catch block, will be called. In this catch block we can look at whatever the exception is (for now I am not concerned with what the exception is), but I want to return a failed-order object instance, which will then be carried forward from here. So in the dispatch-order step we can write that if it's a failed order then please do not proceed with anything else, but if it's a normal order then please proceed with the dispatch. And send-email will work as is: if it's a failed order it will send an appropriate failure email. So this is how you can handle exceptions with CompletableFuture (there is a short sketch of this pattern at the end of the captions).

Having said that, if your exceptions or your business logic are slightly more complicated, then this might not be the right way to go. More often than not, when you have large projects, your business logic is much more complicated, and that is when CompletableFuture does not seem like the right option, because there are a lot of constraints, a lot of if-else conditions, that you want to have. CompletableFuture does provide a few methods, for example methods to combine two different CompletableFutures and such, but they are still a little complicated. In that case I highly suggest that you look at the reactive framework RxJava, which is very popular, very stable, and used by most of the large companies. It does the same thing as CompletableFuture, but it is much more feature-rich and much easier to read and write the code with.

So that's it for this video, guys. Thanks a lot. If you have any questions or comments, let me know, and see you in the next one. Thank you.
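(For reference, the exception-handling pattern described above might look roughly like this; failedOrder() and the failed-order checks inside dispatch and sendEmail are hypothetical.)

    CompletableFuture.supplyAsync(() -> getOrder())
            .thenApply(order -> enrich(order))
            .thenApply(order -> performPayment(order))
            .exceptionally(ex -> failedOrder())      // any exception in the stages above lands here
            .thenApply(order -> dispatch(order))     // dispatch can check for a failed order and skip the real work
            .thenAccept(order -> sendEmail(order));  // sends a failure email if the order is marked failed
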
Info
Channel: Defog Tech
Views: 255,529
Keywords: Java 8, Java, CompletableFuture
Id: ImtZgX1nmr8
Length: 19min 33sec (1173 seconds)
Published: Fri Jan 26 2018