New Concurrency Utilities in Java 8 • Angelika Langer • GOTO 2014

Okay, welcome to this talk on the new concurrency utilities in Java 8. This is a talk for Java programmers. Can I safely assume that you are familiar with concurrent programming in Java? Who is not? Okay, that's good. So what we will discuss: we will get an overview of the new concurrency utilities that were added to Java 8, and we will learn about asynchronous result processing. They have been adding a new facility, a new class, the CompletableFuture, which is an alternative to the regular Future that was added with Java 5, and it's kind of a different way of processing the results that are produced by concurrently running threads. Another abstraction that has been added in Java 8 is StampedLock, which is an alternative lock; in particular, it's an alternative to the ReadWriteLock that has been available since Java 5. Then there is another addition, namely adders and accumulators; they are similar to AtomicLong, to the atomics in general. Most of these additions to the concurrency utilities are ways of providing optimizations, so there is barely anything radically new, but most of these new abstractions support optimizations, and I will go into some of the details. There are a couple of new methods in ConcurrentHashMap; they have actually been added to the Map interface, so even the plain single-threaded maps support the new operations, and they are particularly interesting for ConcurrentHashMap, because all these additional operations on a hash map are atomic if you use them on a ConcurrentHashMap. Then there is a new thread pool: it's a singleton fork/join thread pool that is used in various contexts. It's used by streams in Java 8, it's used by CompletableFuture, and also by some of the ConcurrentHashMap operations that have been added to the interface, and I will talk about how this so-called common pool differs from regular fork/join pools. And at a very low level, because we are working our way from top-level abstractions down into the internals of concurrent programming:
they have been adding an annotation, @Contended, which is now in sun.misc in Java 8; it's very low level and it is supposed to support avoiding false sharing. Okay, to add a couple of things to the introduction: I do training for a living most of the time, so I have a couple of seminars, one of them a seminar on concurrent programming in Java, and this is why I follow what is going on in Java and in the JDK. Okay, let's start with CompletableFuture. A future in general is a mechanism for passing a result produced by a concurrently running task to an interested party, typically another thread. It was added initially in Java 5, and there is a Future interface that describes what a future is; basically, the central operation is get, which retrieves the result. FutureTask is the implementing class that was added in Java 5, and they were mainly designed for use with thread pools. Let's take a look at the Java 5 way of using futures. What you do in Java 5 is: you create a thread pool, then you specify a Callable or Runnable, which I've been specifying here as a lambda expression, then you submit the Callable or Runnable to the pool, and the pool immediately wraps it into a FutureTask and returns a Future. Okay, the task itself is still sitting in the task queue of that pool, and you can use the Future, and in particular the Future's get function, to wait for completion of the result. The get function, by the way, is a blocking function: if the task is still running and the result has not yet been produced, you wait until the result is available, and then you can use it; in my example I'm simply printing it. The limitation of this is that you, as a user of the Future, have to actively decide when you want to retrieve the result, and in particular you might have to wait until the result is available. Until Java 8, there was no way of specifying some kind of asynchronous result processing.
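The Java 5 pattern she describes can be sketched like this; the pool size and the task (computing 6 * 7) are invented for illustration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Java5FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // submit a Callable (written as a lambda); the pool wraps it
        // into a FutureTask and returns a Future immediately
        Future<Integer> future = pool.submit(() -> 6 * 7);
        // get() blocks until the task has produced its result
        Integer result = future.get();
        System.out.println(result); // prints 42
        pool.shutdown();
    }
}
```

The point of the example is the blocking nature of get(): the caller must actively decide when to fetch the result and may have to wait.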
So in all cases you have to actively go and ask for the result. Are you familiar with lambdas? I've been using a lambda without even asking. Who is not familiar with lambda expressions? A few. Lambdas are basically a new language feature in Java 8, and the yellow part in my submit statement is a lambda. A lambda is basically an anonymous function, and if you look at the syntax, it has the arrow symbol in the middle; to the left you have an argument list, like the argument list of a method, and to the right you have the method implementation, like the body of a method. So it's some kind of anonymous method: it doesn't have a name, it just has an argument list and an implementation, and that's it. In my example I am providing a Callable to the submit function of my pool, so the lambda magically somehow implements the Callable interface, and basically it corresponds to what we've usually been doing with anonymous inner classes; the lambda is a concise notation for things that we've previously been solving with anonymous inner classes. Many of the new APIs in Java 8 use the new feature, so I will use lambdas in various places throughout the talk. Okay, so the limitation of the classic future is: no support for some kind of reactive result processing. What we were actually missing is that we could specify some kind of callback, some functionality that will be executed as soon as the result becomes available; asynchronous result processing. And this is what the CompletableFuture provides. CompletableFuture is a class in the java.util.concurrent package, and it combines the classic Future interface, so you can use a CompletableFuture like a FutureTask and just use the blocking get function in order to retrieve the result; but it also implements a new interface called CompletionStage, and CompletionStage has a couple of dozen functions, many of them for asynchronous result processing,
like in the example here. Let's assume that I've received a CompletableFuture; I will go into how you can obtain CompletableFutures in a minute. Once I have a CompletableFuture, I can use the get function and wait for the result, but alternatively I can use the thenAccept function, and what I'm specifying there is functionality that will be executed as soon as the result is produced, so I don't have to wait. The thenAccept function just hands over the functionality, and the thread that is executing the task and producing the result will subsequently call the lambda that I've provided to the thenAccept function as the argument. Basically, what I'm saying is: once the result is available, please print it. So I do not have to wait; I just make available some functionality that will be executed exactly when the result becomes available, and it is executed asynchronously in a different thread. I don't have to wait, I don't have to blindly re-check until the result becomes available; I just specify ahead of time what will be done once the result is there. Okay, I've left open where I got that CompletableFuture from. CompletableFutures are also created by passing tasks, Runnables or some kind of Callables, to a pool, but it is not done by using the pool directly and calling a submit or execute function; instead, the CompletableFuture has static factory functions, factory methods, to which you supply your tasks, and they return a CompletableFuture. There are two different flavors of these static factory methods. One is for Runnables, and these methods are called runAsync, and runAsync with an Executor, so you can specify in which pool the Runnable will be executed; if you do not specify a pool yourself, then the fork/join common pool is used, which I will go into a little later. Okay, so you can pass over Runnables, and what is returned is a CompletableFuture<Void>.
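A minimal sketch of this thenAccept style; the supplied string and the printed message are made up:

```java
import java.util.concurrent.CompletableFuture;

public class ThenAcceptDemo {
    public static void main(String[] args) {
        // supplyAsync hands the task to the fork/join common pool
        CompletableFuture.supplyAsync(() -> "some result")
            // register a reaction; it runs when the result becomes
            // available, no blocking get() required
            .thenAccept(result -> System.out.println("got: " + result))
            // join() here only so the demo does not exit before the reaction ran
            .join();
    }
}
```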
Runnables do not produce a result, so what you can specify a reaction to would be the completion of the Runnable, without receiving or processing any result. If you have a result-producing task, which traditionally we have been expressing as a Callable, you would call supplyAsync and pass in an implementation of the Supplier interface. The Supplier interface is a new interface; it was added in conjunction with the streams that are also part of the Java 8 JDK. There is a new package called java.util.function, and it has a couple of predefined functional interfaces that are typically used as the argument types of stream operations, or, like in this example, as the argument type of the supplyAsync function. The Supplier interface is a functional interface; functional interfaces have one and only one abstract method, and the abstract method of Supplier doesn't take anything as an argument but produces, or supplies, a result. In this sense it is similar to a Callable; remember, Callables don't take arguments, but they produce a result. The difference between a Callable and a Supplier: if you remember the Callable, its call method is allowed to throw checked exceptions, and all the APIs that are using lambdas have difficulties with checked exceptions, so nowadays all the lambda-friendly interfaces are typically free of checked exceptions. The Supplier has a function that takes no argument, produces a result, but doesn't throw any checked exceptions; basically, it's an exception-free Callable. So you have these two flavors of passing a task to either the fork/join common pool or a pool of your choice, and what is returned is a CompletableFuture, which allows you to use the traditional synchronous get function, that is, the classic Future interface, or all the CompletionStage functions like thenApply or thenAccept, and there are several other functions that allow you to specify reactions to the result becoming available.
Okay, so comparing the options: this is the old way of doing it, where you actively use the future and decide that now you want to wait for the result, and then you get the result and process it in some way. The reactive way, using CompletableFuture, is: I start a task, or rather pass it to some pool, I immediately receive the future, and then I specify the reaction that is executed upon the event that the result becomes available. Okay, the CompletableFuture, like quite a number of new APIs that were added to the JDK with Java 8, uses the so-called fluent programming style, and fluent programming means that you have an API in which almost all operations return an object on which you can perform the next operation, in a chain of operations. So each operation returns something on which you can execute the next operation. I've been using an example here: you can have short sequences of operations, or some kind of lengthy ones like the lower part here. I'm supplying a callable-like task to some kind of pool; it calls a getStockInfo method that, given a stock symbol, retrieves a string that contains information about the stock. Then whenComplete: once that string has been produced, when it is complete, I just print what has been retrieved as the result, and whenComplete also returns a CompletableFuture on which I can call the next operation. Then I say thenApply: apply an extractIncreaseRate method to the result of the getStockInfo stage, and it extracts some kind of double, the increase rate. Once the increase rate is produced, and that's also a CompletableFuture, containing a Double in my case, I call a thenAccept function, and there I format the increase rate and print it. Afterwards I have some kind of termination method, which is thenRun, and thenRun is one of the methods that do not pass on a result any longer, so that's kind of the end of my sequence.
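The fluent chain from the slide could look roughly like this; getStockInfo and extractIncreaseRate are stand-ins implemented with dummy values, since the real retrieval logic isn't shown in the talk:

```java
import java.util.concurrent.CompletableFuture;

public class FluentChainDemo {
    // dummy stand-ins for the methods mentioned on the slide
    static String getStockInfo(String symbol) { return symbol + ": up 3.5%"; }
    static double extractIncreaseRate(String info) { return 3.5; }

    public static void main(String[] args) {
        CompletableFuture.supplyAsync(() -> getStockInfo("ORCL"))
            .whenComplete((info, error) -> System.out.println("info: " + info))
            .thenApply(FluentChainDemo::extractIncreaseRate)
            .thenAccept(rate -> System.out.printf("increase rate: %.1f%%%n", rate))
            .thenRun(() -> System.out.println("done"))
            .join(); // wait here only so the demo does not exit early
    }
}
```

Each operation returns a new CompletableFuture, which is what makes the chaining possible.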
Question from the audience: does supplyAsync run the task? No, supplyAsync just passes the task to a thread pool; it's like submit or execute with the traditional thread pools. It just passes it to the pool, the pool puts it into some kind of task queue, and there it sits until a thread is available to execute it. So this is not a blocking function: you just pass over the task and it returns the future immediately. Could the task finish before I specify thenAccept? In theory, yes. If a thread in the pool is available and executes it immediately, and it takes only nanoseconds to complete, then the result would become available immediately, and immediately afterwards all the registered reactions would be executed; the CompletableFuture handles this case in particular. So it need not take time, not necessarily. But usually you pass tasks to a thread pool that typically take some time, so usually you are finished with providing all the reactions before the task even runs, typically. Okay, so what's the big deal with this kind of reactive style? It provides higher scalability, because with the traditional way of retrieving the result you typically have to wait, and you're wasting time just waiting for the result; or you can wait with a timeout, which is basically polling, and then you waste resources on the blind re-checks, the many re-checks, until you actually receive the result. With the CompletableFuture you wouldn't waste any time: you do not have to re-check, you don't have to wait, you just specify what kind of reaction is executed right at the moment when the result becomes available. It's basically like providing a callback that is invoked upon the event of result completion. Okay, the first time I looked at the interface I was overwhelmed, because it has three or four dozen methods. So there are lots of operations; most of them are for so-called result users, and there are methods that stem from the classic Future interface.
They have been adding a couple of logical extensions to the classic Future interface; then we have the factories like supplyAsync and runAsync that I've been showing you; then it has all the CompletionStage functions for providing reactions; and then it has bulk operations, like do something after all of these futures complete, or after any of these futures completes. So it has a really fat interface, I think. Okay, a couple more details. There is not only the thenAccept function for specifying reactions; there are three different functions. After you've received a CompletableFuture, you can say thenApply with a Function, thenAccept with a Consumer, or thenRun with a Runnable. The difference, basically: let's take thenAccept. It takes a Consumer, and a Consumer is something that takes the result, somehow consumes it, processes it, and doesn't return anything, so the result would be a CompletableFuture<Void>. If you use thenApply, you have to provide a Function, and a Function is something that takes the result and produces a new result, or something else; it kind of maps the result to some new object or information. So it takes a T and returns a CompletableFuture<U>; that's kind of a mapping function. thenRun takes a Runnable, and as you know, Runnables don't take anything and don't return anything; that's just: I want to react upon completion without receiving the result, without doing anything with it, and I don't return anything, so it returns a CompletableFuture<Void>. So there are these three different flavors of providing a reaction, and each of these three exists in three different flavors, which makes for the fat API; the combination alone is already nine different functions. The three flavors, using the example of thenRun: thenRun runs the reaction synchronously, thenRunAsync runs it asynchronously, or asynchronously with a specific pool of my choice. And basically the difference is: if you say thenRun,
to run synchronously, then the thread, typically a pool thread, that has been producing the result will also execute the Runnable that you have provided; and if you say thenRunAsync, another pool thread, a new pool thread, a different pool thread, will execute the Runnable. So one thread executes the task and produces the initial result, and then the Runnable will be started in a different thread; and you can even start the Runnable in a thread of a different pool if you want. Okay, so you have nine different functions for providing various reactions in various flavors. Then there are more functions, for example for combinations: if you want to react upon the completion of two different futures, there is a thenCombine function. Example here: I supply a callable-like task that gets a stock symbol's daily lowest rate, and another one for the highest rate, so I have two futures for two different executions of two different tasks, and two different results, and I'm going to do something on completion of both. So I take one of the futures, say thenCombine with the other future, and provide a reaction that takes both results and produces something new; in my example I just calculate the difference of daily low and daily high, and the result again is a CompletableFuture, of Double in my case, and then I apply a formatting function that prints it. So you can have combinations of futures and react upon the completion of several things. Can you combine more than two futures? Yes, you can: there is an allOf function to which you supply a number of futures, and then you react on the completion of all of them, and there is also an anyOf function. Okay, this is kind of an overview of the dozens of functions: you have the factory methods, supplyAsync and runAsync, with a pool and without a pool, different flavors of each of them; then you have the thenApply, thenAccept, thenRun things in three flavors, yet another nine functions for specifying reactions.
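A hedged sketch of the thenCombine idea; the two rates are dummy constants instead of real stock lookups:

```java
import java.util.concurrent.CompletableFuture;

public class CombineDemo {
    public static void main(String[] args) {
        // two independent tasks; the rates are invented values for illustration
        CompletableFuture<Double> dailyLow  = CompletableFuture.supplyAsync(() -> 17.5);
        CompletableFuture<Double> dailyHigh = CompletableFuture.supplyAsync(() -> 19.0);
        // react when BOTH futures are complete: compute the difference
        dailyLow.thenCombine(dailyHigh, (low, high) -> high - low)
                .thenAccept(diff -> System.out.println("spread: " + diff))
                .join();
    }
}
```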
Then you have functions for specifying a reaction upon failure: whenComplete, handle, and exceptionally. exceptionally is called if the task throws a runtime exception. handle receives both the result, if it was successful, and the exception, if it failed, and then it can produce a reaction; handle can produce a replacement result, while whenComplete doesn't return a result: it just takes the result in case of success, or the exception in case of failure, and then produces some kind of reaction. Then there is a pipelining operation, thenCompose. Interestingly, this is basically a flatMap, some kind of flatMap operation: when we look into the Stream interface, there is a flatMap function, and thenCompose is basically the corresponding thing for the CompletableFuture. And then there are combinations, thenCombine and thenAcceptBoth and runAfterBoth, various flavors of combining two futures, and then there are allOf and anyOf for more than two futures. So this is the CompletionStage, the reactive interface of CompletableFuture; and then it supports the traditional Future stuff like get and cancel and isDone, and they have added kind of logical extensions of the Future. There is a getNow that returns immediately if the result is available; so far, if you just wanted to poll, you had to call get with a timeout, a very short timeout; now you have getNow, a function that returns immediately. And you have a join function, which is basically a get function without a checked exception: it wraps the exception, if the task failed, into a runtime exception instead of a checked ExecutionException; otherwise it's just like the get function. cancel is funny; I will get into this in a minute. It's just supported because the Future interface always had a cancel function. Okay, then there is another part of the API, which I call the result-provider API. This is for people who want to return CompletableFutures from their own functions.
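The failure-handling operations just listed might be exercised like this; the failing task and the fallback value -1 are invented for illustration:

```java
import java.util.concurrent.CompletableFuture;

public class RecoveryDemo {
    public static void main(String[] args) {
        // a task that fails with a runtime exception
        CompletableFuture<Integer> failing =
            CompletableFuture.supplyAsync(() -> { throw new IllegalStateException("boom"); });
        // exceptionally supplies a replacement result in case of failure
        Integer fallback = failing.exceptionally(ex -> -1).join();
        System.out.println(fallback); // -1

        // handle sees both result and exception and can map either case
        Integer handled = CompletableFuture.supplyAsync(() -> 10)
            .handle((result, ex) -> ex == null ? result * 2 : -1)
            .join();
        System.out.println(handled); // 20
    }
}
```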
What we've been looking into so far is: somebody provides me with a CompletableFuture, and I want to use the CompletableFuture in order to specify all my reactions. But I could also be a result provider myself: I'm providing an API which returns a CompletableFuture of something, so I am providing CompletableFutures to the users of my API. And then what I will be doing is: I must provide a future which is initially empty, and then I have to make provisions that the result is computed and that the future is completed as soon as the result becomes available. What does that look like? Let's say I want to implement a getWebPage function. It receives a URL, and then it is supposed to provide the content of that web page as a string, and since it takes time to access the URL and read all the content and stuff it into a string, I want to provide a CompletableFuture so that the user of my getWebPage function doesn't have to wait. I could implement it and just have the user wait until the string becomes available; then my getWebPage function would return a String, but that would be a blocking function that takes a long time. If I want to make it more convenient for my users, I say: okay, I provide you with a CompletionStage, which is basically a CompletableFuture of String, and then you can specify all your reactions without waiting for the completion of the result. Inside that function I would have to create an empty, yet incomplete, future; then I have to set up the result calculation, which must be set up in a way that it eventually completes the future; and then I return the future, even before the result exists, so that the user can already start specifying all the reactions. In source code it looks like this. This is my getWebPage function; it returns a CompletionStage<String>, a CompletableFuture or a CompletionStage of String. I create an empty CompletableFuture by calling the constructor; then I set up the result computation: I set up a Runnable that
calls a blocking readPage function, which blocks and takes a long time; once it is done, it returns the content of the web page as a string. I must make sure that the Runnable, as its last action, completes the future; there is a complete function, the future has a complete function for exactly this purpose, and it stuffs the result of the blocking read into the future. If it fails, there is a completeExceptionally, where you can pass in the exception. Then you have to make provisions that this task, this Runnable, is executed, so I pass it to some kind of pool, and then I return the empty future. So far I've just been setting up the Runnable and passing it to a pool; the task is probably not yet running, and I return the empty future to my user, and the user can take the future and specify all the reactions to the completion of this future. Okay, so this is for the case that you want to design interfaces that return CompletableFutures; in that sense you would be a result producer instead of a result user. Okay, there are these two functions, complete and completeExceptionally, for completing a yet empty future. Okay, so if you compare FutureTask, the Java 5 way of implementing futures, and CompletableFuture, there's one key difference: FutureTask is actually a combination of a future and a task; it knows the task which produces the result, whereas the CompletableFuture decouples the hand-over mechanism, the future characteristics, from the task execution. CompletableFuture doesn't know anything about the task that executes and produces the result, which makes for funny results, because with FutureTask the cancel function made sense: you could cancel the task while it was running. CompletableFuture has a cancel function, and you can call it, but the result is just that the future completes exceptionally, and the task is still running and producing the result. So, since it doesn't know the task, cancel is kind of debatable.
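A sketch of the result-provider pattern she walks through, assuming a hypothetical blockingReadPage helper that stands in for the real page download:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ResultProviderDemo {
    static ExecutorService pool = Executors.newCachedThreadPool();

    // stand-in for the slow, blocking page read from the slide
    static String blockingReadPage(String url) { return "<html>content of " + url + "</html>"; }

    static CompletionStage<String> getWebPage(String url) {
        // 1. create an empty, not-yet-completed future
        CompletableFuture<String> future = new CompletableFuture<>();
        // 2. set up the computation so that it eventually completes the future
        pool.execute(() -> {
            try {
                future.complete(blockingReadPage(url));
            } catch (Exception e) {
                future.completeExceptionally(e);
            }
        });
        // 3. return the (still empty) future right away
        return future;
    }

    public static void main(String[] args) {
        getWebPage("http://example.org")
            .thenAccept(System.out::println)
            .toCompletableFuture().join();
        pool.shutdown();
    }
}
```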
So, if you use the reactive style and the fluent programming style, it would never occur to you to call cancel in the first place. Does it interrupt the running task, you ask; you mean, if I call cancel on a CompletableFuture, does it interrupt the task? No, it doesn't, because it doesn't know the task; the task is just alive and running and keeps on going until it finishes by itself, although nobody is interested in the result any longer. It's a bad idea to call cancel, I would say. You can certainly derive from CompletableFuture and override cancel with whatever semantics seem reasonable to you; that is certainly doable, but the default implementation just completes the future exceptionally, and that's it. Okay, good. Any further questions regarding CompletableFuture? Otherwise I'm moving on to the next abstraction. The next one is StampedLock. StampedLock, as I said, is an alternative to the ReentrantReadWriteLock. The idea of the classic Java 5 ReentrantReadWriteLock is: it wants to allow concurrent execution of readers, and only wants to synchronize the combination of reading threads and writing threads, or writing threads with other writers. If you have several concurrent read accesses to the data structures, that doesn't need synchronization; but if you use the regular ReentrantLock, or synchronized blocks, readers would also be blocking other readers, and in order to avoid this unnecessary synchronization of one reader with another reader, the ReentrantReadWriteLock has a mode in which several readers can access the data while writers are blocked; otherwise it blocks the combination of two writers, or of readers and writers, but readers alone can run in parallel. The StampedLock is some kind of extension to it, or it's very similar. The StampedLock internally maintains a version number, the stamp, which is where part of the name comes from, and an internal note of whether the StampedLock is in read mode,
write mode, or an optimistic read mode. This is what makes the difference from the ReentrantReadWriteLock, which is always a pessimistic lock. Pessimistic means: I want to acquire the lock, and if I acquire the write lock, I want to make sure that I have exclusive access to the data; no other writer, no reader can run; this is what I expect of a pessimistic lock. Similarly, if I acquire a pessimistic read lock, I want to make sure that only readers are running and no writer is running. What the StampedLock provides in addition is an optimistic read mode, and this allows for optimizations in low-contention situations, where there are only very few threads and there's hardly any concurrent access to the data. Then I want to try to read the data without actually acquiring the lock; I just hope for the best. In optimistic mode I read the data, grab it, but I cannot be sure that I really had exclusive access to it; I have to validate afterwards and make sure that there was no concurrent access, that my optimistic approach was justified. And this optimistic approach exhibits better performance, unless it fails. Let's see. So internally it maintains a stamp; each time you acquire the lock, a stamp is presented to you, and you need to keep that stamp until you unlock, or release, the lock; and if the stamp is 0, it means you didn't receive the lock, so the acquisition failed. Then it has the write mode, which is a pessimistic write mode, like with the ReentrantReadWriteLock: it makes sure that the thread that acquires the StampedLock in write mode is the only one, the only thread; it has exclusive access to the data. In that mode, if there's one thread in write mode, then no readers can run, and every attempt to acquire an optimistic read lock will fail; it really guarantees exclusive access to the data. Then we have the read mode; that's the same read mode that we have
with the ReentrantReadWriteLock: it allows non-exclusive access, because you can have arbitrarily many concurrently running readers; writers are blocked, but readers can run in parallel. So you don't have exclusive access, because you are just reading the data, and other threads are also allowed to read the data concurrently; this is exactly like the ReentrantReadWriteLock. And then there is this optimistic read mode, and the optimistic read mode is an extremely weak version of a read lock. I'm just saying: okay, I will try an optimistic read, and afterwards I will simply read, no matter whether there is concurrent access or not, and after reading I will ask whether there was concurrent access in between, from the time when I announced I would read until I asked whether it was valid. And the optimistic read mode can be broken at any time: if there is one thread saying, okay, I'm optimistically reading, other readers can join, which is not a problem, but writers can also join and modify the data that the optimistically reading thread is currently reading, and in that case you're probably reading inconsistent data, half modified, half not. So you must validate afterwards. Let's take a look at an example. Okay, this example is classic reentrant read-write, pessimistic locking: you have data, you want to synchronize it by using a ReadWriteLock, and the idea is you use the write-lock part for modifying functions and the read-lock part for all read-only functions; that's the classic way of doing it, pessimistic read and write locks. You can do the same with the StampedLock; it looks slightly different: you create a StampedLock, and each time you acquire a lock, a write lock or a read lock, you receive a stamp, and you need to use the stamp and pass it to the corresponding unlock function; otherwise it's exactly the same as the ReentrantReadWriteLock. The new stuff comes into play when I want to try an optimistic read. The optimistic read is prominently for short sequences of read
access to the data; it should be a very short code segment, because the longer it takes to read the data, the more likely it is that there is a concurrent write access in between and that you will fail eventually. Okay, so this is what it looks like: my data consists of two integers, primitive-type integers, part1 and part2. I have a StampedLock, and now I want to read optimistically, so I say tryOptimisticRead; thereby I'm announcing: okay, I will try an optimistic read, and I will later call a validate function, and then please tell me whether there was a concurrent write access in between. So I read the two integers, which is a very short code segment of reading, and then I validate: I ask, while I was reading, was there some kind of concurrent write access? If so, the data can be inconsistent; you might have been reading one integer before the modification and the other one afterwards, and you have inconsistent data, which is useless. So the validation can fail. If it succeeds, then you can just use your local integers and calculate with them, and take your time, because now you're working on local copies of the actually interesting data. If it fails, there could have been concurrent write accesses in between, and validate tells you: false, it failed, there was concurrent access. Then you need to produce some kind of reaction. One thing you can do is a retry: you could spin in a loop and retry until you eventually succeed. Or you can fall back to pessimistic locking, which is what I'm doing here: as soon as the validation fails, which is basically the message that there are concurrent writers and your data is inconsistent, I acquire the pessimistic read lock, and acquiring the pessimistic read lock means I am waiting until all writers are gone, and then I get the guarantee that only readers will be active while I am reading part1 and part2 of the interesting data. Afterwards you have to unlock the read lock.
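The optimistic-read idiom with fallback to the pessimistic read lock might be sketched like this; the two-integer data and the readSum method are made up for illustration:

```java
import java.util.concurrent.locks.StampedLock;

public class OptimisticReadDemo {
    private final StampedLock lock = new StampedLock();
    private int part1 = 1;   // sample data: two plain ints
    private int part2 = 2;

    int readSum() {
        // announce an optimistic read; no lock is actually acquired
        long stamp = lock.tryOptimisticRead();
        int p1 = part1, p2 = part2;            // very short read section
        if (!lock.validate(stamp)) {
            // a concurrent writer intervened: fall back to the pessimistic read lock
            stamp = lock.readLock();
            try {
                p1 = part1;
                p2 = part2;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return p1 + p2;       // safe: computed from consistent local copies
    }

    public static void main(String[] args) {
        System.out.println(new OptimisticReadDemo().readSum()); // 3
    }
}
```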
Then I can calculate whatever I want to calculate based on the data. Five minutes left — okay. That much about StampedLock; it also has optimistic upgrade techniques, if you want to upgrade from a read lock to a write lock, which is another difference from the regular ReentrantReadWriteLock.

Adders and accumulators. Adders and accumulators are not extremely interesting; they are basically an alternative to AtomicLong. You know the Atomics — Atomics are used for lock-free programming. You wouldn't use any kind of lock in order to get exclusive access to a long value, for instance; instead you would be using an AtomicLong. Using an AtomicLong always means you are reading the current value with one atomic operation, then you calculate a new value based on the previous value, and while you're calculating there could again be concurrent write access to the data you've been reading. So, similar to the validate we just had, you will be doing a compare-and-swap operation. Compare-and-swap means: okay, if the previous old value is still there, then please overwrite it with the new value; and if the old value isn't there any more, then there was concurrent write access, and it fails and tells you so. Typically, with an AtomicLong, the reaction to failure is retry — spin in a loop until you succeed. AtomicLongs are thread-safe and used for lock-free programming, lock-free access to a long, for instance. There is one problem with the AtomicLong — let's say there is room for optimization, which the LongAdder provides. If you have very many threads accessing the AtomicLong, they will fail very often, and they will spin quite a while in a loop until they succeed. In order to avoid this retry latency, they provided the LongAdder, and the LongAdder doesn't maintain a single long which is concurrently accessed; instead it has several atomic cells, so it splits the actual long value up across various longs.
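The read / compute / compare-and-swap / retry cycle just described can be sketched like this — the addDelta helper is my own illustration; AtomicLong's built-in addAndGet does the same thing internally:

```java
import java.util.concurrent.atomic.AtomicLong;

class CasDemo {
    // Lock-free add: read, compute, compareAndSet, and retry on failure.
    static long addDelta(AtomicLong counter, long delta) {
        long prev, next;
        do {
            prev = counter.get();                      // atomic read of the current value
            next = prev + delta;                       // compute the new value locally
        } while (!counter.compareAndSet(prev, next));  // fails on concurrent write; then spin
        return next;
    }
}
```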
So if one thread wants to add something to the LongAdder and one of the cells is currently under concurrent access, just one of the uncontended cells is updated, so the entire result isn't produced until you actually ask for the content of the LongAdder. Basically it tries to reduce contention and access failure by splitting the value up into different cells, so that a thread adds its value to an uncontended cell, and only when you retrieve the result is the actual long value calculated. So you have add, increment, and decrement functions that don't return anything, which is different from AtomicLong — AtomicLong has a get-and-increment function that returns the value, while this one doesn't return the value until you ask: give me the sum. Then it adds up over all the cells. This is the LongAdder; the DoubleAdder is the same thing, and the accumulators are just a generalization: you can provide an accumulation function, so they need not calculate the sum — you could calculate the product or whatever other kind of accumulation you want to do.

Okay, ConcurrentHashMap has a lot of interesting extensions. ConcurrentHashMap has been reorganized internally: the buckets are no longer lists but trees if the key type is comparable, which is basically an optimization for key types that have poor hash functions. That's an internal thing. Then it has additional functions, and these additional functions exist in the regular Map interface too, so even the single-threaded maps support them. For instance, you have a compute function: you do not provide the key and the value, you provide the key and a function that calculates the value for that key. There are also computeIfAbsent and computeIfPresent functions, and there is an example here where I'm using computeIfAbsent. Given a list of strings, I want to produce a ConcurrentHashMap that associates with each string the frequency with which it appears in my list of strings.
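A small sketch of both abstractions described above — the sum via LongAdder, and for the accumulators a product as the generalized accumulation function (the identity value 1 is my choice for a product):

```java
import java.util.concurrent.atomic.LongAccumulator;
import java.util.concurrent.atomic.LongAdder;

class AdderDemo {
    static long adderSum() {
        LongAdder adder = new LongAdder();
        adder.increment();          // returns nothing, unlike AtomicLong
        adder.add(41);
        return adder.sum();         // only now are the cells added up
    }

    static long accumulatorProduct() {
        // Generalized accumulation: a product instead of a sum.
        LongAccumulator product = new LongAccumulator((a, b) -> a * b, 1L);
        product.accumulate(6);
        product.accumulate(7);
        return product.get();
    }
}
```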
So I create a ConcurrentHashMap of String and LongAdder; the LongAdder becomes the frequency counter. Then I loop over all the strings in my list of strings, and I call computeIfAbsent. If a string has not yet been added as a key into the map, I calculate the initial value, which is a new LongAdder — a new LongAdder is initialized with the value zero — and the new value is returned from computeIfAbsent. So if the word appears for the first time, I produce a key-value pair with the string and a new LongAdder that contains zero; the LongAdder is returned, and I increment it. The next time the word appears, it's no longer absent, and then just the associated value is returned, and I increment it. This way the result is a map of words and word counters, and all these operations on a ConcurrentHashMap are guaranteed to be atomic. The downside — or let's say one of the requirements — is that your remapping functions must not take a long time; otherwise they would be blocking access to certain segments of the hash map. There is also the merge function, and there are forEach, search, and reduce functions in four flavors — so twelve additional new methods, all atomic methods, in ConcurrentHashMap.

Okay, let's skip a couple of details. The common pool — I already mentioned there is a common pool that is used in all cases where you do not provide an explicit pool, for instance if you supply a Runnable to the CompletableFuture facilities; the streams also use the common pool. It's a single ForkJoinPool — one single, static, pre-configured ForkJoinPool. There is only one, for the stream operations and for all the cases in which you need a pool but do not specify one yourself. It differs from a regular ForkJoinPool by lazy initialization: it starts with no threads and then builds up threads on demand.
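The word-frequency example just walked through, as a compilable sketch — the class and method names are my own:

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

class WordFrequency {
    static ConcurrentHashMap<String, LongAdder> count(List<String> words) {
        ConcurrentHashMap<String, LongAdder> freq = new ConcurrentHashMap<>();
        for (String word : words) {
            // Atomically create the LongAdder on first sight of the word;
            // afterwards the existing adder is returned and incremented.
            freq.computeIfAbsent(word, key -> new LongAdder()).increment();
        }
        return freq;
    }
}
```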
The number of threads is reduced again when they are no longer needed. It has a default pre-configured size, which is availableProcessors minus one, and there are system properties with which you can change that value, so you can change the size of the pool. The shutdown behavior is also different: it's basically a pool of daemon threads, so as soon as the last user thread terminates, all the pool threads are automatically terminated. That can produce certain problems: if you've been passing tasks to, let's say, a CompletableFuture or a stream operation, and you would actually still need the common pool, but the last user thread exits, then the pool is shut down automatically and your tasks will no longer be executed. That requires some kind of measure: you have to wait until the pool has executed all the tasks you passed to it before the last user thread exits. There is an awaitQuiescence function for doing exactly this: it waits for a quiescent pool, where all threads are idle with nothing to do. You can await this particular state of the pool and then terminate.

Is there still time for Contended? Like two minutes? Okay, two minutes for Contended. Contended is very, very low-level. It addresses false sharing, and false sharing means — well, you know the architecture of multi-core CPUs: they heavily use caches, core caches that are linked to a certain core, and often you have hierarchical caches. The idea is: if objects or data are placed into the cache, and one thread modifies the cache and other threads want to see the modification, there must be flushes and refreshes to main memory. This is what the memory model of Java is about — our memory model has rules for synchronization, for volatile, and also for final fields — which is fine, but you can run into the following problem. Typically you have a class like MyPoint, with two integer fields.
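The common-pool shutdown issue described above can be sketched as follows — without the awaitQuiescence call, the daemon pool threads may be killed before the task runs; the five-second timeout is my own choice:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;

public class CommonPoolShutdownDemo {
    public static void main(String[] args) {
        // No explicit executor given, so this task runs on the common pool.
        CompletableFuture.runAsync(
                () -> System.out.println("task runs on " + Thread.currentThread().getName()));

        // The common pool consists of daemon threads: if main exited now, the
        // task above might never execute. So wait until the pool is quiescent,
        // i.e. all pool threads are idle with nothing left to do.
        ForkJoinPool.commonPool().awaitQuiescence(5, TimeUnit.SECONDS);
    }
}
```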
The compiler tries to pack all the contents of an object densely together in order to save memory, so the entire object with both integers would end up in the same cache line of a given core. Now, if one of the two fields is a hot field — let's say it's a volatile field with concurrent access that needs many flushes and refreshes — then typically the entire cache line becomes hot. If one bit or one integer inside that cache line needs flushes and refreshes, the hardware flushes and refreshes the entire cache line each time it is accessed, even if it is accessed on a cold field. The cold field doesn't even need the flushes and refreshes, but because there is a volatile or hot field in it, the entire cache line is updated and flushed and refreshed all of the time. So if one of the fields is a hot one, a volatile one, it triggers flushing and refreshing each time anything in that cache line is accessed, and that is a performance impediment — the caching is supposed to boost performance, but here you get quite the opposite, because you have cold fields and hot fields in the same cache line. What the Contended annotation tries to do is separate cold fields from hot fields: you mark the hot stuff with the Contended annotation, and what the JVM does is insert padding bytes to make sure that the cold fields sit on one cache line — if they are accessed, no flushing and refreshing is necessary — and the hot field sits on another cache line, and each time it is accessed, all the necessary flushing and refreshing is done, but only then, only when you access the hot field. Contended comes in different flavors: for single fields, for groups of fields, and for entire objects, that is, all fields of an object. It's very, very low-level; if you don't know what to do with it, then you probably don't need it. It's used inside the JDK implementations — for instance, in the ForkJoinPool.
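Since @Contended lives in an internal package (sun.misc in Java 8) and only takes effect with -XX:-RestrictContended, here is a hand-made padding sketch of the same idea instead — note that the JVM is free to reorder fields, which is exactly why the annotation is the more reliable mechanism; the class and field names are my own:

```java
// Manual cache-line padding: the unused long fields push the hot,
// frequently written field away from the cold data, so the two are
// likely to land on different (typically 64-byte) cache lines.
class PaddedCounter {
    int coldField;                          // read-mostly data
    long p1, p2, p3, p4, p5, p6, p7;        // padding before the hot field
    volatile long hotField;                 // frequently modified by many threads
    long q1, q2, q3, q4, q5, q6, q7;        // padding after the hot field
}
```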
In a ForkJoinPool, every thread has its own work queue, and the work queue should be thread-specific; it is a Contended class, so that you don't get false sharing from several work queues sitting in the same cache line, which would create flushing and refreshing no matter which thread accesses which queue. Okay, that's an example. Okay — any questions?

So first of all, thank you. We already have some questions online; since you already took some from the audience during the talk, I would like to take this one. One question is: in the example with the new lock, should the fields be volatile? — In the example with the new lock, should the fields be volatile? That's probably addressing the StampedLock. No, if you use the StampedLock, the fields need not be volatile, because it's a lock, and every lock makes sure that there are flushes and refreshes that guarantee visibility of the modifications done under the protection of the lock. So they need not be volatile.

Okay, we have another question, which formally is a question, but I don't think you can answer it — it's more like one directed at Oracle. So maybe one more question from the audience. Here's one: I have learned that using classes from the sun package is a big no-go; how official is this annotation? — It is official; it's an official part of the Oracle JDK. If you need it — if you build low-level abstractions — people are using it, but it's not for everyday programming. So it is official, but most likely you will not need it in most cases; it's basically provided for the library implementers and other people who build libraries or frameworks of their own. So it's official, but still, you probably shouldn't be using it. Okay, thank you very much — lots of interesting insights into new stuff in Java 8, which was kind of drowned in the noise generated by the lambdas. Hope to see you in a few minutes for the last talk of today.
Info
Channel: GOTO Conferences
Views: 33,654
Rating: 4.93 out of 5
Keywords: Angelika Langer, GOTO, GOTOcon, GOTO Conference, GOTO Berlin, GOTOber, Concurrency, Java (Programming Language), Java 8, Concurrency Utilities, CompletableFuture, StampedLock, ReadWriteLock, LongAdder, LongAccumulator, AtomicLong, ConcurrentHashMap, Programming Language (Software Genre), Videos for Developers, Women who Code
Id: Q_0_1mKTlnY
Length: 51min 57sec (3117 seconds)
Published: Wed Jun 10 2015