C++ Multi Threading Part 2: Mutex And Conditional Variables

Captions
[Music] Hello guys, welcome to this video. Today I will talk about thread synchronization: mutexes and condition variables. We saw previously that creating a thread is very easy and straightforward, but when it comes to adding synchronization and using mutexes and critical sections, things get complicated quickly. So I'm sorry this video is a little longer; I try to explain and go a little deeper. In this video I don't do that much live coding, because I want to keep the video shorter, but I will add the link to my GitHub so you guys can download the code and work on it in your own time. I'm very excited to create this video, so let's get started.

What do we cover today? First we will talk about C++ mutexes and how we can use them to create mutual exclusion. Then we will talk about locks on mutexes and the different kinds of locks, and finally we will talk about communication between threads, namely using shared memory or condition variables.

Now, why do we talk about mutexes? Why do we need them? One of the main reasons we go after mutexes is called a race condition, so let's talk about it. A race condition happens when two or more threads compete to access a shared resource. For example, here two threads, thread 1 and thread 2, are trying to write into the shared variable g_x. One of them wants to write the value 1, the other wants to write the value 2; we say t1 and t2 are racing to write into g_x. Once these two execute, at the end we don't really know what the value of g_x will be. Because of this race, sometimes thread 1 might win and sometimes thread 2, so this is called undefined behavior: a race condition results in something called undefined behavior, because we don't really know what the behavior will be, and it might depend on the implementation. Now, don't take my word for this; let's run it and see what happens. I will put the GitHub link in the description if you guys want
to run this yourself. So I'm running this a thousand times, and these two threads race a thousand times to write either 1 or 2. You can see that most of the time thread 2 wins and we end up with the value 2, but there are also runs where thread 1 wins; you can spot some 1s here and there. That means this is not deterministic: we don't really know what's happening here.

Let me show you another example. Imagine I have a very simple function called incrementer, which increments one variable, starting from zero, a hundred times. Now, what happens if I put this inside a thread and run a hundred of these threads together? So we have one counter and 100 threads, and each thread tries to increment it 100 times. One expectation would be that g_counter ends up at 10,000, because 100 multiplied by 100 is 10,000. Let's execute this really quick and see if that's the case. The file you want to look at is under source, the no-lock main; make sure you enable C++17 when you build, and then let's run it. Again I run this a thousand times: each time I create a hundred of these incrementer threads and record what happens when they all run together. As you can see, most of the time we get 10,000, but there are also many runs where we don't get exactly 10,000; we get 9,929, 9,808, 9,936 and so forth. So this is very nondeterministic. Hopefully by now I've convinced you that expectation is very different from reality when it comes to multithreading. And hence: welcome to multithreading.

What went wrong? Each thread tries to increment g_counter while running in parallel with the others. So first of all there's a race between the threads: all of them want to increment g_counter, a shared resource, at the same time.
But it's really worse than this, because even the increment itself is not an atomic operation: it requires reading, incrementing, and then writing back the result. All of these operations between thread 1 and thread 2 can, in reality, get interleaved in a nondeterministic way. What do I mean by this? Let's look at one example of interleaving. Thread 1 might read the value 0; at the same time thread 2 might read, and it also reads 0. Then thread 1 increments and writes back the value 1. In the meantime thread 2 does the same thing: it increments and writes the same value 1. Although thread 2's write happened after thread 1's write, because they both read at the same time (or I should say, they both read the same value), the result is not what we expected. We expected one thread to increment and the other thread to pick up that result and increment it again, so we should have ended up with the value 2, but in reality we end up with the value 1.

The reality is even worse, because even reading or writing a variable in memory is not necessarily an atomic action. By that I mean reading g_counter from memory might get split into two separate operations. Thread 1 reads the first half; before it has a chance to read the second half, thread 2 comes in, reads, increments, and writes; only after that does thread 1 get to read the second half. You can see this would be a completely junk value, because it took the first half from the original value of g_counter and the second half from a value already modified by thread 2.

As if this was not enough, there are still other problems. You write some code and you think that's what gets executed, but that is not the case: your machine tries to "fix" your code. Basically, you should always remember that what you write is not what gets executed. What do I mean by this? You write some code; then the compiler comes in and tries to optimize it with techniques like reordering, loop unrolling, and others. Then the CPU takes this result and does its own optimizations, for example out-of-order execution and branch prediction. And then there's the cache: the cache does prefetching and buffering. Sometimes you think you're writing to memory, but that's not what happens; you actually only write into the cache, and only after the cache (maybe several levels of cache) does the write reach main memory. So you should always remember: the compiler, the CPU, the cache, and maybe other things are all telling you, "you don't really know what you're writing; let me fix or improve your code for you." What you write is not what runs.

Looking more closely at what happens with the cache: one expectation is that if you have multiple threads, they all read and write memory directly, but this is not the case. Each thread may read or write into its cache, then there are multiple levels of cache, and only then does the data have a chance to reach memory. So one change from this thread, which thinks it's writing to memory, might only happen in its cache, and the other thread may not even see it.

Let's summarize all the problems we've talked about so far. A race condition creates undefined behavior, and that mostly happens because two or more threads access a shared resource, like memory, using operations that are not atomic. If the increment (read, increment, write) were an atomic action, we wouldn't have this problem. Also, operations get interleaved: we saw that one read operation from one thread got split, and between part 1 and part 2 of the read, memory got corrupted. And lastly, we saw that due to optimizations, the actual executed code might be completely different from what you wrote. Now, this last problem is something you're not used to, and that's because in single-threaded programming, for the most part, all of these optimizations from the compiler, from the CPU,
and from the cache are not observable. But when it comes to multithreading, optimizations become observable. Again: welcome to multithreading.

Remember that the main problem happened because we had shared data. All of these problems, the races and the non-atomic operations, happen because you're trying to access shared data from multiple threads. This is the mother of all evils; it causes race conditions and wrongly ordered operations. I talked about all of these problems to get to the solution. So what is the solution? Let's get to it. I can think of multiple solutions, and I can group them into several categories. The part we focus on today is mutexes and locks. There's also std::atomic in C++, which I would love to talk about in a separate video. The third category is something I call abstraction, which covers techniques like CSP (communicating sequential processes), the actor model, and MapReduce; I would love to cover at least CSP and MapReduce in separate videos. But today our focus is mutexes and locks.

Mutexes and critical sections. Mutex stands for mutual exclusion. Remember the problem where two threads access a shared resource like memory? The idea with a mutex is to protect this shared memory with something we call a critical section, so that only one of the threads can be inside this section at any time. Either thread 1 has access to the shared memory and thread 2 has to wait, or thread 2 accesses the shared memory and thread 1 has to wait, but not both. The increment is now read, increment, and write, so either thread 1 can do all of these operations in series without worrying that thread 2 accesses this memory, or thread 2 can do the same without worrying about thread 1. Either thread 1 goes first, followed by thread 2, or thread 2 goes first and then thread 1 does its work, but never both at the same time. So the operations in this critical section, on this shared memory, cannot
overlap between thread 1 and thread 2. Remember the problem where one thread was reading g_counter in two halves, so that when it was done with the first half, thread 2 could jump in and corrupt the memory, and when it came back for the second half it would read something invalid that was not a continuation of the first half? This problem can also be solved with a mutex. Now thread 1 can enter the critical section, perform its first read, and have peace of mind that when it comes back for the second read, nobody has come in and corrupted the memory. Thread 2 has to wait; only when thread 1 is done with the critical section can thread 2 start and do its job. Or thread 2 can start first, and only when it's done can thread 1 start. Either way, we have peace of mind that the entire operation happens atomically, without any other thread touching this part of memory.

What do you need to know about a mutex? It implements the "either me or you" policy, mutual exclusion. It makes operations atomic. And if one thread is in the critical section, the other has to wait: either thread 1 waits for thread 2, or thread 2 waits for thread 1. This blocking feature, one thread having to wait for the other, is a side effect of mutual exclusion, which we will come back to later in this lecture.

Alright, enough theory; let's see some code. Step 1: define a shared variable of type std::mutex. This creates the mutex, and it has to be shared among all the threads that use the critical section. Once you have this mutex, you call lock() to start your critical section, which in this case is the increment, and then call unlock() to end it. So the critical section is everything between lock and unlock, and that's it: it's as simple as creating a shared variable and calling lock and unlock on it. Again, let's see this in action. The file I'm running now is under source, the mutex lock-unlock main. We define the mutex as a shared variable, and there's our counter, also a shared variable, and we use lock and unlock to create the critical section around g_counter. Each thread increments a hundred times, and later in the main function we create a hundred threads, so we expect to see 100 × 100 = 10,000 increments. In this experiment we run all 100 threads a thousand times and measure how many times we got the 10,000 we expected. Now I run this, and you can see that every single time I got ten thousand. There is no other result, and we make sure of that using this assert expression. Alright, congratulations: the mutex works.

What do you need to know about std::mutex? You need one mutex per critical section, and you need to define it as a shared variable among all the threads that use it. What do you need to know about lock and unlock? Lock and unlock calls must match. For example, if I call lock, I cannot call lock again right after; it has to be a lock followed by an unlock, and then another lock if you want. You cannot have two back-to-back locks or two back-to-back unlocks. There are two notorious problems with calling lock and unlock on a mutex directly. First, if we forget to unlock, the lock on the mutex will remain and we might get a deadlock. Second, if for any reason an exception happens inside the critical section, we jump out of the loop and never get the chance to unlock, so again the mutex remains locked forever. You never want to leave a mutex locked forever, because other threads will never get into the critical section. So, two problems: do not forget to unlock, and be aware of exceptions.

In order to simplify this, C++ provides something called std::lock_guard, which is very similar to mutex lock and unlock. Again you create a mutex as a shared variable, and you pass it to the constructor of a variable of type lock_guard. As soon as this variable is constructed, the mutex is locked automatically; the critical section begins as soon as the constructor is called.
You can then do whatever you want inside the critical section, and once this variable goes out of scope, basically when it is destructed, unlock is called automatically. This way you don't have to call lock and unlock yourself; it's done for you. So if you forget to unlock, you're safe, and if an exception happens and we jump out of this loop, the guard goes out of scope, its destructor is called, and the mutex is no longer locked. This pattern is called RAII, resource acquisition is initialization: as soon as we construct the guard, we acquire the lock on the mutex. So it's preferred to use lock_guard instead of directly calling lock and unlock on a mutex.

Up next: unique_lock. A unique_lock is basically a lock_guard plus the option to lock and unlock manually. Again you have a mutex shared among the threads, and you create a variable, but now of type unique_lock instead of lock_guard. As soon as it is created, the constructor locks the mutex automatically, and if we jump out of this loop, the destructor unlocks it automatically. But compared to the lock_guard we had before, you can also optionally call lock and unlock on it if you want. So it gives you similar functionality, the RAII of lock_guard, plus manual locking. Very simple, nothing that special.

Up next: shared_lock. A shared_lock again gives you a critical section, but one that allows multiple readers. What do we mean by that? First, your mutex now has to be of type std::shared_mutex rather than the std::mutex we saw before. If you're writing to something, you use the unique_lock we just saw, which now wraps the shared_mutex; but if you're only reading the variable and not modifying it, you use a shared_lock. So: unique_lock for all the writers, shared_lock for all the readers. This still creates critical sections for both writers and readers, but only a single thread can be inside the writer's critical section under the unique_lock, while multiple threads that are only reading can share the read-side critical section.
So if one thread is in the writer's section, nobody else, including all the readers, can be there; we have at most one writer. But if there are multiple readers and no writer, one or more threads can read at the same time.

Up next: multiple locks. In some applications you need to lock multiple mutexes. For example, in this function I'm using the lock_guard we just saw to lock mutex 1, and after that I lock mutex 2, so I create two lock_guard variables. Now this is bad, because there might be another thread that does something similar but in the reverse order: it locks mutex 2 first and then locks mutex 1. Why is this bad? What if thread 1 locks mutex 1 and, at the same time, thread 2 locks mutex 2? Now thread 1 holds mutex 1 and waits for mutex 2, while thread 2 holds mutex 2 and waits for mutex 1. This is called a deadlock: thread 1 is waiting for thread 2, and thread 2 is waiting for thread 1. This is bad; please do not write your code like this.

To address this, there's a function called std::lock, which takes the mutexes you want to lock and has an all-or-nothing property. Basically, this function tries to lock both mutexes at the same time, and blocks otherwise. There is no case where it locks one mutex but not the other; it locks either both or neither, so the deadlock situation we just saw cannot happen. Once you do this, your critical section really starts right after this function, with both mutexes locked. But we still need to unlock these mutexes afterwards; that's why we create the lock_guard variables lock1 and lock2, passing them the std::adopt_lock parameter, which basically tells the constructor: you don't need to lock the mutex as you did before, just adopt it, because it's already locked. Why do we do this? Because at some point we need to unlock these mutexes through the destructors, the RAII pattern: once we come out of this loop, the two mutexes get unlocked.

This method is better than what we saw before, but if you're scratching your head and thinking this is not a clean syntax, you're not alone. C++ provides a really clean syntax for multiple locks, so forget what I said here: what you really want is something called std::scoped_lock. You create a variable of type scoped_lock and you lock the mutexes with it. This uses the RAII pattern: inside the constructor it does the all-or-nothing lock on the two mutexes, and once it goes out of scope it unlocks them automatically. This is the recommended way of taking multiple locks.

Alright, I introduced a lot of different ways of locking mutexes; let's compare them. The first thing we saw was a direct lock and unlock on std::mutex. It's fine and simple, but you need to remember to unlock, and remember that in case of an exception the mutex remains locked; if you address these two problems, go ahead and use it. Next we saw lock_guard, which does the locking with RAII: you don't call lock and unlock directly; as soon as you declare it, the mutex locks, and when it goes out of scope, it unlocks. Then we saw unique_lock, which is basically a lock_guard where you can still optionally call lock and unlock after declaring it. After that we saw shared_lock, which pairs with unique_lock but lets multiple readers read at the same time, so readers aren't locked out unnecessarily. Finally we saw scoped_lock, which is similar to lock_guard, has RAII, and is used for multiple locks to avoid deadlock. You've probably noticed that all of these are basically the same thing: you always create a shared variable of type std::mutex or std::shared_mutex, and then lock and unlock it to create your critical section; all of these classes are nothing but convenience wrappers around that basic concept.
Up next: condition variables. The main reason we go after condition variables is that one thread wants to send a message to another thread. Let's see an example. You have thread 1, which produces some data, and thread 2, which wants to consume the result of this data production. That means thread 2 should naturally wait for the data to become available. So at some point thread 1 produces data, and then it should somehow, with a message or a notification, tell thread 2 that the data is now ready. At that point thread 2 can stop waiting, because the data is ready, sample it, and consume it. So a condition variable is all about sending a message.

Now, this producer-consumer pattern is nothing new; we have seen it in a lot of places. For example, when you use the internet, your laptop or tablet or phone sends a request to a server, the server produces data, and at some point it sends it back so that you can consume it. A similar thing happens in pipelined CPUs, where one stage uses the data from the previous stage, and the same in DMA controllers. The pattern is very common in multithreading: often one thread produces some data, another thread consumes it, and there needs to be some sort of synchronization between them.

How do we do this? One common, easy way is to have thread 1, the producer, access some shared memory: it produces data, puts it inside the shared variable g_data, and sets a flag, g_ready, once the data is ready. Thread 2, because this is shared memory, can monitor this flag; when g_ready becomes true, it samples the data, consumes it, and moves on with its life. So basically thread 2's job is monitoring the flag and sampling the data. And you guessed it right: because this is shared memory, we have to create a critical section. The producer and the consumer both access this shared memory, there is a race condition, and a critical section is necessary.

Now, how do we implement this in C++? This is our producer function; you can see I used a unique_lock to create the critical section, here we produce the data, and here we set the flag to true. On the consumer side we again create a critical section and monitor the flag, and once g_ready becomes true, the while loop exits and we can move on and use the data, the value 1. But notice that while we are waiting, we have to keep unlocking and relocking the mutex. Why? The consumer samples the flag, then it has to unlock the mutex so that the producer can go ahead, if it wants to, and produce data. But if the producer is slow, the consumer locks again, samples the flag again, and if the data isn't ready, unlocks again and gives control back to the producer. So you can see we have a busy-waiting loop here, which is not efficient. This is not considered good programming, because you keep the CPU busy and you keep locking and unlocking the mutex.

Another way is to add some delay: put the consumer to sleep for some arbitrary number of milliseconds. This is a little better, and you probably burn fewer CPU cycles, but we still don't know exactly what this number should be, and we are still locking and unlocking many times unnecessarily. So this is still a bad way of writing multithreaded code.

In the previous model we had shared memory: g_data for the data and g_ready for a flag that indicates whether the data is ready, all inside a critical section. With condition variables we still have these two, but we also add a condition variable called g_cv. So we still have g_ready, we still have g_data, everything still lives in shared memory, and you guessed it right: it still has to be protected by a critical section.
Now what happens is that thread 1 produces data and sets the flag to true, but it also sends a notification through this condition variable. Thread 2, rather than monitoring g_ready, waits for a notification to arrive on g_cv, and then receives it. So you can think of the condition variable as a notification channel between the producer and the consumer. It's very similar to the notification system on your mobile phone: you don't keep checking your messages; whenever a message arrives, you receive a notification. That's exactly what a condition variable does.

Now let's see how we implement this in C++. Step 1: you create a mutex, you create an std::condition_variable, and you still have your g_ready and your g_data. All of this is shared memory between the two threads, producer and consumer. The producer, to send a notification, calls a function on g_cv called notify_one (or notify_all; in this case we have just one waiting thread, so notify_one is enough). On the consumer side, whenever you want to wait for a notification, you use the same condition variable and call wait on it. Wait takes two parameters: the first is a lock, the lock on a mutex that we saw before, and the second is a predicate. Basically, you read this as "wait until g_ready becomes true", or more generally, wait until whatever this function is returns true.

Let's look at the complete code for a condition variable. We have a producer that sits in an infinite loop and produces data. It creates a critical section using a unique_lock, produces the data, puts it in shared memory, and then sets the flag saying the data is ready. The consumer is also in an infinite loop and also creates its own critical section, and it waits until it is notified. It actually waits for two things: first, for the flag to become true, which is what's checked inside the predicate; and second, for a notification to arrive from the producer. Once both happen, it can go ahead, sample the data, use it however it wants, and then reset the ready flag for the next round.

Now notice what this wait does. If it blocks, it calls unlock on the mutex automatically; that's why we pass ul, the unique_lock, to wait. And when it unblocks, it calls lock automatically. Why does this happen? If it blocks, it has to unlock the mutex so that other threads, namely the producer, can enter the critical section and produce data if they want to. And when it unblocks, it has to lock the mutex so that it can sample the data from shared memory without worrying that other threads might corrupt it. So again, remember: wait does two things. When it blocks, it calls unlock automatically; when it unblocks, it calls lock automatically.

Once we sample the data, we reset the flag, and then we tell the producer that we did sample the data, so it can go ahead and produce the next item. For this to happen, the producer itself is now waiting for g_ready to become false. So two things must happen for the producer to pass this line: g_ready must become false, and the consumer must send a notification back to the producer. You can see there is a two-phase handshake between the producer and the consumer: one sends a notification to the consumer, and the other sends a notification back to the producer. This way they stay in lockstep and the data isn't lost: the consumer uses it, and once it has used it, it tells the producer to produce the next one. There's a sort of flow control happening between them, using one condition variable and these notifications going back and forth. So again, in this diagram we saw that we use the condition variable just to send a notification with notify_one, and we receive the notification with wait; what we saw in the code was an exact translation of this diagram. Very easy, very straightforward. What do you need to know about condition variables?
the conditional variable should be shared between the sender and the receiver you create a shared mutex and the sender you lock on that mutex typically it can be done using lock guard or unique lock then you modify shared data wipe the lock is held and then finally you call notify one or notify all to notify the receiving threats for sending notification namely calling these functions you don't need to be inside the critical section so if you remember here we came out of the critical section and then we called notify one same thing in consumer we called notify one outside of the critical section this part doesn't need to be in the critical section now on the receiver you use a unique lock so you don't use lock guard anymore you use unique lock because you have to pass it to the wait so that it can do automat lock and unlock that we just saw if it gets black or unblocked and that's really it I think at this point we all know what the conditional variable does how to create it how to send notifications and rate on it this wouldn't be a complete lecture without a homework assignment so these are some homework assignments for you guys to have something to think about number one C++ provides notify one and notify all please go ahead and check the documentation for these and let's see if you can come up with a method what if we only want to notify a specific thread the functions that are provided are only notified one and notify all how can you just send a notification to a specific threat question number two how to break the infinite loop in producer and consumer so I put the producer is waiting in this infinite loop the consumer is also waiting in this infinite loop what if at some point the producer wants to tell the consumer I don't have any more data so please don't wait on it and move on with your life how can this happen now that we know about critical sections let's go a little bit deeper and have a closer look at them remember that lock is what creates the 
critical section what you need to know is that lock by itself is an atomic action so this lock here will not get separated into multiple operations it will get executed atomically lock is a blocking operation so if some other mutex has locked this one blocks the calling thread that is important now prior unlock operations and the same mutex synchronized with this lock that means an unlock and this thread synchronizes with this lock this has a specific meaning which we'll get to it in just a minute and then there's gonna be undefined behavior if the thread that already owns the mutex cause lock again so for example here i have g mutex lock if I call lock again that is illegal that will create an undefined behavior you need to have a lock and then this thread cannot call lock again it has to call unlock before calling another lock what do we need to know about unlock L also is an atomic operation so this operation here by itself is an atomic operation and it doesn't get interleaved unlock synchronizes with the next block on the same mutex and again the behavior is undefined if you call two unlocks sequentially without having a lock in between now another important thing to remember is that we had this race condition and in order to resolve it we kind of prescribed critical section mutex and lock and we use this as a medicine but just like any other medicine it comes with side effect namely deadlock so deadlock is a very important side effect of mutex that should not be forgotten any time that you write a multi-threading program you need to analyze your code and make sure it doesn't deadlock remember we had this case that one thread was waiting for another and then the second thread was waiting for the first one now you might have longer cycles you might have one thread waiting for another and then this thread for some reason creates an exception and never reply back these are all very bad situations and you need to make sure your code can handle it one last important 
One last important thing about a mutex is that it does a little more than create mutual exclusion: it also provides something called sequential consistency. Sequential consistency is defined by the C++ standard, and here we'll just touch on it briefly. Consider two threads: one runs function f1, which enters a critical section and adds 1 to the global variable g_x; the other does the same thing but adds 2 to g_x. Because we have critical sections, either f2 runs first and then f1, or f1 runs first and then f2; they don't run together. But now the question is: when f1 runs, how do we guarantee that the result of its assignment will be visible to f2 once g_x updates? We learned previously that the result goes into the cache, and the cache of one thread's core may not be visible to the other thread. So how does this work correctly? Fortunately, the C++ standard requires that the unlock of f1's critical section synchronizes with the lock of f2's critical section. Remember that there is an implicit unlock in the destructor of the lock_guard and an implicit lock inside its constructor. This synchronization means something important: the values written by f1 are required to be picked up by f2. In other words, the standard requires the implementation to solve this problem, so the implementation must make sure that whatever f1 writes is accessible to f2. That's why we can use a mutex not just against race conditions but also for sequential consistency, and have peace of mind that when one thread writes, as long as it's inside the critical section, the other thread will see the result. There's nothing really new to learn while you code, but remember: if you want writes to a memory location to be picked up by another thread, you need to put them inside the mutex; then you're protected against the case where one thread writes into its cache and the other does not see the result from the cache of the first thread.
Now let's look at conditional variables more closely and go a little deeper. One question about conditional variables is: can wait wake up spuriously? Spuriously means: while I'm waiting inside the thread, can I wake up without anyone actually sending a notification? Weirdly, the answer is yes, and it has something to do with the OS implementation; the reason is outside the scope of this video. Just be aware that this wait can return for no apparent reason, and that's exactly why we use the predicate. The predicate protects us against spurious wakeups: if we wake up for no reason, we check the predicate, and if it's not true we go back to sleep. So it's important to always include the predicate; as long as you have it, you're protected against spurious wakeups. Question number two: can notifications get lost? Basically, can the producer send a notification to the consumer and somehow have it not be observed by the consumer? Suppose we were not careful and didn't have the critical section. If I remove this lock, and also remove the predicate, you can observe that we have a race: the producer tries to send a notification while the consumer waits on the conditional variable. If the consumer runs first and, because of a spurious wakeup, passes the wait line, and the producer then runs next and sends its notification, that notification is lost. That's exactly why we added the critical section. So the answer to this question is no, because we have a mutex, which provides sequential consistency, and we also have the predicate. Remember that we put all of these variables inside the critical section; it's important to put not only the data and the ready flag, but also the wait on the conditional variable, inside the critical section.
Now, if thread 1 runs first and sends a notification, then because of sequential consistency, thread 2, which runs next, will observe it; this is guaranteed. If instead thread 2 runs first and waits, then because we have the predicate, even if the wait wakes up spuriously, the predicate puts thread 2 back to sleep until the notification eventually arrives; then thread 2 can sample the data. In short: always protect conditional variables using the mutex lock, and always use a predicate. As long as you do this, you don't need to be worried about the details. Let's summarize what we learned today. If you have shared variables, you need to check for race conditions, and to solve race conditions you use critical sections, using a mutex to create mutual exclusion and also sequential consistency. You use conditional variables for thread communication and message passing, namely to send notifications. Lock and wait are blocking actions, so you need to be aware of deadlock; we did not talk a lot about deadlock, but it's a big problem you need to be aware of. I hate to say this, but when it comes to multi-threading, sharing is not caring: you need to avoid and minimize shared variables as much as you can. Shared variables are the mother of all these problems. What's next? I still want to continue creating a few more videos on multi-threading; there are some topics I would like to cover. One of them is lock-free programming using std::atomic. This is rather confusing, and although critical sections and mutexes can solve a lot of problems, this is another tool that we have. The other topic I would love to talk about is thread safety and the STL, in particular how STL containers are thread safe. Up next, I would love to talk about efficient multi-threading, in particular thread pools and how they are created and managed. And lastly, this one is
my favorite: I would love to talk about abstract multi-threading. In abstract multi-threading you can say, hey, I'm kind of sick of dealing with mutexes and locks and all these things; why not abstract them out and create a method where I can just write my thread to run in parallel with other threads and not be worried about these low-level constructs? There are various methods here: for example, communicating sequential processes (CSP for short), which I really like; I would love to create a video on this. There's also the actor model, another alternative to CSP. And finally MapReduce, which is vastly used by Google; I would love to talk about how we can use it, and whether we can create our own MapReduce library ourselves. Let me know which one you like more, and let me know if you have any questions in the comments. Thank you very much for watching this video. I wish you the best, and hopefully I can see you in the next ones. Make sure to subscribe and like this video, and hopefully I see you in the next ones.
Info
Channel: arisaif
Views: 9,175
Keywords: mutex, conditional variables, lock_guard, unique_lock, shared_lock, scoped_lock, threads, thread synchronization, C++ threads, c++ multi threading, multi threading, multi threading in c++, C++, sequential consistency, Computer Science, Computer Engineering, Computer programs, Algorithms, Bazel, Visual Studio Code, c++
Id: jwJ4Eh_2Umo
Length: 44min 43sec (2683 seconds)
Published: Sun Jun 07 2020