Welcome to our Python concurrency tutorial. My name is Santiago, and even though I would have loved to be in the same room with all of you in Pittsburgh, it's great to try it out in this format, so I'm very happy we can do it this way. I have adapted this tutorial from the regular version I had prepared, which had a ton of stops in the middle to check out exercises. For this online version we have separated the exercises into a second chunk: we're gonna do all the lessons in this recording, and then you will have time to check out the assignments and the projects. As I told you, my name is Santiago, I'm from Argentina, and I work for rmotr.com; I'm actually a co-founder, and some time ago we were acquired by INE. We do online courses, so we are used to recording lights and all that: data science courses, networking courses, cloud computing courses. Check us out at ine.com. Right now, in my personal time, I'm working on this library called parallel; the objective is to provide a high-level interface for concurrent code, even higher level than concurrent.futures, which is a library we're gonna see in this tutorial, of course. So let's dive straight into the contents. In the first section we're gonna build a conceptual understanding of how a computer works: computer architecture, the role of each of the pieces in a computer, and also the role of the operating system. Then we're gonna get right into coding: we're gonna see multithreading, multiprocessing, thread synchronization, problems with deadlocks, the GIL, concurrent.futures, and finally an introduction to the parallel library I'm working on. But again, it's important first to understand why we need concurrency, why we need to write concurrent programs. Let me start by telling you what this tutorial is not about, because it's important to set expectations about what we're gonna be talking about and what's going to be out of scope. First, we're not talking about asyncio or other alternative libraries. asyncio is a different model; it's also useful for creating concurrent code, but it's not the subject of this tutorial. This tutorial is a little more classic: we're gonna do multithreading and multiprocessing, and that's it. asyncio is a potential substitute for everything we're doing here, but it's not in scope. We will not be doing low-level thread programming either; even though I'll mention things like forking and spawning processes, we won't go low level. Also, multithreading and multiprocessing are not a replacement for distributed architectures. If you have a website, for example, and whenever you get a request you need to do a couple of things concurrently, that's usually better placed in a job queue as a background task: you can use RabbitMQ or a managed service like SQS, and if you're using Django you can check out Celery. But it's not about that, and you should not confuse them; it's important to understand the need you have and the right tool to solve that problem. Finally, this is not about pipelining, clustering, or distributed computing; that's better suited for something like Dask or Spark, in which you have multiple computers processing something in parallel across machines.
Here it's just one computer: multithreading and multiprocessing. And even on a single computer you can do GPU parallelism; this is not about that either. You can check out rapids.ai, a very interesting set of libraries on top of CUDA (NVIDIA's platform) with a Python API to work with dataframes. They have an analog for each of the important data science libraries: like Pandas, they have dataframes; like scikit-learn, they have cuML. It's interesting, but this is not about that either. It is useful, though, to understand where you sit. There's a very simple model: you can have a task that needs to be performed, or can be performed, on just one core, in a single-thread, single-process piece of code, just any script. 95% of the tasks you have will probably fall into this one-core category, and that's great. Then there's one step up: two to eight cores, or we could say today two to 16 or 32 cores, something that fits in your own computer. You have an intensive task, you set your computer to run it, it takes 30 minutes, an hour, two hours, and it's done; it's possible to do it on your own machine. And then there's the next step, when you need more CPUs than that threshold, say 16-plus or 32-plus: it doesn't fit even a large, commercially available computer, and that's when you need distributed processing. In this tutorial we're gonna focus on step number two: 2 to 16 or 32 cores, whatever you can do on your own computer. So why do we need concurrent programming? What's the use of it? Well, the evolution of CPUs is interesting. This chart (the source is included in the slide) shows how CPUs have progressed over time, and what you'll see is that the frequency of CPUs has plateaued: it's flat at this number of megahertz, it's staying there and it's not moving. CPUs are not getting faster; we have more or less reached the maximum in terms of frequency, for a number of reasons: heat (they overheat), power consumption, and the tiny space we have to fit them in. So we've reached some sort of limit there. But what we haven't reached the limit of yet is the number of cores. The number of logical cores, you see, is going up very fast. In the past 20 years we've gone from single-core machines to 100 cores being perfectly possible. When I started doing computing-related things, everything was one core; a dual core was a crazy theoretical thing that we knew existed but nobody had one. In today's world it's not crazy to get a CPU with 64 logical cores; it's completely possible. So we're getting into a second order of magnitude, and that will probably keep increasing. I don't know what the limit is; at some point we'll hit an asymptote, some sort of limit, but so far it's still increasing. So the objective of concurrent programming is to make use of all these cores whenever possible. We want to take advantage of all these cores: the clock speed isn't going up, so we have to distribute our work across multiple cores, as many as possible. That's the objective, and that's why we need to write concurrent programs.
The specific tasks will be different for each of you, but I think games are a very good example of multi-core architecture, in which you have multiple things happening at the same time: the character running, rain falling, bullets being fired by enemies, multiple things happening simultaneously, and you can take advantage of all those cores to provide a smoother experience. Let's start with computer architecture; this is point one, the basics. This is the von Neumann architecture, a very plain, very standard architecture, and all our computers today use it. It's based on a CPU, a memory unit (in this case RAM), and I/O, everything that falls within input/output. It's the simplest model we can have, and basically, the operations or instructions in our code will usually each fall into one of these categories: some operations are CPU work, performed by the CPU; some store something in memory; some talk to I/O. It's important to relate this to the access time of each of these resources. For example, accessing something in the CPU is a lot faster than accessing something in memory, let alone I/O, and I want you to keep an eye on this because it's gonna be very important later. There's a very interesting comparison in human-relative times: if one CPU cycle were one second, then accessing memory, which we know is fast, would take four minutes; that's how much slower memory is compared to the CPU. Accessing your hard drive, even a solid-state drive, would take 1.5 to 4 days. An old mechanical, spinning-platter drive would take 1 to 9 months. And a network request can take 5 to 11 years; again, these are relative times compared to a CPU cycle. This matters when we decide which parts of our code to make concurrent: we'll figure out whether our code is I/O heavy (it makes a ton of network requests), in which case we'll know how to parallelize it, or whether it's CPU heavy, CPU bound, doing a ton of computation. This will be important later, trust me. Let's jump now to the operating system and its role. It's very interesting to learn about the history of operating systems and how they evolved; I personally love it, I've read a couple of books on it, and it's fascinating to understand the process humanity went through to realize how much we needed an operating system and why. Basically, an operating system is just a program; someone sat down and wrote it. But what we've understood over time is that a computer is a very precious resource on which we can't just execute random programs with direct access to CPU, memory, and I/O. It's very common for me to download an application from the internet and run it on my computer, but my computer also holds a ton of privileged information. Imagine for a second that there were no operating system, and every program you download could access any resource it wants: it would be very hard to trust those programs. That's why
we have created operating systems. We have created a layer that sits in front of the hardware: on one side we have all our precious hardware resources, and we've put a layer between them and any random code you might execute. The operating system is the guardian of those resources. Any operation you want to perform actually goes through the operating system, and the operating system has control over what memory you can write or read, where you can write or read files, etc. That's the protective nature of the operating system. Of course, operating systems do much more, like paging algorithms and handling disks and drives, but for our purposes it's the protective nature that matters. In order to run your code, the operating system uses the concept of a process. Remember, you can't just execute your code directly; you hand your code over and say to the operating system, hey, I want to run this piece of code, can you do it for me? And the operating system puts it in what we call a process. So this is our code, and the operating system puts it right here, in this container, the process, which holds a number of things: your code (it actually loads your code into memory and keeps a reference to it), allocated RAM (it says this process has this many bytes of memory to use), all the local variables, file descriptors, everything we need to access. For example, here we start with x = 1 and increment it; the operating system keeps track of that in memory. We open a file; we ask the operating system to open it for us, and we get a reference, a file descriptor. The operating system creates this abstraction, the process, so our code can interact with the system through it. So whenever you execute code, whenever you run python yourscript.py, what actually happens is that the operating system creates a new process, injects your code into it, and executes it. You can actually run the same program, the same .py file, multiple times, and you'll have multiple processes running concurrently on the same computer. That's what we can see right here: these are all the processes running on my computer after starting them. They're all different instances: you can see the process ID right there, which means there's a distinct instance for each one. They might be executing the same code, but they're all different processes. So, what about process concurrency? This is the really interesting part of operating system history. Let me follow the slides: let's say we have only one CPU. I'm gonna take you back; I'm not that old, but I'm from an era when there was only one CPU. That's not what happens today, but let's assume it's what you have: only one CPU in your computer. One CPU is one worker, just one worker. How many processes can you run on one CPU at the same time? That's the question.
Of course, you can run only one task at a time; there's only one worker, so only one task runs. But even when I was a child and had a one-core computer, I still had a reasonably smooth experience. I could play Doom, for example, the first version of Doom, with only one core: I fire a bullet, I move, my enemy dies, and so on. How is that experience possible with only one CPU, if the CPU can process only one thing at a time? If I fire a bullet, is the CPU just tracking the bullet while everything else is frozen and my enemy can't move? Well, it works because of what we call time slicing, the scheduler of the operating system. Even with one CPU (let's keep this hypothesis: a computer with only one CPU), even if there are multiple processes running at the same time, the operating system schedules them in and out, giving a little bit of CPU time to each. There's just one CPU: the operating system assigns some time to process one, reclaims the CPU, assigns some time to process two, reclaims it, assigns some time to process three. That gives you the impression that things are happening at the same time, when in reality nothing is happening strictly at the same time. So in our example of a simple shooter game, in the one-CPU, one-core era: you fire a bullet, the bullet travels for a fraction of a second, then the CPU is transferred to your character, then the CPU is transferred to the enemy, and so on. It's very fast context switching between each of them (in the game these aren't really separate processes, but between processes it's the same idea), and it gives you the impression that things are running in parallel. And this is the difference between concurrency and parallelism. Concurrency is handling multiple tasks "at the same time" but not literally simultaneously: starting multiple things and managing tasks that could potentially run at the same time. Parallelism is when two things actually run at the same instant. On a one-CPU computer you can't have parallelism; you can have concurrency, but not parallelism. That's the distinction. So if we go back to this slide, there's no moment in time when two tasks are being executed simultaneously; it's always the OS switching the one CPU's time from process to process. And this introduces complexities, because the operating system is itself a program, so at the moments when the operating system is switching a process's context, the operating system itself also needs to run sometimes. That's interesting. Now, this is a parallel system: a different hypothesis, we have two cores now, two CPUs, and each CPU has one of these blue lines. What's happening is that at these moments in time we have actual parallelism, because one core is taking care of one task and the other core is taking care of another. You'll also notice that at some points a CPU is idle; that's very common. So again, what we're saying is that the operating system is the one deciding when each process runs; it
has full authority over which process runs at any given time, and that's very important: it's constantly moving processes in and out. And again, from operating system history (I'm kind of a nerd about it): designers realized there were different types of tasks, and multiple time-slicing algorithms were created to decide when the operating system should grant a process access to the CPU, when it should schedule it in or out. There was one big realization, related to the nature of the task being run. Remember our access times: if a process is I/O heavy, you want to give it the CPU whenever it needs it, because you know it won't hold it for long. It'll just fire off a request; you give the process CPU time, it says "thank you, now I need to read a file", and that's it: you take the CPU away and assign it to another process while the read happens, and that read is gonna take a long time — we saw it already, four days in relative terms to read that piece of the file. So different processes, depending on whether they're I/O bound or CPU bound, get treated differently by the operating system: they get more or less priority. And, which might be counterintuitive, I/O-bound, I/O-heavy processes should usually get more priority in CPU allocation. Again, this will be important later. So, how are we gonna make our code concurrent, or even parallel? We were just talking about multiple processes, so I could tell you: you have a problem, you need to process a large file with, I don't know, a billion rows. You write your code, it says "for line in lines", blah, and you realize it's sequential and very slow, and you know you should make it concurrent. I could give you an answer right now: just write your program so it receives a parameter, and create multiple processes by hand: process the file from line zero to 100 million, run the program again from 100 million to 200 million rows, and so on; just launch the same program ten times over different chunks and you're done. That's a valid answer, and it gets the job done. But of course, what you really want is to run everything concurrently inside your program: one program that can spread its work across multiple threads or processes. That's what we want to do, and the first part of this, what we could call intra-program concurrency, is working with threads. That's what we're gonna talk about right now. The objective, again, is turning sequential code into potentially parallel code. Let's see an example. Say we have to pull data from three different websites; they're slow websites, and each request takes two seconds. In traditional, sequential code, it takes two seconds to get the first website, two more seconds to get the second, and two more to get the third. In total it's at least six seconds, possibly more if you then have to combine the results: at least six seconds for the sequential program.
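To make the sequential case concrete, here's a minimal sketch of what that fetching code could look like; the URLs are placeholders I'm assuming for illustration, not the actual sites from the slides:

```python
import time
import requests  # third-party: pip install requests

# Hypothetical slow endpoints; imagine each takes ~2 seconds to respond.
SITES = [
    "https://example.com/site-1",
    "https://example.com/site-2",
    "https://example.com/site-3",
]

start = time.time()
responses = [requests.get(url) for url in SITES]  # one request at a time, each blocks
print(f"Total: {time.time() - start:.2f}s")       # roughly the sum of all the delays
```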
And this is a visual representation: the first website, the second website, the third website, and at the end, the processing. The key part here is that you don't start fetching the second website until you've finished with the first. The idea of multithreading is to start everything at once, so that everything can hopefully run in parallel, and then reach a common point where everything synchronizes back. That's the idea of multithreading. If we can spin up multiple threads and they all run concurrently, or in parallel, we first wait for all of them to finish, which takes around two seconds, and then we do the combination at the end. Our code is gonna look something like this; it's not real code, just pseudocode, but we're gonna see the abstraction of a thread to understand it a little better. The threading module is what we'll be using, and I'll give you a very quick introduction first: we'll see some code in a Jupyter notebook, do a simple introduction, and then dig into the more important parts, thread synchronization and all that. What I want you to remember is that we're working in an intra-program setup: we're creating our own program, it's gonna use multiple threads, and we'll make it, hopefully, concurrent. So let's jump directly into our code and start working with threads. It's finally time to see some actual Python code; we've done the whole conceptual introduction about computer architecture, operating systems, processes, and threads, but now it's time for real code: creating threads, getting them to run, etc. A couple of important notes. We're gonna use the Thread class; this is the major class we'll use throughout these first lessons, in which we create threads, instantiate them, start them, get them running, and analyze them, checking their status and so on. Everything happens in this Thread class, and it's contained in the threading module. This is an important point, because in Python 3 there is also a _thread module, but that's a very low-level module you should not be using; we don't use it, I've never used it. The threading module is the one that uses _thread underneath and provides a much higher-level interface for creating and manipulating threads. So the Thread class, again, is the most important class we'll use to create and start threads. When you create it, you pass a target; the target is the function that will run in the separate thread. Remember, from your main process you create a separate thread that runs on its own; the thread needs some sort of callable, some action to perform, and we specify that action through the target. So first we instantiate the thread, which basically creates the container of the thread, and we pass the target: in this case target=simple_worker, so it knows it has to run the function simple_worker, which is defined here. Then we start the thread; the moment we call start is when the thread actually begins running and performing its job.
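Here's that basic pattern as a compact sketch, with a simple worker that just prints and sleeps, like the one we're about to run:

```python
import time
from threading import Thread

def simple_worker():
    # the target: this body runs in its own thread once start() is called
    print("hello")
    time.sleep(2)
    print("world")

t = Thread(target=simple_worker)  # instantiated, but nothing runs yet
t.start()                         # now the thread begins executing simple_worker
```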
What the thread does depends on your target function. It might finish automatically at some point, or it might run forever; it's very common to have threads with a while True loop, for example when we want a background worker that keeps checking some status for as long as our application is alive. In that case, with while True, the thread keeps running forever in the background, doing some computation or some checking. But again, the important part is: we have our whole Python process, and we create a few threads inside it. Say we instantiate several threads, just creating the instances: t1, t2, t3 = Thread(target=...), passing a target that points to a function, in this case simple_worker. The thread is there, idle: it hasn't started running. It starts running when we actually invoke the start method; at that moment the thread starts doing its job. So let's actually run the code. I define the simple_worker function and instantiate the thread; remember, nothing is happening yet. What can we expect? When we start the thread, we'll see "hello" printed, it'll sleep for two seconds, and then we'll see "world". So I start the thread, we see the "hello", we wait, and now you see the "world". The important part is that I still have full control while the thread is running. Let me change the sleep to, say, five seconds: I redefine the function, instantiate a thread, do a simple computation here (two plus two), then start the thread, and I can keep working on my own computations while the thread runs in the background (in this case, it's sleeping). At some point, there you go, it ran the target function, and at this particular moment the thread is dead: it completed its work and it's now stopped (we'll see the is_alive method in a second). A common thing to do is to create several threads together, so in this case we have all these threads here; I'll put a semicolon here so we don't see the output, and I start all the threads. Each one sleeps for some random amount of time and does its work, and again, everything happens in the background: I still have full control in the main thread to do whatever I want while the threads print their results. So let's talk in more detail about thread states. As I told you, when we create a thread, it's just there, idle. Is it alive? No, not yet; it's ready, but not alive. The moment I start the thread, it's alive, and is_alive() returns True. Something important: when we start a worker thread, the main thread still has full control. But what happens if you want the main thread to pause and wait for a worker thread to stop or finish?
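As a quick preview (join is exactly what we'll look at next), here's the whole lifecycle in one sketch:

```python
import time
from threading import Thread

def simple_worker():
    time.sleep(2)

t = Thread(target=simple_worker)
print(t.is_alive())  # False: created but not started yet
t.start()
print(t.is_alive())  # True: now running in the background
t.join()             # block the main thread until the worker finishes
print(t.is_alive())  # False: the thread completed its work and stopped
```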
Let's say you have a process where you're aggregating data, and you've started all these threads; they're all working with the data, but you need to stop and wait until all of them finish, and only once they're all done can you process the results. In that case you do want the main thread to block, to wait until that thread, or several threads, have all finished. To do that we have the join method. I'll instantiate the same thread again, start it, and call join right away, and as you can see, my main thread is now paused; it has just stopped, waiting for the thread we started to finish. The join method, again, is what pauses the main thread and waits until that thread, or a given set of threads, has finished. Once a thread has finished, several methods will raise a RuntimeError: the thread has already finished, so it can't be started again; you have to create a new instance of the thread if you want to run the same task again. Now let's talk about thread identity; this can be very helpful for debugging, for understanding your code better, or for organizing it better. Thread identity means we can set a name for our thread. In this case the name was set automatically: if I show you the constructor again, you'll see that name defaults to None, so the Thread class in the threading module assigns one, not random but sequential: Thread-1, Thread-2, and so on. Each thread is also assigned a unique identifier, a unique ID, accessible as the ident attribute. Before the thread starts, the ident attribute is None, but once I start the thread you'll see it has been set to a value: now that the thread has started, it has this ID, which is just a number that identifies that particular thread. No two threads running at the same time will have the same ID; that's an important point. We can set our own custom name when creating the thread, and we can consult that information from the main thread: we can check the thread's name or its ID. Something interesting is that we can also check these values from within the thread, and here's an important conceptual point. Let me go back to our drawing board: the outside box is my Python process, and the inside box is a Python thread, which runs a given function, simple_worker in this case. We can create several of these threads, say three, and they're all gonna execute the same function. The way we define the function that runs in the thread is just by defining a plain function; I'm not saying anything crazy here, it's a regular Python function. But here's the thing: we're not writing this function with knowledge of which thread it will run in. The same function has to be defined in a way that's useful for all the threads: whether we create one, two, three, or a thousand threads, they can all run the same Python code in the form of that function. So if we need the name of the thread, or the ID of the thread, inside the function, we have to make the function generic enough that each thread running it, potentially in parallel, or concurrently to be more
precise, can use it: they're all executing the same code, but they all have different names and IDs. And that's what we achieve with two very useful functions, threading.current_thread() and threading.get_ident(), which are generic, dynamic functions that give you the values for whichever thread calls them. current_thread() gives you the whole thread object itself, from which you can ask for the name, as we're doing here with .name, and also for the identity, the generated ID; get_ident() gives you that number directly. So let's use the same code to create three different threads, each one with a custom name we provide, and start all of them; now we wait for them to finish. Bubbles, Blossom, and Buttercup all finished, and while they ran, each one internally had its own ID. So far we've worked with very simple functions that receive no parameters: they just start and run. That's of course not realistic; usually a function receives parameters. It's very simple to pass arguments through the Thread class; it's a little more difficult to handle dynamic situations, for example keyword arguments or different kinds of parameters you need to build dynamically based on the use case, and that's one of the reasons I created the parallel library, but we'll talk more about that later. For now, let me show you how simple it is to pass a few arguments to a function. We define the simple_worker function again, and it now receives a time to sleep; so far the sleep time was always random, and now we pass it as a parameter. The way to do it: as usual, we create an instance of the Thread class, pass a target, pass the name of the thread, and pass the set of arguments in args, a new parameter. In args we pass all the values that will serve as arguments to the function. It has to be a tuple, and since we receive only one parameter, I have to put this trailing comma right here; don't get confused by that. It's basically the list of the parameters you want to pass to your function. So I run it again, and, you know, Bubbles right here is sleeping for three seconds, and Blossom here is sleeping for 1.5 seconds.
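Here's that argument-passing pattern in a minimal sketch; note the trailing comma that makes args a one-element tuple:

```python
import time
from threading import Thread

def simple_worker(time_to_sleep):
    print(f"sleeping for {time_to_sleep} seconds")
    time.sleep(time_to_sleep)

# args must be a tuple; the trailing comma matters for a single argument
t = Thread(target=simple_worker, name="Bubbles", args=(3,))
t.start()
```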
A different, alternative way of creating and instantiating a thread is not to provide a target function at all, but to create a subclass of Thread and define the thread's behavior in the run method. This is also very common, and if you have a good architecture, a good object-oriented design, it can organize your code a little better. For example, for the background thread we've been talking about: instead of defining a function in one module and the thread in another, you can just define the thread class, give it an obvious name that states its purpose, and get it to run without defining any external functions. Usually, I'd say 80% of the time, the function we use for a thread is a very particular function that isn't used anywhere else, so it doesn't make sense to define it at global scope if it's only gonna be used by one thread. That's why you can define the same functionality within the run method. The run method receives nothing but self; we usually pass all the parameters through the constructor of the class, the initialization method, and there you have to be careful not to step over Thread's own parameters, for example if you're passing a variable number of arguments and so on. The good news about defining your own class is that you can do pretty much whatever you want in the __init__ method, which means any shortcomings you run into with arguments can be fixed with a custom class. Personally, I prefer to create subclasses, because again, it organizes my code better: the particular functionality is encapsulated in the run method. During this tutorial, though, and I have to be completely honest with you, I am not gonna be using subclasses much; I'll use the target style a lot more, because it's easier to see the function defined separately. So just for the clarity of this tutorial, I won't use subclasses very often. But let's see how it works. I instantiate the class; there you go, t is now an instance of my thread class, and I've passed the only parameter, the time to sleep. I set that parameter as an instance attribute, and now in the run method I can use it. So I do t.start(); t.start() is what invokes the run method, and in there I can access all the attributes I need, the name attribute for example. It's interesting to remember that the name attribute is set even before the thread starts, so I can use it directly; not so with the identity, the ID of the thread, which has to be consulted at run time, in a live, dynamic manner. Now let's talk about something very important conceptually, something we've already touched on: threads having access to shared data. Using our previous conceptual picture of processes and threads: this is our whole process, the yellow box; it contains your code and a few defined variables. The process then instantiates a few threads, and those threads start. From that moment, all the threads within the process have access to every variable defined in that process. In this case, time_to_sleep was defined outside the function, in the main process. When I create my threads and start only the first one, you can see it sleeps for two seconds, because that's what we just defined. Let's re-instantiate them and run them all again, and you see all our threads sleeping for two seconds. Now let's change it: I'll put 1.5 seconds instead, shorter; we redefine and start all of them, and you see they're all sleeping for 1.5 seconds.
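Here's that shared-global behavior as a sketch; the workers read the module-level variable when they run, so changing it before start() changes all of them:

```python
import time
from threading import Thread

time_to_sleep = 2  # defined in the main process, visible to every thread

def simple_worker():
    time.sleep(time_to_sleep)  # reads the shared global at run time
    print("done")

threads = [Thread(target=simple_worker) for _ in range(3)]
time_to_sleep = 1.5  # changing it before start() changes every worker's behavior
for t in threads:
    t.start()
```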
This is interesting because you can change the behavior of your threads by altering the state of a global variable. Say we have exit_threads = False; inside the thread we could do something like while not exit_threads: keep doing the background work. When we want all the threads to stop, we signal it by changing this variable in the main process: you set exit_threads = True, and the next time the loop comes around, it finds the variable changed. So we can modify the work of a thread by modifying these global variables; that's an important point. This will, of course, introduce the problem of race conditions and threads stepping over shared data; we'll talk about that in the following lesson. So, that was a very quick introduction to how Python threads work. I don't want you to memorize everything; we're gonna do a lot of work with threads, so by the end of this tutorial it'll all be very familiar: how threads work, how to create them, instantiate them, start them, etc. I want to finish this part, our first approach to threads, with a real example of threads running. To do that we'll use a web server I've included in this repository, which basically gives us prices of cryptocurrencies. If you check the structure of your repo, you'll see the crypto examples right here; it's a Flask application, this one right here, that I can show you real quick. What this application does is return prices for different cryptocurrencies from different exchanges. We could have consulted a real service for this tutorial, but to be honest, I don't want to hit an external service, because we could be overloading someone's server just for the sake of a tutorial, so I took the time to recreate the application just for this tutorial. Let's start the app; for now we'll run it with no sleep parameter. There you go, it's running at this URL, and it's a very simple app: we have all the exchanges that are part of our app, all the symbols, the currencies we support, and then we can consult prices at given dates. Let's see, there's a price here: for Bitfinex, BTC, this is the price at that given date. The way I created this simple app, aside from the code, was by getting information from, where is it, I think it's right here, this notebook, using the Cryptowatch API; you can follow these notebooks if you want to see the process I followed to create the app. Basically, I downloaded the information from that public API into CSV files and then loaded it into a SQLite database, so the Flask app reads the prices from the database. The price method actually performs this query: get the price for a given exchange, a given symbol, and a given date; we run the query and return the result, if any; if there are no results, we return None. So that's a quick introduction to how our app works. It's running, we can see it right here, and what we'll do is set a base URL and use the requests module, which performs HTTP requests; I'm sure you're all familiar with it.
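A minimal sketch of such a query; the exact route shape and port here are my assumption for illustration, so check the repo for the app's real paths:

```python
import requests

# assumption: the local Flask app exposes a route shaped roughly like
#   /price/<exchange>/<symbol>/<date>   (verify against the repo)
BASE_URL = "http://localhost:5000"

resp = requests.get(f"{BASE_URL}/price/bitfinex/btc/2020-01-01")
print(resp.json())  # e.g. open/high/low/close values for that day
```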
We perform a simple query here: let's check the same price as before, Bitfinex, BTC, but changing the date, and we get the same values back, the open and the close, 7247.5. It's the same price, again, for both of them. So, why are we using this app? We'll use it throughout the entire tutorial. What we want to do is check prices of a few different cryptocurrencies on a few different exchanges, but to make things more interesting, I'm gonna restart the server passing the sleep parameter. This is an artificial delay: we check right here, if sleep is set, each request is delayed by that number of seconds, and this helps us simulate a slow server. And that's where we need threads. Remember our conceptual explanation: say we want to consult three prices. What do we have here? We always check BTC, and we have Bitfinex, Bitstamp, and Kraken, these three exchanges. If each request is delayed by two seconds, because we artificially slowed down the server, and we make the requests sequentially, meaning no threads at all, just a for loop or a list comprehension, whatever, the total time to run the whole thing is gonna be around six seconds, at least six seconds: this request sleeps for two seconds, this one sleeps for two seconds, this one sleeps for two seconds, and finally the process is done. If instead we run all these price-fetching tasks concurrently, and kind of in parallel (I'm using those two terms interchangeably until we get to the concept of the GIL and all that), if they're all hopefully running in parallel, the whole process should finish in about two seconds. And that's the idea of using threads. So let's try it out. I set up the three exchanges we'll use, and we'll measure how long the whole thing takes. For each one of the exchanges (this is the sequential version, by the way) we ask first for Bitfinex, then for Bitstamp, and then for Kraken, and this takes us 6.84 seconds. This is sequential: we check one price, it just blocks while the server sleeps, then we check the next one, then the next; the sequential run takes about six seconds. Now let's do it concurrently. We define a function, check_price, that receives an exchange, a symbol, a date, and a base URL with a default value, and just checks the price. Now I can start one thread per exchange: we have three exchanges, so we create three threads. I start the timer, start the threads, start counting, and now we see that all the prices, Bitfinex, Kraken, and Bitstamp, finished in about 2.35 seconds. And this is what we expect from threads: concurrent, close to parallel execution, to speed things up.
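Here's roughly what that threaded version looks like as a self-contained sketch; again, the route shape and helper are my reconstruction, not the notebook's exact code:

```python
import time
import requests
from threading import Thread

BASE_URL = "http://localhost:5000"  # assumed local server address
EXCHANGES = ["bitfinex", "bitstamp", "kraken"]

def check_price(exchange, symbol, date, base_url=BASE_URL):
    # hypothetical helper mirroring the one defined in the notebook
    resp = requests.get(f"{base_url}/price/{exchange}/{symbol}/{date}")
    print(exchange, resp.json())

start = time.time()
threads = [
    Thread(target=check_price, args=(exchange, "btc", "2020-01-01"))
    for exchange in EXCHANGES
]
for t in threads:
    t.start()   # all three requests are now in flight concurrently
for t in threads:
    t.join()    # wait for all of them before measuring
print(f"Total: {time.time() - start:.2f}s")  # ~2s instead of ~6s
```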
Now, a few things here. We can't be sure which request finishes first; in this case Kraken finished first, but run it again and another one might win. Not everything is so linear in real life: here we're artificially sleeping for two seconds, but in reality some requests may be slower than others, so you don't know how it's gonna end up. You also see that these two lines were printed on top of each other; that's because there are some issues with shared state and side effects, and we'll see more about that in the next lesson. But again, the idea is that we're speeding things up by concurrently running three threads to consult the prices on those three exchanges. So this is wonderful, right? But let's push the example further and say we want prices for all ten exchanges in our system, for three symbols, BTC, LTC, and ETH, and for all of the past 30 days. In total we'd be making 900 requests. Can we start 900 threads following this pattern of one thread per task? The answer is usually no, we cannot, because threads, if we go back to this picture, consume resources in the process, and we don't want to clog the entire process with a ton of threads working concurrently. We'll see how to fix this in multiple ways; mainly, we'll use the producer-consumer model: following this exact example, we'll create a pool of threads, let's say 10, and they'll take care of running all the requests. But what I'm saying here is: be careful. The summary of this is, be careful with how many threads you create; it depends a lot on the system you're using, and we'll talk more about that (there's a formula the Python module uses to calculate the optimal number of threads). Finally, as a summary: remember, threading is the module we use; do not use _thread, because it's a very low-level module and you don't want to get tangled up in there. So let's move forward with thread data and race conditions. Let's talk now about the implications of having shared data in our threads. In a previous notebook we saw how multiple threads can access global variables in a process (they're actually local to the main thread; the terminology is confusing): basically, threads can access shared data. This is interesting because, as we saw, we can control the behavior of threads just by mutating variables in the global scope of the process. That can be convenient, but it also introduces a few problems, and that's what we're gonna talk about now. The first problem is the issue of race conditions. Conceptually speaking (I have links to the Wikipedia article here), a race condition is a problematic situation, something we don't want in a program, in which the outcome of the program depends on the order in which some instructions execute. That's something we don't want. Say today our program outputs five because thread 1 happens to run before thread 2, and tomorrow it outputs seven because thread 2 ran first: we don't want that sort of random behavior, results changing just because one thread beat the other. We want a deterministic approach; we don't want our program to run successfully today because thread 1 won the race, and tomorrow fail, transferring money incorrectly, or, I don't know,
granting access to a user who hasn't paid, just because another thread ran first. We want our programs to be deterministic. Let me show you the problem of race conditions with this example. We have a global counter variable set to zero, and we define this function, increment, that we're gonna run in threads: we'll create ten different threads and make each of them run a given number of iterations; we'll ask each thread to run a thousand times. So, to follow along: we have a global counter starting at zero, and we instantiate ten different threads, right here, these are ten threads, and each thread runs 1,000 iterations of this code (the 1,000 is passed as a parameter): a thousand repetitions of the same operation, incrementing the counter by one. They're all incrementing the shared counter by one; that's all they do. What should the expected output be? Forget about threads for a second and say you run this sequentially: you run the first thread, it runs a thousand iterations, so the value of counter after the first thread is 1,000. Then you run the second thread, and it increments the counter by 1,000 again, so now you have 2,000. Then another thread runs its thousand iterations, now you're at 3,000, and so on. So the final value is the number of threads times the number of iterations: in our example, ten threads times a thousand iterations, so the result should be ten thousand. That's what we expect from a correctly executed (slow, but correct) program: counter ends at ten thousand. What we'll see instead, in the problematic race-condition behavior, is that these threads step onto each other, mutating data here and there, and the output comes out different from ten thousand. That's of course problematic; we don't want that. So let me clear all this up and run the example: we define the increment function and the iterations variable, we instantiate the threads, and we start them all. They finished very quickly (we're sleeping for just a few milliseconds in each iteration), and now... well, it all worked. No, sorry, the threads did fail. This is interesting: the first run worked, which is something that usually doesn't happen, and I was actually bracing to have to replicate the failure. And that's exactly the problem with race conditions, and it's a great thing that this happened: you might run your code and it runs correctly, like that first run, but then you try it in production and it breaks. And the worst part is when it doesn't visibly break: here I'm making it break on purpose, but the real danger is an incorrect result. If you're confident about the code, because you ran it locally and it worked, or the tests are passing, you will trust this value of counter in production, although, again, it's a faulty one.
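Here's the experiment reconstructed as a sketch; note I've split counter += 1 into an explicit read and write with a tiny sleep in between, which is the same trick the notebook uses to make the race show up reliably:

```python
import time
from threading import Thread

counter = 0

def increment(n):
    global counter
    for _ in range(n):
        current = counter        # step 1: read the shared value
        time.sleep(0.0001)       # force a context switch mid-update
        counter = current + 1    # step 2: write back a possibly stale value

threads = [Thread(target=increment, args=(1000,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # expected 10000; in practice, a random smaller number
```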
So let's do it again: let's create new threads and see how they behave. Well, it seems like it keeps failing now, and check the results: the counter variable is always different, it's completely random: 32,000-something in this case, 47,000-something there. Oh, I'm not resetting the counter, there you go; so, another value again, and even resetting the counter it always changes. It's completely random; you don't know what the value is going to be, and this is the result of a race condition. Why does this happen? It happens because, if you look into the details of the operation counter += 1, you'll see there's no way to perform it internally in just one step. In reality, if we have a value c that is zero and we want to increment it, what happens is: we create an auxiliary value with c plus 1, aux = c + 1, so aux is now 1, and then we set the value back, c = aux. That's the usual process a computer follows, so it's two or three operations at least: read c, compute the sum into aux, and store it back into c. If parts of this sequence are run by different threads, they can step onto each other's data. Say we have counter = 0 (here's the counter) and two threads starting concurrently, t1 and t2, and they both start with the first operation: aux = c + 1, the same operation for both. But they run at exactly the same moment, in parallel. That means c is 0 for t1, so aux is 0 + 1, but c is also still 0 for thread number two. So the result of aux is the same for both: 1 here and 1 here, and then it doesn't matter which one wins writing the value back; we ran two increment operations and the counter ends up at just 1. What we want (let me clear this up) is that when one thread reads the value and computes c + 1, the other thread waits until the first one has stored the new value back, until c equals 1, and only then goes on and reads it. We want the threads to be isolated; we don't want them colliding at the moment of reading or writing data. And we achieve that with what we call thread synchronization. This is a very big deal in computing, a very big deal: it comes up in operating systems, in databases, all the way down in the most fundamental systems. If you want to read more about it, there are tons of books on the subject; grab any operating systems textbook and there will be a chapter on synchronization, I guarantee it.
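By the way, you can actually see that counter += 1 is several operations by disassembling it with the standard library's dis module:

```python
import dis

# counter += 1 compiles to multiple bytecode instructions; the OS can
# preempt a thread between any two of them
dis.dis("counter += 1")
# Typical output (opcode names vary across Python versions):
#   LOAD_NAME   counter
#   LOAD_CONST  1
#   BINARY_OP   += (INPLACE_ADD on older versions)
#   STORE_NAME  counter
```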
The way synchronization works, in a very conceptual manner, is by signaling state: signaling that at this moment I am accessing counter, so please stay away; signaling that I have just finished updating counter, so now you can read it; creating signals that inform everyone that someone is currently using something, that a shared resource is currently busy. And a very good example is this recording light; this is from our own studios, in reality, I took this photo. As a human, if I want to use the recording studio, which is a shared resource (there are several instructors and we all use the same studio), I reach the door and see that the light is on. I will not use the resource, I will not enter the recording studio, because that light means someone else is using it. I will wait for the light to go off, and then I'll step into the studio, because I know someone has just finished using the resource and now I can get in. Potentially there will be multiple instructors waiting outside, and then the question is which one gets to the studio and turns the light on first; that's another issue with synchronization. So, conceptually, synchronization is protecting shared resources by providing these signals, these cues, these hints that say someone is already using the resource. And the big thing about it is that synchronization is usually cooperative. It's not that the light has some physical power stopping me from entering the studio: if I'm a bad instructor, if I'm a bad thread, I can open the door anyway and interrupt the instructor in the middle of his or her recording session, and that's catastrophic: they lose two hours of recording because I stepped in in the middle of it, and if they're in a live webinar, I completely destroy their work. But I stop and wait outside because I am a cooperative instructor; I've decided to stay outside, although nothing physically stops me from walking in. The same goes for our threads: they will use synchronization mechanisms, but they're all cooperative. It works because we decided to write the code that way; with the best of intentions, we write the code to use synchronization, but if there's a malicious piece of code, or a sloppy programmer, someone who forgot to use the synchronization mechanism, then nothing will prevent the shared data from being corrupted. So let's get into the particulars. Our first synchronization mechanism is the lock. It's very simple, probably one of the oldest synchronization primitives in use, and there are multiple synchronization mechanisms, like locks, semaphores, and others. In this case we'll use a lock, one of the simplest: usually a mutual exclusion lock, also called a mutex; it has several names. A lock works like a real-life padlock: there's a shared resource and there's an open lock (I'm gonna try drawing a lock here). Someone uses the resource, so they close the lock; when they're finished, they open the lock, and now it's available for someone else to take, for someone else to use. The way it works is: we create one instance of the lock, and the lock is shared; we're all using the same lock. A thread that wants to work on the resource first tries acquiring the lock; this basically says "I wanna use this resource". When I acquire the lock, I own it, and by doing this I'm guaranteed that no other thread can acquire it.
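Here's the basic acquire/release shape as a minimal sketch (the try/finally is just defensive style, making sure the lock is released even if the protected work raises):

```python
from threading import Lock

lock = Lock()              # one shared instance; every thread must use this same lock

acquired = lock.acquire()  # blocks until available; returns True once we own it
try:
    ...                    # work with the shared resource here, and only here
finally:
    lock.release()         # let the next waiting thread take it

print(lock.locked())       # False: the lock is available again
```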
The acquire operation on a lock is atomic: if I get True back, I am the sole owner of that lock, and I can go do my work. Usually, once you've acquired the lock, you perform some operations on the shared resource — say, incrementing our counter variable — because in that moment, with the lock held, that work cannot suffer a race condition. Anything that is not touching shared data should stay outside the lock, because the lock can slow you down: if the resource is busy, you can't acquire the lock and you can't proceed. So the rule is: only the work on shared data goes inside the lock; everything else stays out. And once you're done, you release the lock — "I'm finished; whoever wants to do this work can acquire it now."

Let's see how that works. In the demo, a thread acquires the lock, sleeps for ten seconds — so it keeps the lock busy for ten seconds — and then releases it. What happens if I try to acquire the lock while that thread holds it? It blocks. Watch: t.start(), the thread acquires the lock, and lock.locked() says yes, it's locked. Now I run the code again with a larger sleep time so you can see it very clearly: I start the thread, it's locked, and when I try acquiring the lock from the main thread, the process just stops — the acquire call blocks until whatever thread holds the lock releases it and we can successfully take it.

We can use a drawing to simplify this: we have the thread, a shared lock, initially open. At t.start() the thread acquires the lock, so the thread is the owner. When the main thread tries to acquire it, it finds it locked, so it just sits there, waiting and waiting, until the lock is released. Once the thread releases it, the lock is free, and the main thread can take ownership of it.

Now I create a new thread, and when it reaches the acquire line, it will block — forever, or at least until someone releases the lock. Let's do exactly that: I start the thread, it's stuck trying to acquire; it's waiting, waiting, waiting; and what I can do, from the main thread, is say: release the lock. The moment I release it, the blocked thread acquires it and runs to completion (it finished immediately because I didn't pass a sleep time).
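A minimal sketch of that blocking behavior (the sleep lengths are mine, picked just to make the blocking visible):

    import threading
    import time

    lock = threading.Lock()

    def hold_lock(seconds):
        lock.acquire()          # take sole ownership of the lock
        time.sleep(seconds)     # keep the shared resource "busy"
        lock.release()          # give it back

    t = threading.Thread(target=hold_lock, args=(10,))
    t.start()

    time.sleep(0.1)             # give the worker a moment to acquire
    print(lock.locked())        # True: the worker thread owns the lock

    lock.acquire()              # blocks here until the worker releases it
    print("main thread finally got the lock")
    lock.release()
    t.join()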
The takeaway is that the thread was stopped, blocked, because I — the main thread — had the lock acquired; the moment I released it, the other thread was able to run. Now let's use all of this in a real example and fix our counter. Remember: with ten threads doing 1,000 increments each, we expect 10,000 in the final counter. The way we'll do it: in each iteration, before modifying this important shared data, we acquire the lock — at that moment we know nobody else can update the counter; we have sole ownership — we update it, and then we immediately release the lock so anybody else waiting for it can get it. So: initialize the counter, define the increment function with the lock (and remember, the lock is a shared resource too — we all have to use the same lock; using different locks wouldn't make any sense), create all the threads, start them all, then join and wait until they finish. They all finish pretty quickly, and the result: counter is 10,000, as expected. Run the whole thing again: 10,000. I could do this a thousand times and I can now guarantee it will work, because no two threads will be modifying counter at the same moment.

Now let's go back to the problems we can face with threads and synchronization. This was a cooperative mechanism: I wrote this code, and I was thoughtful enough to put a lock around the counter access. But that requires me understanding the problem, me being careful enough to include the lock, and my coworker being awake during code review to catch it when I forget. There are multiple things that can go wrong; I have four listed here. The first is that you might forget to use locks at all: if you're in a hurry modifying some global variables, you might not realize you're stepping into a race condition. Not understanding race conditions or shared data correctly is a problem in itself — a lack of experience you will inevitably have when writing your first concurrent programs. The second problem is that you might forget to acquire the lock: if I remove the acquire line (and I haven't tried this error here, so go ahead and do it yourself), nobody actually acquires the lock, and an open lock is like having no lock at all — the problems arise anyway. We're required to acquire the lock at the exact moment we need it. On top of that, the lock is kind of a philosophical construct in our code: nothing in the language actually ties it to counter. I could have modified counter before acquiring the lock, and if you asked me "did you use the lock?" — yes, I did, but nobody checks where I used it. This is a deliberately dumb example, just a handful of lines of code, but in a more complicated program with a ton of shared data and multiple locks all
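For reference, a minimal sketch of the locked-counter fix we just applied (variable names are mine, not necessarily the notebook's):

    import threading

    counter = 0
    lock = threading.Lock()   # one lock, shared by every thread

    def increment(n):
        global counter
        for _ in range(n):
            lock.acquire()    # sole ownership: nobody else can touch counter now
            counter += 1
            lock.release()    # let the next waiting thread in

    threads = [threading.Thread(target=increment, args=(1_000,)) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # 10,000, every single time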
scattered all around, this becomes a real problem: you might put the acquire in the wrong place, or forget to put it in at all. Then there's the problem of your critical section: is the lock actually protecting the thing it should protect? And finally — and this is a big deal — what happens if you forget to release a lock? If I forget to release it, all the other threads will be blocked forever.

Let's actually see that problem. I create a new lock and define this function: it acquires the lock, sleeps, and releases. But I'm going to pass a faulty value in the sleep parameter. Say I submit this as a pull request: you review the code, you see the acquire here and the release there, everything matches, and you say "this code runs perfectly". But what happens if the sleep parameter is invalid, as it is here? The moment this code runs, it raises an exception and the thread stops altogether — which means we never reach the release line and the lock is never released. Let's run it: it blew up with an exception, but the lock had already been acquired, so it's now stuck in that acquired state and nobody else can take it. My code now hangs forever. I'm interrupting this artificially in the notebook, but there's no way to do that in production code, and the lock by itself can't do anything about it.

One way to mitigate this is to pass a timeout to the acquire call: "I want to acquire the lock, but I'm only willing to wait two seconds; if it hasn't been released by then, something is potentially wrong." You can use whatever value makes sense, or you can go all the way and say "I don't want to block at all". Either way, the result is False if the lock was not acquired and True if you successfully acquired it, so the call won't hang; we can then release the lock and everything works again.

This is a very common problem: you read the code, there is an acquire call and there is a release call, but if anything in between — anything before the release — fails, the lock stays acquired forever, because the exception prevents the release line from running. It's a very common pattern in programming in general — accessing databases, files, networks, any important or costly resource — and Python has a way to overcome these difficulties: context managers. The with statement is a context manager, and for a lock it runs exactly the pattern we want: it acquires the lock, tries to run the critical section, and no matter what happens — whether the code succeeds or blows up with an exception — it always releases the lock.
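A minimal sketch of both safety nets — the timeout on acquire, and the with statement that always releases:

    import threading

    lock = threading.Lock()

    # Safety net 1: never block forever when acquiring.
    if lock.acquire(timeout=2):
        try:
            pass  # critical section: work with the shared resource
        finally:
            lock.release()   # runs even if the critical section raises
    else:
        print("couldn't get the lock in 2 seconds; roll back and retry")

    # You can also refuse to block at all:
    if lock.acquire(blocking=False):
        lock.release()

    # Safety net 2: `with` does the acquire + try/finally release for us.
    with lock:
        pass  # critical section; the lock is released no matter what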
So let's do that: I instantiate the lock, start the thread — the lock was acquired — and now we run the example with the problem, the one that blows up. The code blew up, so the thread stopped right there; but because we're using the context manager, the lock is not stuck: we can acquire it again immediately. That's the pattern in action. So finally, to fix our counter code with the with statement, the only thing we do is wrap the increment: with lock, we're guaranteed to have acquired the lock at that point, we do whatever we need to do, and by the next line the lock has been released. It all works as expected: there you go, 10,000. Even though we started this lesson calling acquire and release explicitly, that's actually not the recommended style: the recommended way to acquire and release a lock is the with statement. It's a lot shorter, and you will never forget to release the lock — just say "with lock" and let the context manager do the work.

As a summary of this lesson, or this notebook: we talked about shared data, we talked about race conditions and the problems they cause, and we talked about thread synchronization. There are multiple mechanisms, multiple tools built to synchronize threads, but they are all manual and, to put it one way, all cooperative — and there is no free lunch. It's not that using these tools guarantees your code will be correct forever; sadly, things can still go wrong even when you use the synchronization mechanisms. That's why, to be honest, we will try to stay away from synchronization as much as possible — that will make more sense a little later. But first we're going to see another issue with thread synchronization, the big one: getting into a deadlock.

As promised, we're now going to talk about deadlocks — and you should be scared, because they are a pretty scary thing to hit in real life, and we should avoid them. We'll start by understanding when deadlocks happen, and the first step is to simulate another race condition. I'll run this example and explain what happens: there are two accounts, each with $1,000, and we start two threads that move money around — they'll take $10 out of this account and put it in that one, so now it's $990 here and $1,010 there; then take, say, $500 out, and so on. No money can be created or destroyed: all the money we remove from one account, we place in the other — the way a regular transaction works. Each thread gets a from-account and a to-account: the first thread moves money from a1 to a2, and the second from a2 to a1. While they move money around, at some point a race condition arises and we find incorrect balances — I'm not preventing negative numbers, but an incorrect balance means the total across both accounts is no longer the $2,000 it has to be. So we'll take a first stab at fixing that race condition with locks: we create two locks, one for each account.
Each thread gets a from-lock and a to-lock matching its accounts: for the a1-to-a2 thread, the from-lock is lock1 and the to-lock is lock2. The code first acquires the from-lock, then the to-lock, moves the money, checks the invariant (the sum of both balances), and moves on; if at any moment it finds the incorrect condition of money being created, it stops. And this works, to put it that way: run it, and no money is created up to the iteration limit — a million transfers, and everything held. We can run a second test: another million operations; it finishes, and the sum is still $2,000. No money was created or lost; the state is consistent; and if I check the locks, both are unlocked. This really does seem to be working.

But there is a potentially very dangerous situation waiting for us, and I'll show it right now. I reset the accounts, and the only thing I change is the order in which the locks are passed. Before, I was passing (lock1, lock2) and (lock1, lock2) — the same lock for the same account in both threads. Now I pass (lock1, lock2) to one thread and (lock2, lock1) to the other, start the threads, and this will never end. I can sit here for a thousand hours and it will never finish. I interrupt it and check the balances: the accounts are still consistent, but both locks are locked — both acquired. I can run it again: it blocks forever. What we've just faced is what we call a deadlock: two threads each hold a lock on a shared resource, and neither can move forward because the other holds the resource it needs.

When I was in software engineering school more than ten years ago, our operating systems class used a very popular book — a very good one, very conceptual, quite low-level; do read it if you're at all interested — and it had this quote: "Perhaps the best illustration of a deadlock can be drawn from a law passed by the Kansas legislature early in the 20th century. It said, in part: when two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone." Two trains approach a crossing, both come to a full stop, and neither can move until the other has left — so they will both sit there waiting forever. That is exactly what's happening in our code. Let me copy the code down here so we can trace it. Thread 1 tries to acquire the first lock and succeeds; thread 2 tries to acquire the second lock (which comes first in its swapped order) and succeeds. But thread 1 needs both locks to proceed, so when it goes for its second lock, that lock is already owned by thread 2 — it can't get it, so it sits and waits.
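A minimal sketch of that opposite-order acquisition (the accounts are trimmed out; the two locks alone are enough to deadlock — don't run this unless you're ready to interrupt the kernel):

    import threading

    lock1 = threading.Lock()
    lock2 = threading.Lock()

    def transfer(lock_from, lock_to):
        for _ in range(1_000_000):
            with lock_from:      # thread 1 grabs lock1, thread 2 grabs lock2...
                with lock_to:    # ...and each now waits for the other's lock
                    pass         # move the money here

    t1 = threading.Thread(target=transfer, args=(lock1, lock2))
    t2 = threading.Thread(target=transfer, args=(lock2, lock1))  # swapped order!
    t1.start(); t2.start()
    t1.join(); t2.join()         # in all likelihood, never returns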
Meanwhile, thread 2 needs its second lock — lock 1 — and finds it already acquired by thread 1, so it sits and waits too. Both threads hold a shared resource and are waiting for the other; neither can move, because each is blocking something the other needs. This is a very bad situation. In computer science we talk about deadlocks, and also about starvation and a few other pathologies; here we'll focus mainly on deadlocks. The issue, again: shared resources are each acquired by one thread, and each thread also needs a lock that the other is holding. It's a very common problem.

The usual guidance for preventing deadlocks is, first and ideally: don't synchronize code at all — more on that later. And if you do need to use locks manually: never lock on something forever. Remember that the acquire method takes a timeout; never block indefinitely — always give your code a chance to clean up, roll everything back, and start over. So that's what we'll do here: thread 1 acquires the first lock, thread 2 acquires the second lock; when thread 1 tries to acquire its second lock, we give it, say, a one-second chance. If it hasn't managed to acquire the lock within that time, it gives up, releases the lock it does hold, and starts the whole thing over. This is not fast, to be honest — we're introducing a ton of inefficiency — but it prevents the deadlock: no thread will sit waiting forever on a lock. If, after some time, a thread hasn't been able to acquire all the locks it needs, it just stops, rolls everything back, releases everything, and tries again.

In the code, we define a very tiny timeout and a "locked" flag, and the thread tries to acquire both locks — this one and the second one — each with that timeout. If it managed to lock everything, the flag becomes True; otherwise, it releases whichever lock it did acquire, the flag stays False, and the loop starts over. Walk through it: the first acquire succeeds, so that result is True; the second one blocks for 0.001 seconds and then returns False — "I wasn't able to acquire this lock in that amount of time." Not all the locks were acquired (True and False), so we take the release branch: was lock 1 acquired? Yes — release it. Was lock 2 acquired? No — nothing to release. Back to the top with locked still False, and the process repeats. We'll keep trying to acquire the locks more or less forever — we could give it a maximum number of retries, but in this case we wait until we succeed. I run this code — it takes some time — and as you can see, we reached the iteration limit, which means we never deadlocked and everything worked correctly.
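A sketch of that acquire-with-timeout-and-retry loop (the timeout value and helper names are mine):

    import threading

    TIMEOUT = 0.001

    def acquire_both(lock_a, lock_b):
        # Keep trying until we own both locks; never wait forever on either.
        while True:
            got_a = lock_a.acquire(timeout=TIMEOUT)
            got_b = lock_b.acquire(timeout=TIMEOUT) if got_a else False
            if got_a and got_b:
                return                 # success: we own both
            if got_a:                  # partial failure: release what we got
                lock_a.release()       # and start the whole thing over

    lock1, lock2 = threading.Lock(), threading.Lock()

    def transfer(lock_from, lock_to):
        for _ in range(100_000):
            acquire_both(lock_from, lock_to)
            try:
                pass                   # move the money here
            finally:
                lock_from.release()
                lock_to.release()

    t1 = threading.Thread(target=transfer, args=(lock1, lock2))
    t2 = threading.Thread(target=transfer, args=(lock2, lock1))
    t1.start(); t2.start()
    t1.join(); t2.join()               # finishes this time: no deadlock
    print("done")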
So, again, the recipe is: acquire the locks with a given timeout — we should never block forever. If everything succeeds, we do whatever we need to do, because we now hold both locks; otherwise we release, go back, and try the whole thing again and again until we actually lock the resources.

What's the summary of these first three lessons taken together? (There's a very funny image here from a Mozilla developer making the same point.) What we've just learned is that it's very hard to write correct concurrent code using synchronization techniques. Very hard. There's always a bug in there somewhere; a deadlock can always arise, or an unfortunate race condition, or starvation — and it's very hard to debug the code and to know whether what you're doing is correct or not. That's the honest summary of this whole lesson. You're watching this tutorial because you want to write multi-threaded, concurrent code, and I just want to warn you that keeping it correct is not going to be simple: you need a thousand tests and a ton of eyes carefully reviewing that code, because if you hit a deadlock in production, that's about the worst thing that can happen — your whole system blocks forever until someone goes in and manually unsticks it. Moving forward, we're going to see a more real-life approach to multi-threaded code that solves several of the issues we've faced in these first three lessons.

We've pointed out two major issues when writing multi-threaded programs. The first: if we have too many tasks to perform — like our example of checking 900 prices against our cryptocurrency price server — that's too many tasks to assign each one its own thread; we couldn't create 900 threads. The second, and of course the most complicated one to deal with, is shared data and synchronization: we said writing synchronized code is very hard, error-prone, and hard to debug; deadlocks can happen, data can get corrupted, and many things can go wrong. What we'll see now is a partial solution to many of these problems — it both handles a large number of tasks and gives us a technique that lets us avoid synchronizing code where possible — and it's the producer-consumer model applied to multi-threaded code.

Producer-consumer is a very generic name for a family of models; applied to multi-threading, it means we'll have two main groups of threads. One group — usually just one thread — produces the work: it creates tasks and puts them in a work queue. The consumer threads then pull from that queue to see which tasks are pending. So we have producers, threads creating the tasks that need to be done, and workers pulling those tasks and actually performing the work. The important part is that all of this is synchronized through the queue, which in Python is thread-safe — meaning we don't need to synchronize it ourselves,
because there will be no memory corruption and no race conditions when putting objects into, or getting objects out of, this queue. This is why the model solves both of our issues. First, we can create a fixed number of consumer threads: we have 900 prices to check, so our queue will have 900 tasks, and if we have, say, 10 threads (or 30 — whatever), each thread ends up checking about 90 prices. There will always be exactly ten threads running; each one pulls a task, does the work, and once it finishes, pulls another one — and that keeps repeating until there's no more work to do. That solves the too-many-tasks issue. And because this collection, the queue, is thread-safe, we've also solved the synchronization issue: we will not need to manually synchronize our code.

The queue we're talking about comes from the queue module; it's a thread-safe queue with a very simple API. We instantiate a Queue; we can check whether it's empty; we can put objects with the put method; we can check that it's no longer empty, and of course ask for the size of the queue; and we can get objects out of it. A few important things — I don't know how familiar you are with data structures — this queue is first-in, first-out: a, b, c is the order we put the objects in, and a, b, c is the order we get them out. You can create a last-in, first-out queue too (usually called a stack), but I don't want to get too deep into data structures; the idea is that you put on one side and get from the other, and the order is respected.

One caveat: because this is a thread-safe queue, a few operations give you results that can be stale — not problematic, just stale. Checking whether the queue is empty, or asking its size, might be outdated by the time you act on it, because another thread may have pulled from the queue in between; usually that doesn't matter. put places objects in the queue — tasks, in our case, since this will be a task queue for us — and get takes an object out. And "out" is literal: whenever you get an object, it is removed from the queue; you're not merely reading it, you're taking it out in order to process it. That's good, because it means no two threads will ever see the same task: once one thread gets the object, no other thread can ever repeat that task.

The really important property is that this queue is built for a threaded environment. Say the queue is empty — brand new — and I try to get from it: the call blocks. It's a blocking operation in which I'm effectively saying "I'm ready to process a new task; give me a task to process", and while the queue is empty it keeps blocking, until a producer thread on the other end puts an object in; immediately after that, the get unblocks and we receive the task to perform. So get is made to be blocking, and you have to be aware of that, because you could block forever. To prevent it when necessary, you can pass block=False, or a timeout with a given number of seconds — very similar to the acquire interface on locks, the API of the method we just saw,
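A quick sketch of that queue API (values are illustrative):

    import queue

    q = queue.Queue()
    print(q.empty())        # True

    q.put('a'); q.put('b'); q.put('c')
    print(q.qsize())        # 3 (may be a stale number in a threaded program)

    print(q.get())          # 'a' -- FIFO: first in, first out
    print(q.get())          # 'b'
    print(q.get())          # 'c' -- the queue is now empty

    # A stack-like variant also exists:
    stack = queue.LifoQueue()
    stack.put('a'); stack.put('b')
    print(stack.get())      # 'b' -- last in, first out

    # get() on an empty queue blocks forever by default; these two don't:
    try:
        q.get(block=False)  # raises queue.Empty immediately when there's nothing
    except queue.Empty:
        print("no task right now")

    try:
        q.get(timeout=2)    # waits up to 2 seconds, then raises queue.Empty
    except queue.Empty:
        print("nothing arrived in 2 seconds")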
where we could prevent blocking altogether or pass a timeout. The important part here is that if you don't get an object back, the method raises an exception — queue.Empty — so we have to catch that. A queue can also have a maxsize: we can cap how many items it will hold, and in that case, to put another object you have to wait for someone to consume the one that was put first. With a maxsize of one, you can always be sure there will never be more than one element in that queue — and of course the put methods will likewise raise an exception, queue.Full, when there's no room.

The usual process — you could almost call it a protocol, or an algorithm — for the worker threads is this: try to get an object from the queue; if the queue is empty, just break out of the code (we should really put a return statement there). If a task was successfully pulled from the queue, that means there's still work to do: the worker performs the task, and finally notifies the queue that this given task is now done. That notification is an important property of the queue object, because it also lets you handle failures: you can get a task and, if something goes wrong — an error, say — put it back again. The queue effectively works as a counter of how many tasks have been put in and how many have been processed, and when those balance back to zero, the work in the queue is finished.

One important caveat: treating "the queue is empty" as "all the work is done" is valid only in, let's say, 90 percent of cases — specifically, when all the work was placed in the queue up front and you're just waiting for it to drain. If producers are injecting elements into the queue continuously — producing and consuming happening at the same time — that might not hold: an empty queue might not mean there's no more work to do; it might just mean the producers aren't producing tasks fast enough, and the consumers should block and wait. In that situation you either block forever (the default), or pass a timeout with the semantics "if no more tasks are created in, say, 10 seconds, give up and finish, because there's no more work to do".

With all this theory laid out, let's see a real example. We're going to pull those prices again — I have the server stopped; let's start it, and we won't make it sleep, so it's fast enough; you can see it's working. The objective, as in our first lesson, is to make all those requests, but this time with the producer-consumer model: all of our exchanges, across all these dates — 31 days in March (I said 30 at first; it's 31) — and all the symbols: BTC, ETH and LTC. In total that's 1,023 different requests.
The function we're working with, check_price, is very simple: it receives an exchange, a symbol, a date and a base URL; it builds the request and returns the response — that's everything it does. Let's randomly select an exchange, symbol and date — in this case we get Litecoin from Bitstamp on this date — and check the price; this is the function's output. Note that we don't have prices for every exchange, in every currency, on every date, so the result can potentially be None; I just want you to be aware of that.

What we do now — and this is important — is initialize a queue and put in all the tasks we need our threads to complete. In this particular example we're in the category where we know all the work up front: we can prime the queue once, and there will be no producer threads apart from the main thread — it's not a case where producers keep putting in tasks. Each task has this form: it's just a plain old dictionary saying "get the price from Poloniex for Litecoin on this particular date" — a dictionary representing a pending task. We put them all in, and we have 1,023 elements in our queue.

Next we define a very simple class, PriceResult, which holds a dictionary keyed by exchange, date and symbol, so we can keep track of all our prices. An important note here: I know — because I've read about it; I've done the research — that putting a key into a dictionary is thread-safe in the current CPython implementation, but that does not make the dictionary a thread-safe collection in general. I'm choosing it purely for simplicity. In theory, to make things properly thread-safe, we should use a thread-safe collection — which could be another queue; in a following example we'll actually use two queues, one with the tasks to perform and one with the results of tasks already done, precisely because queues are thread-safe. Here a plain dictionary keeps the explanation simple.

This is the code for our worker. It tries to get a task from the queue, and here we pass block=False — this is important, because we know all the work was produced up front, so a worker that finds the queue empty can be sure there's nothing left to do and can simply exit: just return. If there is a task to perform, the thread makes the request — it checks the price for that given exchange, symbol and date — and puts the price into the results dictionary. Something else I know: there will be no two threads writing to the same key, because there are no repeated exchange/symbol/date combinations — another reason I can get away with the dictionary. Once the price is stored, I notify the task queue that this given task has been done, and the while True loop keeps
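Putting those pieces together, a condensed sketch of the whole pattern (check_price is stubbed out here, and the exchange/date values are illustrative; the real function makes the HTTP request):

    import queue
    import threading

    def check_price(exchange, symbol, date):
        return 42.0   # stub: the real one requests the price from the server

    tasks = queue.Queue()
    for exchange in ('bitstamp', 'poloniex', 'kraken'):
        for date in ('2020-03-01', '2020-03-02'):
            for symbol in ('BTC', 'ETH', 'LTC'):
                tasks.put({'exchange': exchange, 'symbol': symbol, 'date': date})

    results = {}   # safe enough here: CPython dict writes, unique keys per task

    def worker(task_queue, results):
        while True:
            try:
                task = task_queue.get(block=False)  # all work was primed up front...
            except queue.Empty:
                return                              # ...so empty means we're done
            price = check_price(task['exchange'], task['symbol'], task['date'])
            results[(task['exchange'], task['symbol'], task['date'])] = price
            task_queue.task_done()                  # tell the queue this one is finished

    threads = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(10)]
    for t in threads:
        t.start()
    tasks.join()      # block until every task has been marked done
    print(len(results), "prices collected")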
repeating until the queue is empty. We also initialize the results class. Now: how many threads should we start — these threads working in the background, pulling from the queue and so on? There are different recommendations; it's something that takes a ton of tuning to find the optimum number of threads for a given system. In the concurrent.futures package, which we'll see in the seventh notebook, the recommended — actually the default — number of threads for a thread pool is the minimum of 32 and os.cpu_count() plus four (I hesitated there between "times four" and "plus four" — it's plus four). So: either 32, or the CPU count plus four, whichever is smaller. That's a good starting recommendation, and beyond it, it's whatever number works best for you. It also depends on how much memory your computer has and on the nature of the operations — whether they're CPU-bound or I/O-bound, something we'll talk about in the GIL and multiprocessing lessons. For now, 32 is a good number, so all the tasks finish faster.

I create all the threads — we've just created 32 — and again, what each worker receives is the queue, so it can get new tasks to perform, and the results object, so it can publish the results once they're ready. I start all the threads, and now I block on the queue: I sit and wait until the queue is empty — basically until all the tasks have been cleared from it. This is why we need that task_done notification: each worker is telling the queue "hey, I've just finished doing this thing", and once the queue's counter gets back to zero, we know all the work has been done. At that point all the threads have exited — "queue is empty; my work here is done; exiting" — all 32 that we started have finished. And that's it: we check the prices, just a few random ones; there you go — some are None again, some have more decimals; for example, for ether on this day on this exchange we don't have a price. We've filled in all the prices we needed with our producer-consumer model.

The summary of this part: this is a completely different mental model for designing a multi-threaded system. It's still a multi-threaded system, but we didn't need any manual synchronization — which is great, because the queue is thread-safe — and by putting a large number of tasks in the queue and creating this pool of consumer threads, we're also making sure we don't overload the system. The number of workers is a number you can always tune: for a given worker count, record the load on the system — CPU, memory, all that — and how much time the whole thing took, and keep tuning and perfecting it.
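For reference, the default that concurrent.futures.ThreadPoolExecutor applies (since Python 3.8) when you don't pass max_workers:

    import os

    # ThreadPoolExecutor's default worker count:
    max_workers = min(32, (os.cpu_count() or 1) + 4)
    print(max_workers)   # e.g. 20 on a 16-core machine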
Moving forward, we're going to look at the GIL — a very interesting thing, and you've probably heard about it already. I'm sorry, but you will not like what we're going to see in this lesson; it's not pretty at all. It's actually one of the major issues with Python. Let me try to build a proper introduction here, so just follow my lead: we'll walk up to the exact moment we hit the GIL, and then see what it is and what the problem with it is.

The story: we're going to compute prime numbers — actually, check whether numbers are prime — and we'll try a multi-threaded approach. The first thing we do is define the function is_prime: given a number, it tells you whether it's prime. Pretty simple so far. I have a list of large numbers here (there's a file in the materials if you need it), and because they're large, it takes some time to check each one — around 0.5 or 0.6 seconds per number (there's some timing noise in the notebook). So let's say half a second to determine whether a number is prime. With ten of these numbers, we'd expect the whole check — all ten numbers — to take around five seconds. Let's make a sequential approach — not multi-threaded, just checking each number one after the other — and it actually took four seconds; even a bit faster than the estimate.

We immediately see that this is a task we could parallelize: there are ten independent numbers. Say we had four numbers and four cores (my computer actually has 16 cores — that's literally what's in this machine, and it's pretty common today for a computer to have 8, 16 or 32): we could put each CPU core to work on one number, and they'd all work in parallel. How much time should that take? If each check takes half a second and they all run in parallel, the whole thing takes half a second — or if one of them takes 0.6 seconds, then 0.6 seconds, because the others will be long finished by the time the slowest one is done. If we can run all the tasks in parallel, the total time is just the time of the slowest task. In the sequential approach, of course, that's not the case: we check one number, then the next, then the next, so the total time is the sum of all the individual elapsed times.

So let's write our multi-threaded code. We create a function, check_prime_worker, that appends the value to a results list only if it's prime — remember, a list might not be officially considered a thread-safe collection, but in CPython it effectively is, and this is for educational purposes anyway. We create ten threads, one for each number: each number is a task, with its own thread, and hopefully they'll all run in parallel. I have 16 cores, these threads have a pretty low resource footprint, and I'm not running anything on this computer
aside from this notebook and this browser — so if things run in parallel, this should go very fast. Let's see the result. How much time does it take? Ouch: 4.2 seconds for the multi-threaded approach. It seems the threads were running concurrently, but they were not running in parallel. What we've just faced is the global interpreter lock — Python's GIL. Let me first show you why we have this issue, and in understanding it, we'll understand what the GIL is.

Assume again we had three numbers to process — say 3, 8 and 7 in the drawing — and we need to check whether each is prime. In the sequential approach, we do 3 first, then 8, then 7, and the total time of our program is the sum of the individual times for each check. In the multi-threaded approach, we start three threads — thread 1, thread 2, thread 3 — and we expect them all to run in parallel, all at the same time: say this one takes 0.5 seconds, this one 0.6, and this one 0.5, so the total time to process all the threads should be 0.6 seconds. That was the expected outcome — each CPU core assigned a number, everything finishing very fast. But that is not what happened: the final time was 4.21 seconds. What actually happened is that the three threads started, and because of the GIL, in CPython no two threads can run at the same time. One thread started processing, and before it was finished, execution was handed over to another thread, then another, then back again. At any given window of time, there is only one thread effectively running — even if you had a thousand cores, in CPython only one thread runs at a time.

And that actually makes things even slower than the sequential version. When we ran sequentially, each number got the CPU's full attention — check this number, finish, move to the next one; each piece of code finished before the next started. With threads, a computation can be half done when the CPU is transferred to another thread: all the values you had loaded in the CPU — the caches, everything — are suddenly cleared out and loaded for the other thread; then, halfway through its work, that thread is evicted too, and when control comes back to you, everything has to be loaded into the CPU again so you can resume. That's why it's even slower this way. The question, of course, is: why does Python do this? Why can't we have threads that run in parallel? It seems absurd — we reached for Python threading precisely to speed things up, to run things in parallel.
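A self-contained sketch of the experiment, if you want to try it yourself (my candidate number is a known prime near 2**32, repeated ten times; swap in bigger primes for slower, more dramatic timings — on CPython, expect both elapsed times to be in the same ballpark):

    import threading
    import time

    def is_prime(n):
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    NUMBERS = [4294967291] * 10   # a known prime, repeated: pure CPU work

    start = time.time()
    sequential = [n for n in NUMBERS if is_prime(n)]
    print(f"sequential: {time.time() - start:.3f}s")

    results = []

    def check_prime_worker(n):
        if is_prime(n):
            results.append(n)   # append is effectively thread-safe in CPython

    start = time.time()
    threads = [threading.Thread(target=check_prime_worker, args=(n,)) for n in NUMBERS]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Roughly the same elapsed time as the sequential run: the GIL lets
    # only one thread execute Python bytecode at any instant.
    print(f"threaded:   {time.time() - start:.3f}s")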
The reality is that in CPython — the main Python implementation, the one you're almost certainly using — you can't run two threads at the same time. There is no parallelism. The full reason is a little more advanced; I've linked a talk by Larry Hastings that is very good — it explains why there is a GIL in the first place, and why it matters. But basically: the GIL is Python's global interpreter lock. It's a lock — exactly what we saw in the previous lessons — and it's there to help Python prevent multiple threads from corrupting shared data. Remember, all your threads run inside the same process, and they all share the same Python interpreter. Just as we, as users writing code, created our own locks to protect our own data — our variables — the Python interpreter itself has important shared internal state that multiple threads would see, and the interpreter wants to protect that data too. So the CPython core developers introduced a lock of their own: one global lock over the interpreter, so that two threads can't corrupt that data. I do recommend that talk; it actually makes the case for why we should be thankful for the GIL — it made CPython development a lot faster back in the day, which meant more popularity. Personal opinions aside, it's a very good explanation of why we have the GIL.

So what's going on? You're sitting here watching a tutorial about Python concurrency — about making our programs faster — and suddenly I'm telling you that you cannot run two things at the same time. Well, there is light at the end of the tunnel, and it comes in the form of I/O-bound tasks. Let me replay what we just did, changing colors in the drawing, and keep it to two threads, T1 and T2, for simplicity. The statement was: no two threads can run at the same time. So this one here and this one here are alternating — they're both running concurrently, in the sense that both were started and both are in flight, but they are not literally executing at the same time, in parallel; they keep switching execution time back and forth. That is what we mean by concurrency. Ideally we wanted real parallelism — two things being processed at the same instant — but that's not what happens: the code running in T1 is at some point interrupted, and the context is shifted to T2. The interpreter says "hey, you can't run here anymore; share the CPU with the other thread" and kicks it out of its CPU time, handing the execution window to another thread. And — let me say something that sounds silly — why was the thread interrupted? The thread was interrupted because it was running.
That sounds dumb, right? Like saying "why did I fall? Because I was walking." If I'm not walking, I won't fall — and if a thread is not running, it can't be interrupted. Follow me here, because this apparently silly observation will make sense in a second. Checking whether something is prime is a CPU-bound task: by its nature it needs CPU — as much CPU as it can get — CPU, CPU, CPU, until it finishes with the answer. And because this task is so CPU-hungry, the interpreter at some point decides to stop it and shift the context to another thread.

But there are other kinds of tasks — I/O tasks, for example — that are more short-lived, working in bursts. Say we want to get a price from our server and then do some computation. That task looks more like: do some processing; request the price from the server; wait until it's done; then do a little more processing. During that middle stretch, our algorithm is doing nothing but waiting for a result to come back over the internet. And do you remember, from our conceptual lessons, the comparison of how much slower a network connection is than the CPU in relative terms? Network is very, very slow. Waiting for a file to be written or read, networking — anything that is I/O is very slow.

So picture the three steps (I'll switch colors here and draw the network in blue): step one, do some processing; step two, get the price from the network and wait; step three, do more processing. As we make the network request — not before, but right as we make it — the thread informs the Python interpreter: "hey, I'm going to sit here and wait for some time, so you can switch the context to another thread; I'm done for now." The interpreter then hands execution to another thread, and only checks back when the result has arrived and the thread needs the CPU again. My drawing isn't great, but the point is that we've sped everything up, because the threads are cooperatively waiting: each one informs the interpreter that it won't be doing any processing soon, so the interpreter can shift to some other thread. It's a sort of cooperative multitasking, in which the thread says: "I am waiting for a network response; I don't need the CPU; move it to some other thread, and just check back with me when I have the result — I'll need the CPU then." And that's why, when we ran those price requests in parallel earlier, the total time to check three exchanges was practically the same as checking one.
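A minimal sketch of that effect, using time.sleep as a stand-in for the network wait (sleep, like real I/O, releases the GIL while the thread waits; the exchange names are just labels):

    import threading
    import time

    def fetch_price(exchange):
        # Stand-in for a slow network request: the GIL is released
        # while this thread is just waiting.
        time.sleep(0.8)
        return 100.0

    start = time.time()
    threads = [threading.Thread(target=fetch_price, args=(ex,))
               for ex in ('bitstamp', 'poloniex', 'kraken')]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Three 0.8-second "requests" overlap, so the total is about 0.8s, not 2.4s.
    print(f"elapsed: {time.time() - start:.2f}s")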
If checking the price on one exchange takes 0.8 seconds, checking three of them takes just about 0.8 as well — any difference is probably a rounding artifact (running it again, it was actually even faster; timing noise, I don't know why). So here is the summary of the whole story: the GIL is going to be a problem if, and only if, you are running CPU-bound tasks. This is the moment to start inspecting your code, reasoning about it, reading it, and understanding what type of work it does. If you have CPU-bound code — code that genuinely needs CPU processing power — then threads are not such a good idea. Do you know what is a good idea in that case? Multiprocessing. Multiprocessing is the solution to our GIL problems: with it, we'll be able to overcome the limitation of the GIL. But, as we've seen on several occasions in this tutorial, there is no free lunch — we win something on the GIL side, and we lose something elsewhere.

Let's talk about it conceptually first. What we did so far was create multiple threads — multi-threaded programming: our single Python process was spawning several threads, and they were doing work concurrently. And we said it's impossible for the CPython interpreter to run multi-threaded code in parallel. In this lesson we'll see a way to create multiple processes that run at the same time. So, to check three prime numbers, instead of creating three threads, we create three processes, and each process takes care of a different number. The GIL is something created within one Python process, to protect shared data inside that process — it only affects threads. It will not affect multiple processes: we will be able to run multiple processes in parallel. That's the good news. The bad news is that processes are a lot more expensive than threads. Creating a process means setting up a whole machinery in the operating system: allocating memory, sharing or duplicating file descriptors, initializing the code, the stacks, the heaps — all those things we saw in the introduction — all very expensive work before the process can do anything useful. So creating multiple processes has to be justified. But again: with multiple processes, there is no GIL limitation, and if we have multiple cores in our machine, we will see code actually being run in parallel — and that's a good thing.

So let's put it into action and create processes. One important thing: the module we'll be using is multiprocessing — don't confuse it with the subprocess module; it's the multiprocessing one — and it's very commonly imported as mp, which is what you're seeing here. Don't worry too much about this next detail: on macOS we fall back to the "fork" start method to keep the code simple; otherwise we'd have to duplicate file descriptors and memory and deal with a ton of things. Now let me show you the Process API, which follows pretty much the same shape as the thread API: we create an
instance of Process — instantiated from the multiprocessing module — we can start the process at any moment, and we can wait until the process is done. In this case, the process ran right there; it's done. Once the process has finished, it's important to close it — remember, it has a ton of resources associated with it, so we free them up.

A couple of interesting things here. First, we're creating the process and passing it a piece of code as its target — but if these processes are completely separate units, how is it possible that this new process gets a reference to code defined here? And second, that process is off running, doing some work and printing, yet the output shows up here, in my main process. The concept — which is a little more advanced — is that with the fork start method we set at the top, the operating system duplicates the process. There's this notion of parent and child processes — it's a low-level characteristic of the operating system — and the child is created as a copy of the parent, so it inherits all the code that was defined (the say_hello function, in this case) and also the file descriptors, the I/O — all inherited and, in a sense, duplicated in the second process. That's why the second process is completely independent and isolated, yet has copies of pretty much everything we have here, and can run the way it runs. Once it finishes, we close the process, free up the resources, and move forward.

So let's actually do the prime example. I define the is_prime function, read all our numbers, and then define a very simple function that, given a number, prints whether it's prime or not. We create ten processes — again, you can think of it as ten different tasks, one process per task — start them all at the same time, keep track of the elapsed time, wait until they're all finished, and see how long it took. Let me do that quickly: everything finished in 0.76 seconds. This finally feels parallel: remember, checking all ten numbers sequentially was around four or five seconds, and running ten processes took 0.76. So I close all the processes and free all the resources we were holding.

Now, notice that I'm printing each result from inside the process. That's because there is no shared data in this example. In the threaded version, I created a dictionary or a list — shared data — and passed that results collection to the worker, so the worker could compute the outcome and write it into that shared piece of data; all the threads put their results in there. With processes we sadly can't do that: there is no shared memory — the variables, the state, are not shared; a process is a completely new, independent unit. Thankfully, there are a few ways to get results across. One of them
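A sketch of that flow (the numbers are illustrative; the __main__ guard matters because on "spawn" platforms the child process re-imports this module):

    import multiprocessing as mp
    import time

    def is_prime(n):
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def check_prime_worker(n):
        # No shared memory: each process prints its own result.
        print(n, is_prime(n))

    if __name__ == '__main__':
        numbers = [4294967291, 2147483647, 999999937]
        start = time.time()
        processes = [mp.Process(target=check_prime_worker, args=(n,)) for n in numbers]
        for p in processes:
            p.start()      # real OS processes: these do run in parallel
        for p in processes:
            p.join()       # wait for each one to finish
        for p in processes:
            p.close()      # free the OS resources (Python 3.7+)
        print(f"elapsed: {time.time() - start:.2f}s")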
Now, what you're seeing right here is that I am printing the result directly, and that's because there is no shared data. In our previous example I was creating a dictionary or a list that was shared data, and I was passing that results collection to the worker; the worker could compute the outcome and write it into that shared piece of data. If I go back here, the shared piece of data was here, and all the threads were putting their results in it. With processes we sadly can't do that, because there is no shared memory; the variables, the state, are not shared. A process is a completely new, independent unit. Thankfully, there are a few ways around it, and one of them is with queues.

Just as we used the queue module before, the multiprocessing module has a few queues of its own that are very similar. The good news is that even though our two processes are completely independent, this queue will be kind of shared, so both can read from it and write to it. Again, I can't change a variable in one process and see the change in the other; that state is private to each process. But we can create this shared queue that can be read from and written to by both processes.

The way we're gonna do that is by creating two queues. In our previous example we created just one queue and then used a dictionary to keep track of the results, which is something we can't do here. So we create two queues, one for work to do and one for work done, and we prime the work-to-do queue by putting in all the tasks we want to perform, in this case all the numbers we want to check for primality. We define how many workers we'll have, MAX_WORKERS equals five, a number I just picked, and this is the code each process is going to run. It does task_queue.get_nowait(), which is a synonym for get(block=False) and is also present in the other Queue class; I just didn't use it before because it wasn't that explicit. Once we get a number, we check if it's prime and put the result into the results queue with results_queue.put(). That get_nowait() can potentially raise an exception, queue.Empty, which means the queue is empty and there are no more tasks; in that case the process should just finish. Everything sits inside a while True loop, as usual. Then we create all the processes, put them all to work, and wait until they're done. There you go: they all finished, and in work_done we have the results for all the data. So you see how this queue acts as a buffer: one process can put things in, the other process can read them out, and we can exchange information between the two in a safe manner.

There is also the concept of a pipe, which is more like: we have process one, p1, then we have a tube, the pipe, hence the name, and another process on the other end, and they send messages back and forth. The standard pipe is bidirectional, but you can create one that works in just one direction; that's perfectly possible. A pipe gives you two connection objects, each with send and recv methods: a connection on the left side and a connection on the right side, if you like. In this case I've named them main, for my main process, and worker. We define this function that receives a number to check for primality plus a pipe connection, and once it has checked, it sends a message saying: hey, this is the number and this is the result, prime or not. I create all the processes, start them all, and then I can start receiving all the values from the connection. Actually, by the time I read them they had all already been published, so I got them immediately; but basically I do connection.recv() for each message, and I know that I'm waiting for ten different messages.
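Here is a sketch of that two-queue worker pattern, again with made-up numbers; note that multiprocessing queues raise the queue.Empty exception from the standard queue module:

    import multiprocessing as mp
    import queue  # multiprocessing queues raise queue.Empty too

    def is_prime(n):
        return n > 1 and all(n % i for i in range(2, int(n ** 0.5) + 1))

    def worker(work_to_do, work_done):
        while True:
            try:
                n = work_to_do.get_nowait()   # same as get(block=False)
            except queue.Empty:
                break                         # queue drained: this process ends
            work_done.put((n, is_prime(n)))

    if __name__ == '__main__':
        numbers = [15492781, 15492787, 15492803, 15492811]
        work_to_do, work_done = mp.Queue(), mp.Queue()
        for n in numbers:
            work_to_do.put(n)                 # prime the queue before starting
        MAX_WORKERS = 5
        procs = [mp.Process(target=worker, args=(work_to_do, work_done))
                 for _ in range(MAX_WORKERS)]
        for p in procs:
            p.start()
        for _ in numbers:
            print(work_done.get())            # one result per task we queued
        for p in procs:
            p.join()

And a rough sketch of the pipe version; every child writes a small message into the same worker-side connection, which mirrors the description above, and main_conn/worker_conn are just the names I picked for the two ends:

    from multiprocessing import Pipe, Process

    def check(n, conn):
        # reusing is_prime from the sketch above; the child pushes
        # a (number, result) tuple back through its end of the pipe
        conn.send((n, is_prime(n)))

    if __name__ == '__main__':
        main_conn, worker_conn = Pipe()       # bidirectional by default
        procs = [Process(target=check, args=(n, worker_conn))
                 for n in [15492781, 15492787]]
        for p in procs:
            p.start()
        for _ in procs:
            print(main_conn.recv())           # one recv per expected message
        for p in procs:
            p.join()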
Pipes are, to be honest, not something I find myself using all that often. They're an interesting tool to know about, and to know that they're there, but I find them more error-prone; I prefer to use queues.

And finally, the multiprocessing module has the Pool, the multiprocessing pool, which basically lets you create multiple processes and send them work to do through a very high-level interface, without having to deal with the low-level nuances of shared data. In this case we create a pool of four workers, four processes, and I get a reference to it in pool. Then I can do an apply_async: okay, just run this thing for me, and I'll come back for the results. The result comes from calling .get() on the handle. If I just do that two times, getting each result before firing the next task, I am kind of making this synchronous. But if instead I fire both first, getting references r1 and r2, and only then call r1.get() and r2.get(), now it's a lot faster, because I'm not running them synchronously. Let me put it this way to see if it's clearer: I create the tasks and I'm immediately firing them all. apply_async is not blocking; it's just forwarding everything to the pool, saying hey, I need you to run this thing, just get started with it. At any point I can even put something like time.sleep here, print "sleeping", then print "getting results", and we immediately get the results that the pool had already produced. So again: you fire the task, you get a reference to a result, and once you want to check the result you call .get() on it. This is amazing because we're not dealing with queues, with pipes, with anything low-level: you just fire tasks and, as they are completed, you get your results.

There is another important method on the pool, which is the map method: it basically maps a collection over a given callable. It looks a lot like the regular map function in the standard library, or a list comprehension. If we do that, very quickly we can process all the values we had before.
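As a sketch of that Pool workflow, with the same hypothetical is_prime helper; apply_async returns immediately with a handle, and .get() is the only place we actually block:

    from multiprocessing import Pool

    def is_prime(n):
        return n > 1 and all(n % i for i in range(2, int(n ** 0.5) + 1))

    if __name__ == '__main__':
        with Pool(4) as pool:
            # fire both tasks up front; neither of these calls blocks
            r1 = pool.apply_async(is_prime, (15492781,))
            r2 = pool.apply_async(is_prime, (15492803,))
            # ...we could sleep or do other work here while they run...
            print(r1.get(), r2.get())   # block only when we need the answers

            # map is the high-level shortcut: one callable, one collection
            print(pool.map(is_prime, [15492781, 15492787, 15492803]))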
So what's the summary? Usually we're gonna prefer multi-threading: threads are lightweight, while you can't just fire up a thousand processes, because you'd completely kill your computer. And usually, in our day-to-day work, I don't know about you, but aside from scientific computing and data science, the usual tasks are, in my experience, I/O bound: reading files, writing files, reading from the network, writing to the network, etc. In that case multi-threading makes a lot of sense; I/O-bound tasks, remember, are well suited for threads. But if your task is CPU bound, if it needs a ton of computing power, then multi-threading will not help you; on the contrary, it will make things slower for you. That's when you resort to multiprocessing. But remember, creating processes is more expensive, so you always have to keep that in mind and be conscious, through profiling, of the optimal number of processes you can create without crushing your entire system.

So that's all the conceptual ground we had to cover: we talked about threads, the producer-consumer model, race conditions and synchronization, multiprocessing and why it matters for parallelism given the Python GIL. To finish this tutorial, I'm gonna show you two high-level libraries, or packages, that let us write multi-threaded or multiprocessing code in a very clean, abstract way: first concurrent.futures, and then the library that I have created, which is parallel. The idea, the high-level idea, is not to get into the internals of creating threads, creating processes, synchronizing them, etc., but to work at as high a level as possible. The advantages are clear: if we don't have to write synchronization code, there is zero chance we introduce a synchronization bug or a deadlock; and if we don't have to create threads and processes manually, there is zero chance we forget to close a process, for example, and clog our computer with unused resources. So: as high level as possible.

The first one is concurrent.futures, and it's built into Python, part of the standard library; there is nothing you need to install, just use it. It has this very neat interface, the executor, which is either thread-based or process-based: you can create a ThreadPoolExecutor or a ProcessPoolExecutor, and they both have the same API, since both are subclasses of Executor. On any executor, whatever it is, you can submit individual tasks (similar to the apply_async we saw in the multiprocessing pool), or you can use higher-level methods like map.

So let's actually see it in practice. We have this check_price function that, as usual, checks prices and returns the result. We instantiate a ThreadPoolExecutor as a context manager, in this case with ten workers, completely overkill, doesn't matter, and we submit a task to perform, in this case check_price for these values, and what we get back is a future. This is the major change the concurrent.futures library introduced: the concept of a future. A future is an object, and we can actually check its interface right here, that represents some given computation that might be happening at this very moment, and it has all these methods: try to cancel it, check if it's running, check if it's done, and, more importantly, try to get the result out of it. Once we have the future handle, we can call future.result() and get the result of the computation, in this case the price. If, by the time you ask for the result, the task is not done, you're gonna block until it actually finishes, provided your timeout parameter was None; if you pass a timeout and the task takes longer, it's of course gonna blow up.
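Here's a sketch of that submit-and-result flow; this check_price is a stand-in for the tutorial's function, faked with a sleep instead of a real API call, and the price is invented:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def check_price(exchange, symbol, date):
        # stand-in for the real I/O-bound price lookup
        time.sleep(1)
        return {'exchange': exchange, 'symbol': symbol,
                'date': date, 'price': 9000}

    with ThreadPoolExecutor(max_workers=10) as ex:
        future = ex.submit(check_price, 'bitstamp', 'BTC', '2020-04-01')
        print(future.running(), future.done())  # peek at the in-flight task
        print(future.result())                  # blocks until the task finishes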
There is also this interesting method that might be useful to you, add_done_callback, which basically gets invoked with the future whenever it finishes. Moving forward: this is, again, the interface of the future, checking whether it's done or not, and so on.

There is also a map method, and it's interesting because it has a similar API, actually the same API, as the built-in standard library map function: it receives a callable and the collections to map over. In this case I am passing these parameters, the exchange, BTC as the symbol, and the date, for all these exchanges, and we have all the results available right there. The disadvantage of map is that you have zero flexibility in the parameters: you can only pass lists of iterables whose items become positional arguments of the function. There's no support for named arguments, no support for variable-length arguments; it's just very strict.

So we can combine the submit method with a module-level function, as_completed, in a very common pattern; let me walk through this code with you. We have a list of exchanges, and we create a dict comprehension: for each exchange we submit a job, the task to perform, which is check_price with this exchange, this symbol, this date; and the value is the exchange we actually used. So we end up with a mapping, a dictionary, from futures to exchanges. We can then use the as_completed function on those futures: for each future yielded by as_completed, it doesn't matter in which order they were submitted; the first one that finishes shows up first, and its result is right here. as_completed is a blocking function that pretty much returns the futures as they complete, and from the futures dictionary we can look up the exchange given the future. This works because the future is hashable, so it can be used as a dictionary key to reference it; we submit, and we use the returned future as the key of a dictionary. Since as_completed returns futures as they finish, if I run this several times I'll probably get a different order: kraken first, then bitfinex, then bitstamp; next time bitfinex, bitstamp, kraken. The order changes because futures are returned as they are completed; whichever is done first comes out first, and so on.
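A sketch of that futures-to-exchanges pattern, reusing the hypothetical check_price from the previous sketch:

    from concurrent.futures import ThreadPoolExecutor, as_completed

    EXCHANGES = ['bitstamp', 'bitfinex', 'kraken']

    with ThreadPoolExecutor(max_workers=10) as ex:
        # the future is hashable, so it works as a dictionary key
        futures = {
            ex.submit(check_price, exchange, 'BTC', '2020-04-01'): exchange
            for exchange in EXCHANGES
        }
        for future in as_completed(futures):  # yields in completion order
            exchange = futures[future]
            print(exchange, future.result()['price'])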
If I'm using a producer-consumer pattern instead, it's pretty much the same thing, so let's move forward with what we already know. We prime the queue, so we have all the tasks in it; the worker receives, as usual, a task queue and a results queue, two different queues; it tries to get something from the task queue, and if there is nothing left the queue raises Empty; otherwise it checks the price and puts the result in the results queue. The way we run it is we just submit a bunch of these jobs, getting futures back, and then we sit and wait until the work-to-do queue is empty, meaning all the tasks are done. This finishes eventually, there you go, and we can see that the work-done queue has all the results; we can pour them all into a results dictionary and use it.

So, a couple of important takeaways here. The first one is that we can completely change the behavior of this code by just changing one name: say ProcessPoolExecutor here instead of ThreadPoolExecutor, and boom, you go from multi-threading to multiprocessing in a completely abstract way. Of course, in this case, since we're using threading queues we should also switch to multiprocessing queues, but again, turning multi-threaded code into multiprocessing code is as simple as changing that one name (I'll show a concrete sketch of this swap right before the final summary). The second important thing, as you've seen, is that we're always using the executor as a context manager, with the with syntax, because that way all the resources used by those threads or processes are freed up once the work is done. Those two are very neat features of the concurrent.futures package.

What I can say here, and this is a very important summary of the whole tutorial, is that if you're gonna do something that needs either multi-threading or multiprocessing, I recommend you first try the concurrent.futures library. It's built in, it's bulletproof, it's been around for a long time, and it's very high level: you don't need to manually synchronize anything. So try to get it done with concurrent.futures first; if that doesn't work, then you can move down to low-level threading or multiprocessing.

So let's move on to the second library, which is the library that I have created: parallel. It's available on GitHub and you can install it very quickly with pip. What I tried to do in this library was improve the map method (there are a few other methods that I'm not showcasing here) with more flexibility. The function parallel.map receives a function, a list of iterables, and potentially extras to pass, and it runs everything in a multi-threaded environment by default, unless you change the executor to be multiprocessing instead of multi-threading; so you can switch the executor you're using with a simple keyword argument. I've also put a ton of emphasis on good error handling: you can pass named parameters or optional arguments, and, on purpose, I'm gonna break this execution. I'm gonna change the base URL so this will not work; every other parameter here receives the good base URL, but this one right here receives a base URL pointing at port 8,000, so it's gonna break. Fire it and you'll see it just breaks. But we have included a parameter, silent, so we can keep errors silenced and still get the result of the execution, and we'll see here that the bitstamp variable is a failed task. We have this FailedTask abstraction that lets us understand why the code failed and what the parameters were; that's the idea of the FailedTask. So if you're interested in high-level parallel computing, just check out the library; again, my recommendation, and we can use this as the overall summary, is to try concurrent.futures first.

And this is it, we've reached the end of the tutorial. Please keep an eye on the updates for the exercises and projects we'll be posting; it's gonna be important to practice what we've seen.
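As promised, here is that one-name thread-to-process swap as a concrete sketch, this time with a CPU-bound stand-in task where processes actually pay off:

    from concurrent.futures import ProcessPoolExecutor  # was ThreadPoolExecutor

    def is_prime(n):
        return n > 1 and all(n % i for i in range(2, int(n ** 0.5) + 1))

    if __name__ == '__main__':
        # same submit/map API as the thread version; only the class changed
        with ProcessPoolExecutor(max_workers=4) as ex:
            results = list(ex.map(is_prime, [15492781, 15492787, 15492803]))
        print(results)  # by now the context manager has freed the processes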
As a quick summary, we can go through a kind of quick checklist of everything we've learned and how to organize it in our minds. The main purpose of this tutorial is for you to understand when to use each tool, how to use it, and what it means to use it. The easiest thing would be to say "let's just always use multi-threading", but we know that's not the answer: if our task is CPU bound, that won't work. And even once we've decided on threads, we could say "let's use the threading module and create threads ourselves", and then we run into all the synchronization issues we saw: race conditions, deadlocks, etc. So again, the most important thing is to understand the concepts and to pick the right tool for the job.

First: do you need to write multi-threaded or multiprocessing code at all? Maybe you don't have a multi-threading problem: maybe you need a job queue, as we saw at the beginning, or a big-data architecture like Dask or Spark. So first of all, do you need multi-threading, do you need multiprocessing? Then, if you do move forward, work out whether it's a CPU-bound task, which calls for multiprocessing, or, as is usually the case, an I/O-bound task, which calls for multi-threading. Once you've settled all that and you need to write concurrent code, say multi-threaded, then start working downwards through the levels of abstraction of the libraries. Can you use a very high-level library that completely abstracts you away from the fact that you're creating multiple threads, away from synchronization, deadlocks and all that? If that's available, that's great. Natively, that's concurrent.futures; actually, I recommend you use concurrent.futures as much as possible, because it's built in, it's in the standard library, it's bulletproof: a lot of eyes are set on that library and it's proven to work fantastically. So if you can, start there, high level, with concurrent.futures. Then, if your problem gets more complicated: can you switch to a different library, like parallel, something that is still high level but a little bit more flexible? And if you can't, then start going down to the threading package, always being conscious of the issues, the doors you're opening, and the problems you might face.

So again, the most important thing about multi-threading, multiprocessing and concurrent programming is understanding when you need to create concurrent software and what problems that will involve. Thank you very much, it's been a wonderful experience; let's get in touch on Twitter or any other medium, actually mainly Twitter. Hopefully I will see you at the next PyCon.