Learning Golang: Concurrency Pattern Background Job

Video Statistics and Information

Captions
Hello, my name is Mario. Welcome to another Learning Go video. In today's episode I'm covering another concurrency pattern: the implementation of background jobs. What is a background job in the context of this video? Because we're using concurrency and the primitives of the language, like goroutines and channels, I'm focusing on things that happen in-process, in memory. So when I say "background job" I mean a process in charge of doing some work behind the scenes, initialized by another parent process; in practice, this means goroutines launching other goroutines. Let's look at the code so you can see what I mean. As usual, the link to the examples I'll be showing is in the description of this video, so feel free to clone the repo and check out the examples yourself when you have some free time.

What I'm going to cover first is an example that implements graceful shutdown as well as background jobs. Like I said in the beginning, this is not background jobs in the sense of a distributed processing queue, for example RabbitMQ, receiving events and creating jobs based on those events or messages. If you want to do that, I have another video; again, the link is in the description. This example is in-memory, using goroutines, channels, and the other primitives that exist in the language.

I have a couple of functions here. The first, listenForWork, implements the logic for creating the workers; I define an arbitrary number of them, in this case five. The program listens for OS signals, in this case SIGTERM, and every time a SIGTERM is received by this program, a new worker is invoked to do some work: it calls a function named doWork, which prints a message, sleeps for three seconds, and then prints another message.

The important thing is how it's implemented. There is a channel receiving the messages; I decided to use the Notify function from the signal package to listen for signals, but you can assume this could instead be, say, an HTTP endpoint, or whatever you use to receive events from outside your binary. Then I define a buffered channel, which in this case carries an empty struct, but assume this is the type of message you'll be receiving: it could be a string, JSON, an array of bytes, whatever you want. Its capacity indicates how many workers can be queued for execution. The logic works thanks to that buffered channel: I range over it, and every time there is a new message I launch a goroutine to do the work.

Let's see how that works in practice. One important thing about this: you need to compile these examples, because we are using signals. If you use go run it will not work, because you will be sending the signal to the go run command, not the actual binary that is running.
Running it, you see it prints the process ID. I send it the TERM signal with kill: it says it's starting, sleeps for three seconds, and then completes. If I do this in a for loop, to simulate a stream of events or signals being triggered, the idea is the same: each event is added to the buffered channel, the range loop waits for those messages, and a goroutine is launched for each one. And if I do the same with, let's say, 15 signals, you will see something similar, but at some point it stops and then continues: we reached the maximum capacity of the buffered channel, so no more messages can be queued until the workers drain it.

This is one way to do it, and I wanted to describe this one in particular because, depending on your implementation, you can use a format like this when you are receiving messages through a buffered channel and that buffered channel is the one used for triggering the goroutines.
Let me show you another example, which is a little bit similar; it uses some of the other concepts I covered in previous episodes. This second example does something similar, but the idea is a little different. For this one I decided to define a buffer size as well as a number of workers, and I defined a type, just for the sake of showing a different way to do this. It also has a graceful-shutdown implementation, which again I cover in a different video; if you haven't seen it, the link is in the description.

The graceful-shutdown implementation, which I didn't mention previously, is pretty straightforward: it listens for a signal, in this case interrupt, the Ctrl-C you use locally for stopping your program, but it could be a SIGTERM or SIGQUIT triggered by, say, the orchestrator that is starting and stopping your Docker containers, if you happen to be using Docker. Don't worry too much about that part.

The important bit is the actual implementation of the scheduler. I have an initializer, NewScheduler, where I define the number of workers and the buffer size, I create a new channel used for receiving events, and at the same time I create a signal used when we decide to stop the jobs triggered by the scheduler; I will show you that in a moment, so keep it in mind.

Scrolling down a little, listenForWork is similar to what we did before. The biggest difference is that now, instead of creating a goroutine for each message received, I create the goroutines in advance, and I use another primitive of the language called select, which I also covered in a previous video in this series. Here is the cool thing: because I'm creating the goroutines in advance, I launch one goroutine per worker, and each goroutine listens for messages coming in on the message channel. Every time a message is sent, because we received a signal, it is processed by one of those goroutines.

To say it a different way: one value defines how big the message "queue" is going to be (it's not really a queue, so to speak), and the other defines how many goroutines will be processing that data. With that said, here is what happens in the end: when the closing signal is received, I close the message channel, so the second value of the receive, open, becomes false, and I call Done on the wait group. That way all the goroutines launched previously exit, and we can exit cleanly after waiting in the Exit method.

Now let's run it and see how this works. Again, you need to build it for it to receive the signals. As usual it says it's ready and prints the process ID. I send a SIGTERM with kill, and it prints that it's processing an event; it's just printing, not doing anything with the event, because this implementation, compared to the other one, doesn't sleep, it just prints "processing". The number printed next to it is the worker index, so depending on which worker receives the message, you will see it being processed by different goroutines.

Now, if I do something similar to what I did before and send 15 signals (after updating the process ID in my command), remember that I defined two arguments in my scheduler: the first is the number of workers, which you will notice go from zero to four, so five workers, and the second is the buffer, which indicates how big the buffered channel is: how many messages I can queue or accept while the workers are processing the ones they already received.
So if I send 15 signals and run it again, you will notice everything is processed, and because this time there is no sleep, no timer, the jobs complete immediately. That's the demo.

The important bit about this example, and the previous one as well, is that there is no way in Go to close or cancel a running goroutine unless you have a way to communicate with that specific goroutine. To put it differently: because each goroutine here is expecting a message through the message channel, I can stop it, because closing the message channel lets the goroutine exit. In the first example, even if I exit, there is no way for me to communicate with the goroutines that already started, so those goroutines will not exit cleanly. This is an important distinction between distributed background jobs and doing something in-memory, in-process, with the primitives of the language.

With that being said, I'm not saying this is useless; there are use cases for it. For example, processing something where it's acceptable that when one thing fails, everything fails. Or say you're processing a file, or multiple files, computing different things from the input: those results could be sent to another channel, using the pipeline pattern I described before, and then you do something with them. Keep that in mind: there are definitely use cases where you can implement background jobs using channels, the sync package, and obviously goroutines.
Let's jump into the conclusions. This is another example of using concurrency patterns, or rather using the primitives of the language, to implement something you can use in your programs: background jobs, or background processing, whatever you want to call it. The use case is there; it depends on what you're trying to do. If you need cancellation, you may need to think of a different approach, maybe a distributed queue, RabbitMQ for example, or something similar. Again, this may not apply to everything, but it may be useful for you in the end. The important bit I want to emphasize again is that a buffered channel is basically the key piece for implementing this kind of program and this kind of pattern. Well, thank you for watching, and I will talk to you next time. Any comments, any questions, let me know in the section below. See you.
Info
Channel: Mario Carrion
Views: 723
Keywords: golang, golang background jobs, golang worker pool, golang background process, golang concurrency patterns, golang design pattern, golang patterns, golang channel, golang select, learn golang concurrency
Id: sKvFXAkQqXY
Length: 14min 4sec (844 seconds)
Published: Fri Oct 01 2021