Bull Queue - Redis - Nodejs Javascript - A gentle introduction to Bull Queues

Video Statistics and Information

Captions
Bull is a Node library that implements queues on top of Redis. A queue is a mechanism that allows you to centrally manage and distribute workloads across your application. To understand this, you need to understand some of the challenges we run into with a microservice architecture. Generally, when you run your app in a microservice architecture, you have different Node.js applications running across different servers or different regions. Let's say you have a piece of work, a task, running on a particular application, and for whatever reason the application crashes, due to hardware failure, network failure, or anything else; there is no automatic way to detect that failure and reinstate the task or allocate it to another processor. This is where a queue system can be really handy. In a queue system you define a queue, and then you have producers, which are nothing but applications that add jobs to the queue, and consumers, which are nothing but processors that take the jobs and process them. If one of the applications goes down, the task is automatically allocated to another processor and the job runs all over again. This ensures reliability, and this is where a queue system can really come in handy.

In this video, let's look at how to set up a Bull queue, and we'll go through a few examples to understand what options Bull offers to control the flow of your tasks, the failover options, and all that stuff.

First we need to start a Redis server. As I mentioned earlier, Bull implements a queue system on top of Redis, so first we need to create a Redis server. I'm going to use Docker to create it, but if you don't know Docker, maybe watch some other video and come back to this one. Docker makes it really convenient to spin up and bring down the service. We can create a Redis server using this docker run command, and I'm just going to quickly explain it. First we specify the port forwarding: Redis by default runs on port 6379, and we're going to expose it on the same port on our machine. Then we specify a name for the container. We want to run this in detached mode, so we provide the -d flag, then we provide the image name used to create this container, and then we pass --appendonly yes, which is used to persist data; we can ignore append-only for now. Finally we just need to provide the password for the server, which is 123456. If I hit enter... perfect, it created this container and printed the container ID.

Now that we have a Redis server running, let's get on with the Bull queue implementation. Here I have a new Node.js project, and I'm going to first install the two dependencies, bull and dotenv, the latter to load environment variables. Once these two are installed, I'm going to create a .env file with the Redis details, which are the host, port, and password. Generally it's recommended to pass sensitive information through environment variables rather than hard-coding it, so I'm passing these values through environment variables.
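For reference, this is roughly the shape of the docker run command and the .env file described above. The container name, the environment variable names, and the way the password is supplied (via --requirepass) are my assumptions based on the description, not necessarily the exact values used in the video.

docker run -p 6379:6379 --name redis-bull -d redis redis-server --appendonly yes --requirepass 123456

# .env (variable names are placeholders)
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
REDIS_PASSWORD=123456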
I'm also going to create a git repository with a dedicated branch for each demonstration, so for demo one you'll have a branch, for demo two a different branch, and so on and so forth. For the first demonstration I'm going to create the demo-1 branch with git checkout -b demo-1. What this does is, if the demo-1 branch doesn't exist, it creates that new branch and also checks it out, so if you hit enter you will see the bottom-left corner change from master to demo-1. Okay, I'm also going to create an index.js file to write the code.

All right, in the first example we're going to look at a very basic demonstration, so we'll implement a very simple Bull queue. First we need to import bull and dotenv, call dotenv.config() to load the environment variables, and then create redis options with the Redis details; the Bull queue is going to connect to the Redis server, so we create these redis options and then define the queue. In this case I'm just calling the queue burger, again to keep it simple, and we pass in the redis options. burgerQueue is nothing but an object created from the Bull class that we imported earlier. Then we need to register the processor. Registering a processor is nothing but registering the consumer we discussed earlier, so this function is going to get executed whenever a new job is added to the burger queue. It's a very straightforward function that just logs some data to the console: we first log "preparing the burger", after four seconds we log that the burger is ready, and we call the done function to indicate that the job is finished. Finally, let's add a new job to the queue. We can do that by calling the add method, and I'm also passing in some information with this job, which is the bun, cheese, and some toppings. Again, this is a really basic example: when we run this, this piece of code adds the job to the burger queue and the processor function gets executed. So let's run it. First let me save it, and if I run it you can see it's preparing the burger, and after four seconds the burger is ready, so it works fine.
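Pieced together from the steps just described, a minimal index.js for this first demo could look roughly like the following. The environment variable names and the job data are my assumptions, not the exact code from the video.

const Bull = require("bull");
require("dotenv").config();

// Redis connection details loaded from the .env file
const redisOptions = {
  redis: {
    host: process.env.REDIS_HOST,
    port: Number(process.env.REDIS_PORT),
    password: process.env.REDIS_PASSWORD,
  },
};

// Define the queue
const burgerQueue = new Bull("burger", redisOptions);

// Register the processor (the consumer)
burgerQueue.process((job, done) => {
  console.log("Preparing the burger...");
  setTimeout(() => {
    console.log("Burger is ready!");
    done(); // signal that the job is finished
  }, 4000);
});

// Add a job to the queue (the producer)
burgerQueue.add({ bun: "sesame", cheese: "cheddar", toppings: ["lettuce", "tomato"] });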
In the previous demonstration the job worked fine, but wouldn't it be nice if you could visualize the jobs in the queue, their status, how many jobs are in the queue, and all that stuff? There is a way to do that: we can use the bull-board library. It's a free repository that you can use, and it not only offers a dashboard, it also offers some capability to reinstate failed jobs and a few other things that we'll discuss later in this video. First, let's see how to implement the dashboard using this bull-board library. Here is how you can implement the bull dashboard with bull-board. I created a new Node.js project here, and I'm not going to get too much into the details: we just install the bull-board dependencies, and in the index.js file we again create redis options, pass the list of queues that we want to visualize on the bull dashboard as an array, and then create an Express server with a route that we can open in the browser to view the bull board. I'm running this project in a separate window, and you get a URL that you can open directly in the browser to visualize the Bull queue. This is how it looks: when you open the link you will see the bull dashboard at the top along with the list of queues available. As of now we just have one queue, which is burger, and these are the different possible statuses of a job, which we'll discuss later when we talk about the lifecycle of a job. We have only the one job that we ran earlier, which is in completed status. If you click on burger, you will see different tabs over here showing the jobs that are present in each of these statuses. We only have the one job that we completed earlier, and we have the data that we passed, the options, and the logs; we'll get into these as we delve deeper into this video. So we can use the bull dashboard to visualize these jobs. I will share this code in the description box, and you can explore this repository and figure out how it's implemented, but it's really straightforward: it's just 40 lines of code and it's really simple.

Now let's move on to the second example. Before that, let's commit these changes and create a new branch. That's it, we are on the second branch. I've added the dashboard to the right so that it's easy to visualize when we run the job. In this example let's look at how we can add logs that we can view from the bull dashboard, and also how we can update the progress of the job while it runs and visualize that from the bull dashboard. First I need to modify this processor, the consumer function. The setTimeout, although it works, is a really clunky way to implement this, so I'm going to convert it into an async function and promisify setTimeout to create a sleep function that I can use inside the async function. So let's import promisify from util and then create sleep. Perfect. I'm going to create five steps to make the burger: grill the patty, toast the buns, add toppings, and so on, and I want to add some more complexity to this job, so after each step we're going to await for a second before we move on to the next step. After all the steps are finished we call the done function, and let's wrap this in a try/catch block because it's an async function; if there is an error we can catch it and report it by calling the done function with the error.

That's it, so let's run it. We should see an active job here. Awesome. If you look at the logs... okay, so now the job is completed, and you can see that the log statements are displayed in the bull dashboard. We use payload.log to write the statements to the bull dashboard, and we can use payload.progress to indicate the progress of the job. I think it finished a little too quickly; I wanted to see the progress increase from 20 to 40 and so on, so I'll change the time period to five seconds for each step. Awesome, so let's run this again. If we go to the active tab we should see a new job, and if we go to the logs you can see the progress is 20; after five seconds you will see the next log in the dashboard, which is "add toppings". Perfect. By default the view is paused, that's why you didn't see it update, but you can see after "assemble" it's at 80, and once everything is finished the burger is ready. Once it's finished it moves on to the completed status, and if you look at the logs you will see all the logs over here and the progress is 100 percent. So this is how you can add logs that you can view from the bull dashboard and report progress that helps you troubleshoot or track your jobs.
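As a rough sketch of the reworked processor with steps, logs, and progress (step names follow the description above; the exact timings and messages are approximations, not the code from the video):

const { promisify } = require("util");
const sleep = promisify(setTimeout); // Node's setTimeout has a built-in promisified form

burgerQueue.process(async (payload, done) => {
  try {
    payload.log("Grilling the patty");
    payload.progress(20);
    await sleep(5000);

    payload.log("Toasting the buns");
    payload.progress(40);
    await sleep(5000);

    payload.log("Adding toppings");
    payload.progress(60);
    await sleep(5000);

    payload.log("Assembling the burger");
    payload.progress(80);
    await sleep(5000);

    payload.log("Burger is ready");
    payload.progress(100);
    done();
  } catch (err) {
    done(err); // reporting the error marks the job as failed
  }
});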
Now let's look at a job's lifecycle. These are the different statuses that a job can hold during its lifecycle. When a job is newly added, it goes into either the wait status or the delayed status, depending on whether it's invoked immediately or scheduled for some point in the future; yes, you can also schedule a job in the future, you don't have to invoke it right away. From the wait status, if you have some kind of throttling implemented, the job stays in the wait status for a while and then goes on to the active status; if there is no throttling, the job immediately moves from wait to active. From the active status, once the job has completed processing it moves on to the completed status, and if there is an error it moves on to the failed status. If you have retry options set, which we'll discuss later in this video, the job tries again: it goes into the delayed status, then from delayed to active, and if the retry is successful it moves on to completed; if not, it repeats this cycle. Once the job is completed, it's finished. So these are the five different states a job can hold: wait, delayed, active, completed, or failed.

All right, now let's move on to the next example, and before that let's commit these changes and create a new branch. In this example let's look at the different options that we can pass when defining the queue. So far we have passed only the redis options, but there are other options we can pass to customize how the queue executes, so I'm going to change this to queue options and pass it in over here. Let's look at the documentation. This is the official documentation, and these are the different options that we can pass when defining the queue. The redis options are mandatory, because the Bull queue connects with Redis, but there are other options, like limiter, which is used to throttle or rate-limit your job processing, and prefix: all the job-related keys are stored in Redis using a prefix, which is "bull" by default, but if you want to change it you can use the prefix option. There are also default job options, advanced settings, and all that stuff. I haven't used all these settings, but in this example we're going to look at the rate limiter and see how we can throttle the processing. If you want to know more about all the possible options, I'll leave a link in the description box and you can explore them.

If you want to throttle the processing, you can pass in the limiter option. limiter takes two keys, max and duration, where duration is specified in milliseconds. If we specify max as 1 and duration as 5000, that means this queue only executes one job every five seconds. Let's change this to 10 seconds to clearly visualize how it works; with this setting the queue should only process one job every 10 seconds. Keep in mind that these queue options are only passed when defining the queue. Then, instead of adding a single job to the queue, let's add 10 jobs, so we define 10 jobs with the same payload and add them all to the queue at once. Let's run it. As you can see in the queue, it's running the first job and all the other jobs are currently in the waiting status. If you look at the job that's running, again it takes five seconds per step and it has five steps to execute, so let's wait for it... eighty percent... finished. Now the burger is completed, and the next one takes over. So this is how we can use the limiter option to throttle job processing.
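Based on that description, defining the queue with a rate limiter and adding ten jobs at once might look something like this sketch (one job every 10 seconds, with illustrative payloads):

const queueOptions = {
  redis: {
    host: process.env.REDIS_HOST,
    port: Number(process.env.REDIS_PORT),
    password: process.env.REDIS_PASSWORD,
  },
  // Throttle processing: at most 1 job every 10 seconds
  limiter: {
    max: 1,
    duration: 10000,
  },
};

const burgerQueue = new Bull("burger", queueOptions);

// Add 10 jobs with the same payload at once
for (let i = 0; i < 10; i++) {
  burgerQueue.add({ bun: "sesame", cheese: "cheddar" });
}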
In this example let's look at what happens when job processing fails. As always, let's commit these changes and create a new branch. Let's give step two a 25 percent chance of failure, so 25 percent of the time the toast gets burned and the job fails. What I want to see is what happens when the job fails at step two and how that is reflected in the logs. We need to specify the retry options; we can specify retry options when we add the job to the queue. There are several other options, which we'll discuss in future examples, but for now let's set attempts to 3, which means the job is tried up to three times in total when there are errors. Okay, now let's run it. Again, I've added 10 jobs to the queue, and I also deleted the existing jobs so that we can start fresh. So let's run this. The third job failed on the first attempt, and if you look at the logs, whenever the job fails you can see the logs start from the beginning. That means whenever there's an error, the job runs again: the function runs again from the beginning and all the steps are executed again. So it's important that you design your functions in an idempotent way. What I mean by idempotent is that no matter how many times we run the job, it should produce the same side effects. The job can fail at step two, step three, or at any step, but when it executes again it should produce the same result; the job failing at a particular step shouldn't leave behind any extra side effects.

As always, before moving on to the next example, let's commit these changes and create a new branch. In this example let's look at some events that Bull lets you listen to in your Node application so you can take action if required. For example, you may want to know when a job has completed or when a job has failed; you can listen to these events and get notified or take some action in your Node application. I'm going to keep this as it is, with the 25 percent failure rate as in the previous example, and while adding jobs to the queue I'm going to add another option, which is jobId. Now we can add event listeners: we can add a listener to the failed event and the completed event and see which burgers have completed and which ones have failed. Here I added a listener for when a job is completed, which just logs to the console that the burger with that ID has completed, and I'm going to repeat this for the failed event and log to the console that the burger failed. I'm going to add 10 jobs as always, so let's run this and see what happens. You can see which burgers failed; Burger 3, for example, completed successfully. I just wanted to demonstrate that you have these events you can listen to and take action if required, or you can just log to the console, or if you have any requirement where you need to listen to these events you can do so. Here is the list of events; again, I'll leave a link in the description box so you can check out all the different events, and Bull supports event listeners for all of them.

Now, one thing to keep in mind is that you can listen to either a local event or a global event. What I mean by a local event is that in the previous example we were listening on the same Node.js application where the processor is registered, but in a microservice architecture you could have ten, or any number of, instances of your application running. If you have multiple processors registered to the same queue and you want to listen to all of them, you can set the global flag; if you do, you're listening to all the processors registered to this queue, no matter which application runs them.
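A sketch of what those job options and event listeners might look like, following the description above (the log messages and the jobId scheme are placeholders of mine):

// Add jobs with retry attempts and a custom jobId
for (let i = 1; i <= 10; i++) {
  burgerQueue.add(
    { bun: "sesame", cheese: "cheddar" },
    { attempts: 3, jobId: i }
  );
}

// Local listeners: fire for jobs processed by this application instance
burgerQueue.on("completed", (job) => {
  console.log(`Burger ${job.id} completed`);
});

burgerQueue.on("failed", (job, err) => {
  console.log(`Burger ${job.id} failed: ${err.message}`);
});

// Global listener: fires for every processor registered to this queue,
// regardless of which application instance ran the job
burgerQueue.on("global:completed", (jobId) => {
  console.log(`Burger ${jobId} completed (global event)`);
});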
So if you just want to listen locally, you listen to the plain event, but if you want to listen globally, you prefix the event name with global.

In this example let's see how to add a recurring job, and we'll look at a few other options that we can pass while adding a job to the queue. As always, let's commit these changes and create a new branch. First I'm going to delete these event listeners, and then I'm just going to add a single job to the queue, because we're going to make it a recurring job, so a single job is enough. We can pass the recurring options while adding the job using the repeat option; inside repeat it takes a key called cron, and inside this cron key we specify the schedule in cron syntax. Let's look at some examples of the cron syntax: if you want a job that runs every 10 seconds, this is how you do it; if you want to run the job every minute, this is how you do it, and this one runs every minute on the first second; if you want to run the job every hour on the 15th minute, this is how you do it; and if you want to run the job every five minutes, this is how you do it. You can check out the syntax, it's really straightforward. Let's add a job to the queue that runs every minute on, let's say, the 10th second, and run it to see how this works. If you look at the queue, you can see that by default the job stays in the delayed status, and when it's triggered it goes to the waiting or active status. This one is set to trigger at the 6th minute, 10th second, so let's wait for that. Awesome, when it's triggered it moves on to the active status and the job is executed. You can see it failed on the first two attempts, and if it fails on the third attempt, which it did, it gets pushed to the failed status. From the failed status you can either clean the jobs or retry them; remember we set a 25 percent chance of failure, so that is the reason it got pushed to the failed status. You can delete this, and you still have a job sitting in the delayed status: the next run will be triggered at the 7th minute, 10th second, and once that one is pushed to the queue, another is scheduled for the 8th minute, 10th second. So this is how you add a recurring job to the queue.
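Here is a rough sketch of adding a repeatable job with a cron expression, along with my reading of the example schedules mentioned above. Bull's cron strings accept an optional leading seconds field; the exact expressions below are assumptions, not copied from the video.

// Example cron expressions (seconds minute hour day-of-month month day-of-week):
//   "*/10 * * * * *"  -> every 10 seconds
//   "0 * * * * *"     -> every minute, on the first second
//   "0 15 * * * *"    -> every hour, on the 15th minute
//   "0 */5 * * * *"   -> every 5 minutes

// Add a single recurring job that runs every minute on the 10th second
burgerQueue.add(
  { bun: "sesame", cheese: "cheddar" },
  {
    attempts: 3,
    repeat: { cron: "10 * * * * *" },
  }
);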
These are all the options that you can pass while adding a job to the queue, so let's look at them really quickly. The first one is priority: you can set a priority on the job, and the smaller the value, the higher the precedence. So you could add jobs with, say, priority 10 by default, and if you want certain jobs to take higher precedence you can set their priority to 1 or something lower than 10. That's how priority works. delay is pretty self-explanatory: you can add a delay before the job gets executed. attempts we already looked at; it indicates the total number of times the job will be attempted when there are errors. repeat we also looked at; it's used to schedule a recurring job. backoff is the amount of time the job waits before it retries again. The next one is lifo: by default the order of execution is first in, first out, but if you set lifo to true, the job you add is placed at the front of the queue, so the most recently added job gets executed first; it depends on your requirement. We also looked at jobId when we looked at events. Then there is removeOnComplete, which deletes the job from the queue when it's completed, and removeOnFail, which is the same thing for when the job fails. We already looked at a lot of these, and the others are also pretty self-explanatory, so check out the documentation; I'll leave a link in the description box.

That's it for this video. Let me know in the comment section if I missed something, and if you want me to cover some other topic, feel free to share your feedback. Thank you, bye.
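As a recap of the job options covered above, here is a hedged sketch showing several of them passed together when adding a job; every value below is illustrative rather than taken from the video.

burgerQueue.add(
  { bun: "sesame", cheese: "cheddar" },
  {
    priority: 1,            // smaller value = higher precedence
    delay: 5000,            // wait 5 seconds before the job becomes runnable
    attempts: 3,            // total number of attempts when there are errors
    backoff: 10000,         // wait 10 seconds between retries
    lifo: false,            // keep the default first-in-first-out order
    jobId: "burger-42",     // custom job id, as in the events example
    removeOnComplete: true, // delete the job from Redis once it completes
    removeOnFail: false,    // keep failed jobs so they can be inspected or retried
  }
);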
Info
Channel: DeKay
Views: 9,429
Keywords: kafka, jobs, bull js, bull.js, redis queue, bull, background job, redis queue commands, redis queue delay, redis queue dashboard, redis queue cluster, redis queue fifo, redis for queue, redis queue gui, redis queue node js, redis queue monitoring, redis workers, queue email, redis docker, priority queue, delayed queue, repeatable queue
Id: FFrPE0vr4Dw
Length: 25min 3sec (1503 seconds)
Published: Sat Aug 26 2023