Real-Time Object Detection, Tracking, Blurring and Counting using YOLOv8: A Step-by-Step Tutorial

Video Statistics and Information

Captions
Hi guys, in this video we will see how we can blur detected objects using YOLOv8 and count the number of objects in each frame. As you can see over here, we are printing the total number of objects of each class in each frame — for example, the total number of cars and the number of trucks in the current frame. So we are focusing on blurring the detected objects and counting the number of objects in each frame, and along with this we are also implementing object tracking with IDs and trails. In short, in this video tutorial we will cover object detection, tracking with IDs plus trails, blurring the detected objects, and counting the total number of objects in each frame. So let's move towards the implementation part.

Okay, so this is the GitHub repo of YOLOv8 with DeepSORT object tracking. I have made a video tutorial about object tracking previously — if you haven't watched it, go to my YouTube channel and check the playlist; I will add the link in the description section as well. You just need to open this GitHub repo (I will add this link in the description section too), click on the Google Colab file, and open it in a new tab.

For blurring the detected objects I will be using the OpenCV function cv2.blur — it's only a few lines of code. So first of all we will focus on implementing object detection plus tracking with IDs plus trails, plus blurring the detected objects. We need to set the runtime type to GPU (Runtime > Change runtime type > GPU) and then click Connect. It's very simple and we will discuss it step by step, no worries. While it's connecting, just wait a few seconds, and then we can run the scripts. So here I'm just cloning the GitHub repo — the repository is being cloned over here — and then we are
setting the current directory. Currently it is set to /content, so running this cell sets the current directory to the cloned repository. Next we install all the dependencies — all the required libraries needed to run this script — so we will not face any library issues once this cell has run. Using YOLOv8 we can perform segmentation, detection, and classification, but in this video we are focusing on object detection, tracking, blurring, and counting the number of objects in each frame. So we need to move into the detect directory, because we don't need to perform segmentation or classification — just copy that path and paste it over here, and we are redirected to that directory.

As we are implementing object tracking using DeepSORT, we also need to download the DeepSORT files and unzip them. So here I am just downloading the DeepSORT files. You can also choose other multi-object tracking algorithms such as SORT, ByteTrack, or Norfair, but as far as I have tested them, DeepSORT performs best, and it also covers issues we faced with the SORT algorithm. I have downloaded the DeepSORT files, but as you can see they are in zip format, so I need to unzip them — I'm unzipping these files over here. The files are unzipped now, and you can see the DeepSORT folder over here.

Now we need to download a sample video for testing from Google Drive, so let's download it. Now let's run the script — currently the script will only perform detection and tracking; it will not blur the detected objects. So let's run this first, and then we'll make some amendments in the code and implement the object blurring as well. It might take a few minutes, so please bear with me.
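The setup steps just described amount to roughly the following commands. This is a sketch only: the repository URL, the detect-directory path, and the Google Drive file name are placeholders standing in for the actual links in the video's description, not the real ones.

```shell
# Clone the YOLOv8 + DeepSORT tracking repo (placeholder URL) and enter it.
git clone https://github.com/<user>/<yolov8-deepsort-repo>.git
cd <yolov8-deepsort-repo>

# Install all the required libraries so the scripts run without library errors.
pip install -e .

# Move into the detect directory (we don't need segmentation/classification;
# the exact path depends on the repo layout).
cd ultralytics/yolo/v8/detect

# Download the DeepSORT files from Google Drive and unzip them
# (gdown is a common CLI for Drive downloads; the file name is a placeholder).
gdown "<deep-sort-drive-file-id>"
unzip <deep_sort_files>.zip
```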
So let's pause the video and wait until the cell has run completely, and then see the results. Okay, so move towards this cell — it displays the output video. It might take a few more seconds, so let's wait and see. Here we have the output demo video. Let me download it so it plays a bit more smoothly, and play it — let's check it out. You can see that we have implemented object detection and tracking with IDs plus trails, but we have not yet blurred the detected objects. So let's do that as well.

For this, just open predict.py. Before writing the code to implement the blurring, first of all we need to know the coordinates of the bounding box around the detected object. Let me explain it in a better way — let's open the frame in Paint, for example. Say we need to blur the object inside this bounding box — for example, this car. First of all we need to know the coordinates of this bounding box: the top-left vertex (my handwriting is very poor, I know — this is the top-left vertex) and the bottom-right vertex. If we know these two points, we know where the bounding box is, and then we can blur the object inside it.
So let's open predict.py. If we go down here and check this out, this xyxy variable contains the coordinates of the bounding box: xyxy[0] is x1, xyxy[1] is y1, xyxy[2] is x2, and xyxy[3] is y2. So here we get the coordinates of the bounding box. What are (x1, y1) and (x2, y2)? Let me open another image in Paint — the same thing applies here. This point, the top-left vertex, is (x1, y1), and this point, the bottom-right vertex, is (x2, y2). So now we have the two corner points, which means we have the coordinates of the bounding box.

Now that we have the coordinates of the bounding box, we need to blur the object inside it. To blur the object inside the bounding box, we first need to crop the image so it only contains the area of the bounding box. So let's create a variable crop_obj and write im0 — im0 is basically the current frame; the video is passed in frame by frame — indexed as im0[int(xyxy[1]):int(xyxy[3]), int(xyxy[0]):int(xyxy[2])]. What this means is that we are
just cropping the frame to the area of the bounding box — in simpler terms, we are slicing the image with y1:y2 on the rows and x1:x2 on the columns. So that's all we are doing here; hopefully that makes it easy for you.

Now that we have the cropped bounding-box region of the detected object, we blur it using cv2.blur, passing crop_obj and the blur ratio. The blur ratio controls the strength of the blur — if you keep the value high it will blur the region completely, so you can set it as per your requirements. I'm just setting it to 20 (it ranges from 0 to 100 here). Then we write the blurred crop back into the frame.

So now let's save this and run the script to check whether it works or there is an issue. Oops — I made a small mistake over here; let me correct it, save, and run this part of the script again. It might take a few more minutes, so let's pause the video and come back when it's done. Once the cell has run, let's check the demo output video — I'll name the output test1.mp4. It might take a few minutes, so let's wait and
check the output demo video — if you download it from here, you might get to see it a bit earlier. So here we have the results for the object blurring. Let's download the video and open it. Cool — we are able to blur the detected objects, and you can see that the results are quite good. So we have done the object blurring, on top of the object detection and tracking with IDs and trails we had already implemented.

Now let's move towards counting the number of objects in each frame — the total number of objects, like 10 cars, 14 trucks, or 15 bikes. For counting the objects in the current frame I will not be writing the code live; I will just upload it here and then explain it to you, because it's not a tiny amount of code and writing it out would make the video pretty long, so I'm just skipping that for you. This particular file already contains the object-blurring code which we added previously — let me show you: this is the blurring code I have already explained to you.

So now let's discuss the code for counting the total number of objects in each frame. Here is the code: we are getting the sum of detections per class, and here we are creating a dictionary by the name found_classes, which contains each object name and its count — the number of times it has appeared in the current frame.
Then we call a function named count, passing found_classes. So found_classes is a dictionary whose key is the object name and whose value is the count — the number of times that object has appeared in the current frame.

Let's see what the count function does. In the count function we are just extracting the keys and values from the dictionary: the key contains the object name, and the value is the count — like the car has appeared five times or the bus has appeared three times in the current frame. Then we draw a rectangle and put the text with the number of times each object has appeared, like 13 or 14 cars. This part is just making the layout adjustments: the car count appears on top, below that the bus, below that the truck, and below that the traffic light — these are all the UI adjustments I am making over here.

So now let's run this. Oh, there's an error — let me fix this up, just give me a minute. Okay, now let's run this and check what our results look like. It might take a few minutes; I will be back when the running process is complete. The cell has run completely, so let's move ahead and check it out over here. Okay, just run this cell and let's see what results we get — it might take a few minutes, so let's wait for it to run.
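The counting logic described above can be sketched like this. It's a minimal sketch under stated assumptions: the function names `count_objects` and `format_labels` are mine, and the class-ID-to-name mapping is a hypothetical subset of the COCO classes a pretrained YOLOv8 model would use; only the found-classes dictionary idea (object name → per-frame count) comes from the video.

```python
from collections import Counter

# Hypothetical class-ID -> name mapping (a subset of the COCO classes
# a pretrained YOLOv8 model would predict).
CLASS_NAMES = {2: "car", 5: "bus", 7: "truck", 9: "traffic light"}

def count_objects(detected_class_ids):
    """Build the found-classes dictionary: object name -> the number of
    times it appears in the current frame."""
    return dict(Counter(CLASS_NAMES.get(c, "unknown") for c in detected_class_ids))

def format_labels(found_classes):
    """Build the overlay strings drawn in the corner of the frame,
    one line per class, stacked top to bottom."""
    return [f"{name}: {n}" for name, n in found_classes.items()]

# One frame's detections: four cars, one bus, one truck.
counts = count_objects([2, 2, 5, 2, 7, 2])
labels = format_labels(counts)
```

In the actual script these labels are then drawn on the frame with a filled rectangle behind each line (cv2.rectangle plus cv2.putText), which is the UI-adjustment part of the count function.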
It has successfully run, so let's check the demo output video. Here we have the demo output video, and you can see that we are able to detect the objects, plus we are also implementing tracking with IDs and trails — this is the unique ID of each object. So we are able to implement object detection and tracking, plus we are able to blur the detected objects, and this is the count of the total number of objects in each frame. The count changes as the video moves to the next frame. Here we have only cars — you can see there is no other object. Here we have three buses — one, two, three — so it shows three buses; it shows the total count of each object class in each frame. You can see we have one truck, so it shows one truck as well. So this is what we've got, and the results are very good: we are able to implement object detection and tracking with IDs and trails, plus object blurring and the total count of objects in each frame.

I hope you have learned a lot from this video, so do share it with your fellows. I will be sharing the complete GitHub repository for this project in the description section as well, so you can check it out and explore it further. Thanks for watching, bye.
Info
Channel: Muhammad Moin
Views: 7,253
Id: QWrP77qXEMA
Length: 24min 31sec (1471 seconds)
Published: Sat Jan 21 2023