Real-Time Object Detection and Tracking using YOLOv8 on Custom Dataset: Complete Tutorial

Captions
Hey guys, in this lecture we will see how we can implement object detection and tracking using YOLOv8 on any custom dataset.

The dataset I will be using for this project is publicly available on Roboflow. It contains drone images of cars, trucks, motorcycles, pickups, camping cars, planes and more — a multi-class dataset with around 10+ classes and roughly 4,680 images, so it is quite a good dataset.

As far as the health check is concerned, the car class is overrepresented while all the other classes are underrepresented. When choosing a dataset for a project, you should keep in mind that it needs to be balanced. This dataset is not balanced, but since we are only using it for a tutorial that is fine; if you are implementing a real project or a freelancing project, make sure the dataset is balanced. Here the car class has around 8,389 annotations while the other classes have very few: out of roughly 20,554 annotations, about 8,000 belong to cars. With 4,680 images that works out to around four to five annotations per image, and most of those annotations belong to the car class, so the dataset is clearly imbalanced. So whenever you pick a publicly available dataset from Roboflow, Kaggle or anywhere else, please make sure it is balanced. We will still use this dataset for this tutorial, so let's move on.

This is the Colab notebook in which I will explain the whole process flow, so please watch the complete tutorial so you don't miss anything. And this is the GitHub repo we will be using for this project: YOLOv8 with DeepSORT object tracking. The complete steps to run the code are provided there, and we will follow the same steps in our Colab notebook: first clone the repository, then go into the cloned folder, install all the dependencies (the required libraries), change into the detect folder, and — since we are implementing object tracking with DeepSORT, a state-of-the-art multi-object tracking algorithm — download the DeepSORT files. After that we can download a sample video for testing from Google Drive and run the script with python predict.py. All the instructions given there apply when you run YOLOv8 with the pretrained model, which is trained on the MS COCO dataset.
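To make those steps concrete, here is a minimal sketch of what the corresponding Colab cells could look like. The repository URL, the install command, the Google Drive file IDs and the file names are assumptions or placeholders, not values confirmed in the video.

    # Clone the repository and move into it (URL assumed)
    !git clone https://github.com/MuhammadMoinFaisal/YOLOv8-DeepSORT-Object-Tracking.git
    %cd YOLOv8-DeepSORT-Object-Tracking

    # Install the dependencies the scripts need (install command assumed)
    !pip install -e '.[dev]'

    # Go to the detection code
    %cd ultralytics/yolo/v8/detect

    # Get the DeepSORT files needed for tracking and unzip them
    # (the Drive ID and archive name are placeholders)
    !gdown "https://drive.google.com/uc?id=<DEEPSORT_FILES_ID>"
    !unzip -q deep_sort_pytorch.zip

    # Download a sample video (placeholder ID) and run detection + tracking
    # with the pretrained MS COCO weights
    !gdown "https://drive.google.com/uc?id=<SAMPLE_VIDEO_ID>"
    !python predict.py model=yolov8l.pt source="test.mp4"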
Here, however, we are using a custom dataset and fine-tuning that pretrained model, so the number of steps to run the code increases a bit. Let's go through each step involved in implementing YOLOv8 detection and tracking on a custom dataset.

First of all, before anything else, make sure the runtime is set to GPU: go to Runtime, choose Change runtime type, and select GPU. Then we need to import the required libraries. The first import is from IPython.display import Image; we need this library whenever we want to display an image inside the notebook. Let me show you where it is used: I use it to display the confusion matrix, to show the training and validation losses, and to view the model predictions on a validation batch — so any time we want to display an image in the Colab notebook, we use this library.

I have run this notebook before, so let me clean up all the old files, reconnect, and start from the very beginning with a fresh GPU runtime. Now we clone the GitHub repo. Once it has been cloned, the next step is to check the current directory: pwd prints the present working directory, which at the moment is /content. I want to redirect the notebook into the cloned repository, so I copy its path and change into it with %cd.

The next step is to install all the dependencies — basically all the required libraries needed to run the scripts. If you skip this cell you may hit errors such as "Hydra is not installed" or "numpy/matplotlib is not installed", so it is necessary to run it to avoid any library issues. The YOLOv8 repository contains code for segmentation, detection and classification, but in this project we are only performing detection with tracking on a custom dataset, so we move to the folder we actually need: go to the yolo/v8 folder, open the detect folder inside it, copy its path, and change into it.
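A rough sketch of the setup cells described so far, assuming the same repository layout as above (the paths and the install command are assumptions):

    # Confirm a GPU is attached (Runtime -> Change runtime type -> GPU)
    !nvidia-smi

    # Imported now, used later to render the confusion matrix, loss curves and
    # validation-batch predictions inline, for example:
    #   display(Image(filename="runs/detect/train/confusion_matrix.png", width=600))
    from IPython.display import Image, display

    # pwd shows the present working directory; %cd changes it to the cloned repo
    %pwd
    %cd /content/YOLOv8-DeepSORT-Object-Tracking

    # Install the dependencies and move to the detect folder,
    # exactly as in the sketch above
    !pip install -e '.[dev]'
    %cd ultralytics/yolo/v8/detect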
Now, since we are implementing object detection and tracking on a custom dataset, we need to import the dataset from Roboflow. To do this you just need to sign in to your Roboflow account — as you can see, I have already logged in. Then click Download this Dataset, select a format (I am selecting the YOLO v5 PyTorch format), choose "show download code", click Continue, copy the generated snippet, paste it into the notebook cell, and run it. It may take one or two minutes to download the dataset on your side; it comes down as a zip file and is unzipped automatically once the download completes. Let's wait a few minutes... and there we go — it has been unzipped automatically and we now have the train, test and validation folders.

So we have downloaded the dataset and we are in the detect folder. Next we need to implement object tracking. For tracking we are using DeepSORT, a state-of-the-art object tracking algorithm, so we need to download the DeepSORT files into our directory as well. They come down in zip format, so after downloading we unzip them.

Now we need to train our custom model. I have already trained the model; if I started training here it would take one to two hours, so instead of training again I have saved the weights file and will simply import the weights.
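As a rough sketch, the dataset download and the training step might look like the cells below. The Roboflow snippet is generated per account, so the workspace, project, version and API key here are placeholders, and the training call uses Hydra-style overrides with example values — treat the exact argument names and numbers as assumptions.

    # Download the dataset using the snippet Roboflow generates
    # (workspace/project/version/API key are placeholders)
    !pip install roboflow
    from roboflow import Roboflow

    rf = Roboflow(api_key="<YOUR_API_KEY>")
    project = rf.workspace("<WORKSPACE>").project("<PROJECT>")
    dataset = project.version(1).download("yolov5")   # YOLO v5 PyTorch format

    # Fine-tune the pretrained (MS COCO) YOLOv8 model on the custom data.
    # Training takes one to two hours on a Colab GPU, which is why the video
    # downloads previously saved weights instead of retraining here.
    !python train.py model=yolov8l.pt data={dataset.location}/data.yaml epochs=50 imgsz=640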
These are all the files we usually get in the training run folder, so let's check which files we have there. One of them is the confusion matrix. The confusion matrix is a chart that shows how our model handles the different classes. Let me download the image and walk through it.

Take the camping car class as an example. About 67% of the time our model correctly detects that the object is a camping car. About 10% of the time we do get a bounding box, but the camping car is incorrectly classified as a plain car (and yes, there is a difference between a camping car and a car). And roughly 24% of the time the model detects nothing at all — we get no bounding box, and the camping car is treated as background. So: 67% correct detections, 10% misclassified as car, and about 24% missed entirely. That is what the confusion matrix tells us.

Next we have the training and validation losses; the important ones here are the box loss and the cls (classification) loss. We also look at the model predictions on a validation batch. These images are not used for training, so it is always good to have a look and see how the model is behaving — and it is behaving quite well.

Since I did not run the training script here, I am downloading the weights from Google Drive — the best weights file, best.pt — and that download has completed. I have also validated the custom model: you can see the mean average precision at an IoU of 50% (mAP@50) and over IoU 50–95% (mAP@50-95) for all the classes. It is just okay — not very good, but fine.

Now let's do inference with the custom model. I am downloading a demo video from my Google Drive to test how accurately the model performs. Once we have the demo video, we take our best weights, best.pt, and run the prediction script. It may take one or two minutes, so let's wait until it finishes and then check the results. The script has run successfully, so let's display the demo video; it may take a few minutes to get ready. And here it is: a drone recording of cars and trucks on a highway. Let's download it and have a look — the results are quite good: a unique ID is assigned to each object and we are able to track the objects as well.
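Putting the inference part together, a minimal sketch of those cells could look like this. The Drive ID, the video names, the output path under runs/, and the assumption that val.py accepts the same style of arguments are all unconfirmed; the ffmpeg/HTML step is just one common way to play the result inside Colab.

    # Fetch the fine-tuned weights saved earlier (Drive ID is a placeholder)
    !gdown "https://drive.google.com/uc?id=<BEST_WEIGHTS_ID>"

    # Optional: validate the fine-tuned model on the validation split
    # (uses the dataset object downloaded in the earlier cell)
    !python val.py model='best.pt' data={dataset.location}/data.yaml

    # Run detection + DeepSORT tracking on the demo video with the fine-tuned weights
    !python predict.py model='best.pt' source='test.mp4'

    # The annotated video is written under runs/ (exact folder depends on the run
    # name); re-encode it to H.264 so it can be embedded and played in the notebook
    !ffmpeg -y -loglevel error -i runs/detect/train/test.mp4 -vcodec libx264 result.mp4

    from base64 import b64encode
    from IPython.display import HTML

    video_data = open("result.mp4", "rb").read()
    data_url = "data:video/mp4;base64," + b64encode(video_data).decode()
    HTML(f'<video width="640" controls><source src="{data_url}" type="video/mp4"></video>')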
Let's test on another video as well. This one is called test2, so let's run it too. It may take a few more minutes, so we wait until the process completes. By the way, this Colab file will also be available in the same GitHub repo, so you can grab it from there. Okay, the script has run successfully again, so let's see what output we get for test2. The output video for this second test case is ready as well, so let's download it and play it. The results are quite good — in some cases they could still be improved, but overall they are fine.

So that is it: in this tutorial we have implemented object detection and tracking using YOLOv8 on a custom dataset. I will be sharing this Colab notebook file with you as well. If you haven't watched my other videos on YOLOv8 with DeepSORT tracking and on how to run YOLOv8 in Colab, do check them out on my channel. Thank you very much for watching this video.
Info
Channel: Muhammad Moin
Views: 10,071
Keywords: yolo, yolov8, object detection, object tracking, machine learning, deep learning, computer vision, artificial intelligence
Id: FPH58P89p1E
Length: 19min 47sec (1187 seconds)
Published: Sun Jan 22 2023