Object Tracking using YOLOv8 on Custom Dataset

Captions
Hello everyone, this is Aarohi, and welcome to my channel. In this video I'll show you how to perform object tracking on a custom dataset using YOLOv8 and different object tracking algorithms.

Whenever we say tracking, there is a series of steps you have to follow to build a custom object tracker. The first thing is detection: we detect the object in a frame, then keep locating that object in the following frames, uniquely identifying it by assigning it an ID, and we keep tracking it until the object leaves the frame.

For this whole workflow, there are three steps we follow. First is object detection: we will create an object detector to detect our custom object. Second, we create a feature extractor model. Why do we need a feature extractor? A Kalman filter tracks an object based on its velocity and motion, but with the Kalman filter alone we sometimes face problems like occlusion and ID switching, and we cannot fully solve these problems with the Kalman filter. So we also use a feature extractor, which helps the tracker recognize the appearance of the object. In short: first we detect the object with the detection model, then we apply the feature extractor and the Kalman filter on top of it, and our tracker is ready. So let's start today's class with the dataset.
First, let's see which custom class we are working on and how to prepare that dataset. I'm starting with the feature extractor: I'll show you how to create a dataset for the feature extractor to work on a custom class.

Here you can see I have two folders, train and val. Inside the train folder there are folders boat1, boat2, boat3, and so on. For today's class I'm working on a single custom class, and that class is boats: we are going to track boats. Let's open the first folder; it has different images of boats, and every image is cropped to the area where the object is actually present. Let me show you how an image looked before cropping: this is an image of a boat, but you need to crop it to the region where your object actually is, and that's what I've done for every image here.

One thing to note: when we prepare a dataset for an object detection model, we just collect random images. Suppose you want a detector that can detect boats: you collect random boat images from the internet and annotate the area where the object is present, and there is no need for the images in your dataset to be related to each other. But here, inside one folder, all the images belong to one specific boat. So what you can do is download a video of the class you want to track. Suppose you want to work on a table class: just download a video of a table from
the internet, save the different frames of that video in one folder, and give that folder one ID. In my case, boat0 is the label for the boat frames in this folder, and you have to crop them. Now let's open another folder, boat2: here I have a different boat, but all the images in this folder belong to that single boat only. This is how you need to prepare your dataset; the feature extractor will be trained on it. In the same way, I have data for 53 different boats, each image cropped to the area where the object is actually present.

Now let's open the val folder. The val folder needs exactly the same classes as the train folder: 53 different boat classes were in my train folder, so the same 53 classes are in my validation folder too. One difference: in the validation folders we have fewer frames. For training we use more frames, because we want the algorithm to learn the object's features; the validation data needs fewer frames. So this is how you prepare your data, and this dataset will go to the feature extractor model.

Now, for feature extraction: if you have studied image classification, you know that we extract features, then have a dense layer, and after that we perform the classification. You can create your own feature extractor, and that feature extractor model should be a PyTorch model.
You can take help from the PyTorch official website: they show how to create your feature extractor model and how to fine-tune it, and from there you can see how to use your own feature extractor. Let me show you how I implemented it. In this folder I have the feature extractor script. Initially I import all the required modules, then I load the dataset, perform data augmentation on it, and normalize the images, because that helps you get better results. This part is for my train folder, this part is for my validation folder, and this is the dataset directory; its name is `dataset`. Under this `dataset` folder I have train and validation folders: inside train are the folders used to train the algorithm, and inside validation are the validation images.

Once the dataset is prepared, I train the model. This is the function I created for training; it will train your algorithm, and again, I picked this function from the PyTorch official website. Then I use that model as a feature extractor. The one thing you need to take care of is this flag: it should be False here. If you want to use your model as a fixed feature extractor, requires_grad should be False for the pretrained layers. Then I train the algorithm for 25 epochs, with the training held in a variable, and once training is done I save that model: the `model_conv` variable carries the trained model, and I save it under this name. So now we have a feature extractor model. Let's visualize some predictions to see whether our model trained properly.
For that, I created a function which shows us the predictions. Here you can see this feature crop is predicted as part of boat51, and this one as part of boat5. Let's open dataset/train/boat5 and check: yes, this is boat5, and the crop we have is indeed a part of it. In the same way, this one belongs to boat30 according to our model, so let's open boat30 and check: yes, we have this boat, and the crop belongs to that class. So the feature extractor model is ready and working fine.

One part is done. We will provide this feature extractor model to our tracking algorithm; it will help the tracker recognize the appearance of the object. Now the next thing is object detection: we need to build our object detector on our custom dataset, with our custom class, boats. For that we are using YOLOv8 to train the algorithm. Let me open the notebook; this is the third part, boat detection. It's very simple: you need YOLOv8, and you can install it by just writing `pip install ultralytics`, and your YOLOv8 environment will be ready. After that you just need to run the training. So I import YOLO, then call `model.train`, providing `data.yaml`. Now what is this data.yaml? This is my data.yaml file: the number of classes is one,
and the class name is boat_ship_yacht. I used images of boats, ships, and yachts and treated them as one class, and this is how your data.yaml file looks. Now let me show you the dataset for training the object detection model. Inside it we have train and valid folders; inside train we have images, and inside labels we have the annotation files in text format. This is how YOLO expects a dataset: for every image there is a separate text file containing that image's annotation details. I used 634 different images to train the custom object detection model, and I have shown you my data.yaml file as well.

Once your training is done, you will find the run details, and inside the weights folder you will have best.pt. This model will be given to the object tracker to detect the objects, because whenever we want to track objects, we detect them first, and for detection we are using YOLOv8. So this model will help the tracker detect the objects, and the feature extractor model will help the tracker recognize their appearance. The detection part is done too. If you want to see the output, let me show you: this is how it is detecting the boat/ship/yacht class. So our YOLOv8 model is ready.

Now the next part is tracking. For tracking, I'm picking the code from this GitHub repo, so clone this GitHub repo. To recap what we have done so far: the object detection code is just four lines — do `pip install ultralytics`, and you can visit the official YOLOv8 GitHub repo and get the
code from there. The feature extractor code, as I told you, you can get from the PyTorch official site, and the third part, object tracking, you can get from this repo. Clone the GitHub repo; after executing a few setup lines, your object tracking environment will be ready. After cloning, you will get a yolov8_tracking folder; inside it you will have all the tracking files. After that, install the dependencies.

Now let's run the tracker. The command is `python track.py`, and you need to provide the YOLO weights. So what I have done: run `python track.py`, provide the source, which is the video I want to perform tracking on (if you provide source 0 it will open a webcam, and if you want to use an RTSP stream you can provide it here), and then the YOLO weights. Remember we created a best.pt file; I renamed it best_boat.pt. This is the model we got from YOLOv8 object detection, trained on boats. The ReID weights will be the feature extractor model we created, so we provide its path. After that we add the save-video flag; if you don't, your tracking results will not be stored.

Now let's see the results, under runs/track. You can see boat_ship_yacht with the detection confidence percentage, and each detection also has a unique ID.
This boat, the ship, has ID 1, and over here we have ID 3; these are the unique IDs provided by our tracker. So guys, this is how you can build your own custom object tracker.

Inside track.py there are three different tracking methods: StrongSORT, OCSort, and ByteTrack. Let's open the track.py file: the tracking method I have used here is StrongSORT. If you want to use a different tracking method, say OCSort or ByteTrack, you can use the tracking-method flag: just pass the name of the method you want — bytetrack for ByteTrack, strongsort for StrongSORT, or ocsort. Under the tracking sources you can see that source 0 is for the webcam, and images, videos, and RTSP streams can all be provided in the same way.

So that's it guys, this is how object tracking on a custom dataset works. If this video was helpful, please like, share, and subscribe to my channel. Thank you for watching!
Info
Channel: Code With Aarohi
Views: 11,923
Keywords: YOLOv8, Object Tracking, deepSORT, ByteTrack, StrongSORT, Object Detection, Tracking, Detection, python, pytorch, AI, Artificial Intelligence, Deep Learning, objectdetection, objecttracking
Id: 3wUABl3KInQ
Length: 21min 1sec (1261 seconds)
Published: Fri Jan 27 2023