Object Tracking using YOLOv8 on Custom Dataset - Google Colab

Video Statistics and Information

Captions
Hello everyone, this is Aarohi and welcome to my channel. In this video I'll show you how to perform object tracking on a custom dataset using YOLOv8 and different object tracking algorithms, and I'm going to implement today's example on Google Colab. Here you can see I have this yolov8_tracking folder in my Google Colab, and inside it I have three folders.

Whenever we talk about object tracking, there is a series of steps to follow. First is object detection: we detect the object in a frame, and then in the following frames we track that object by knowing its velocity and motion, and we keep tracking it until it leaves the frame. Then we use one more thing, a feature extractor. Why do we need a feature extractor? Because the feature extractor helps the tracker know the appearance of the object. For tracking we use a Kalman filter, which tracks the object on the basis of its velocity and motion, and alongside it we use the feature extractor, which focuses on the object's appearance. So today's work is divided into three parts: first, object detection on a custom class; then we'll create a feature extractor for that custom class; and then we'll provide both of these models to our tracker and build a custom tracker.

Let's begin. The first folder contains the object detection code, and I'm doing the detection with YOLOv8. Let's open it and see. Here you can see the boats_detection_dataset. My custom class is boats; we are working on one class, and that class is boat, and now I'm building a dataset. Inside it we have a train and a valid folder. Inside train we have images and labels. The images folder has all the required images, images of different boats, and inside the labels folder you will see the annotation files, which are in text format. Every image has its own annotation file, and the file looks like this. Let's open one: 0 is the class (we have only one class), and the rest is the annotation detail, the bounding box coordinates for that class. This is how you have to prepare your custom dataset, and for preparing it you can use the LabelImg tool.

Once your dataset is prepared, we need a data.yaml file. This is my data.yaml file. What we are telling it is: this is the path of the train images (my train images are at this location), the validation images are at this location, the number of classes is one, and this is the name of that class, boat_ship_yacht. Using this data.yaml file we are going to train our YOLOv8 algorithm. This is my code: I simply mount the Drive, then with this command you install the YOLOv8 environment on your Colab, and after that you just need to write these four lines to train your custom model. Here we provide the data.yaml file I've just shown you, and here we give the path of the custom dataset, so we provide that file and train the algorithm.
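As a rough sketch of the training step just described: the video drives YOLOv8 through its Colab command line, but the equivalent ultralytics Python API call looks roughly like the following. The yolov8n.pt starting checkpoint, the image size, and the exact Drive paths are my assumptions rather than details taken from the video.

    # pip install ultralytics          (the package the video installs in Colab)
    #
    # data.yaml, roughly as described above (paths are placeholders):
    #   train: /content/drive/MyDrive/<your path>/train/images
    #   val:   /content/drive/MyDrive/<your path>/valid/images
    #   nc: 1
    #   names: ['boat_ship_yacht']
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")   # assumed pretrained starting checkpoint
    model.train(
        data="data.yaml",        # the data.yaml shown in the video
        epochs=25,               # the video trains the real model for 25 epochs
        imgsz=640,               # assumed image size
    )
    # Trained weights end up under runs/detect/train/weights/best.pt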
Actually, guys, I have trained this algorithm earlier; here I'm just showing you the functionality, which is why I've written epochs=2, whereas I originally trained this model for 25 epochs. Once your training is done you get the training details: the confusion matrix, the trained weights, your results. Those are in the train folder, which sits inside detect, which sits inside the runs folder. Let's look: this is the runs folder, then detect, then train, and inside weights you get the weight files. This is the weight file we are going to give the tracker so it can detect our custom object, which is the boat. So one part is done.

Now the second part is the feature extractor, which helps the algorithm know the appearance of the object. Before moving to that part, if you want to perform testing, if you want to see the predictions, you can write this command: task=detect because we are running a detection model, and mode=predict because we want to perform predictions. Here we give the path of the model we just trained, and this is the source: I have a folder named test images with five or six different images, and I want to run detection on them. Then I display the result. This is one of the predicted images, and you can see that our algorithm is working fine.

Now let's move to the next part, the feature extractor. For the feature extractor, let's see the dataset first. Inside this dataset folder I have created my custom dataset for the boat class. Let's open it: inside data we have train and val folders. The val folder has the validation data, and the train folder has images organized in subfolders; you can see we have 53 different folders, boat0, boat1, boat2, and so on. All of these folders contain images of boats, but each folder contains images of one boat only. Let's open one: you can see that all the images here belong to the same boat.

When we work on object detection, we just randomly collect data from the internet; those images are not related to each other, and we use an annotation tool to annotate the regions. But for this feature extractor you have to be very careful that one folder only contains images that belong to that particular boat, the same boat throughout. So how can you prepare such a dataset? For example, say you want to collect a dataset of tables. You will collect videos of different tables from the internet, and then one folder will contain the frames of one video, and another folder will contain the frames of another video. That is how you prepare the dataset. One more thing: once you download the videos, let me show you with an example from my PC. For example, this is a video you downloaded from the internet. Once you have this video, you have to extract its frames.
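The video does not show code for this frame-extraction step, so here is a minimal OpenCV sketch of it; the file names, the every_n sampling rate, and the output layout are assumptions chosen to match the boat0, boat1, ... folder structure described above.

    import os
    import cv2  # pip install opencv-python

    def extract_frames(video_path, out_dir, every_n=10):
        """Save every n-th frame of one video into its own folder."""
        os.makedirs(out_dir, exist_ok=True)
        cap = cv2.VideoCapture(video_path)
        idx = saved = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % every_n == 0:
                cv2.imwrite(os.path.join(out_dir, f"frame_{saved:04d}.jpg"), frame)
                saved += 1
            idx += 1
        cap.release()

    # one folder per boat video, matching the train/boat0, train/boat1, ... layout
    extract_frames("boat0.mp4", "data/train/boat0")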
You will get frames like these, and all of them belong to that one video, that one boat. Now notice that the object occupies only part of each frame; the rest of the image area is wasted for us. So you need to crop each image down to the area where the object is present. That is what I have done here: you can see I have cropped the images, deleted the extra area, and kept just the region where the object actually is. This is how you prepare your dataset for the feature extractor. Let's open another folder: this one has the frames of a different boat or ship, prepared in the same way.

Now let's open the val folder. Inside val you will have fewer frames. So inside train we have subfolders, and each subfolder's name is the class name of the frames inside it. Train has more frames, while validation has fewer, because we only validate the algorithm on the validation data; for training we need more data, and that is why the training set has more frames. And don't forget to crop, otherwise you will not get good results.

Now our dataset is ready. Once you have the dataset, you need to create a feature extractor, and you can create one using PyTorch. You can visit the PyTorch official website; they have described how to create a feature extractor model, and you can follow those steps to build your own. After training my feature extractor, this is the model I have: model_feature_extractor_weights.pt. So now I have a feature extractor model and a detection model. Next I'll provide both models to the tracker and we'll see how the tracker works.

My tracking work is in this folder; let's open it. This is the tracking Colab notebook. The steps to run tracking are very easy. You need to clone the GitHub repo; we are cloning this GitHub repo, and after cloning you get this yolov8_tracking folder. Then we enter that folder and install the requirements, which are needed to run the object tracker. After that you just need to run this command and tracking will work. One more thing, guys: inside yolov8_tracking you get everything after cloning, but this weights folder I created myself, and inside it I placed both models. This is the object detection model, which I renamed to best_boat2 just so I can recognize it as the boat model I trained for object detection with YOLOv8, and this is the feature extractor model, which I've just shown you how to build. Both of these sit in the weights folder. So the command is python track.py, then --source, the video I want to track on; then --yolo-weights, meaning the object detection weights (this is my YOLO weight file); and then --reid-weights, which is my feature extractor model.
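Putting the command just described into a runnable form, sketched here with subprocess (in Colab it would normally be a !python track.py cell): the flag spellings and weight file names are reconstructed from the video, so double-check them against the README of the tracking repo you clone.

    import subprocess

    # Run from inside the cloned yolov8_tracking folder, after installing its requirements.
    subprocess.run(
        [
            "python", "track.py",
            "--source", "boats_test.mp4",               # hypothetical input video
            "--yolo-weights", "weights/best_boat2.pt",  # YOLOv8 detection weights, renamed as in the video
            "--reid-weights", "weights/model_feature_extractor_weights.pt",  # custom feature extractor
            "--save-vid",                               # save the annotated output video
        ],
        check=True,
    )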
Then --save-vid, so the results will be saved when you execute it. Let's see the results. After running that command you get a runs folder; inside it you get track, and then exp3. Let's play this video. You can see boat_ship_yacht, which is the detection, along with its confidence, and in front of that there is an ID; over here we are also getting an ID. We are getting IDs because we are using tracking, and the object detection model is providing the detection classes. So this is how you can run your object tracker.

And guys, you can visit this GitHub repo; they have explained how to use the different tracking methods. Right now the tracking method I have used is StrongSORT, but if you want to use OC-SORT or ByteTrack, you just need to pass the tracking-method flag. For example, copy this value, go here, and put it in your command like this. You can use strongsort here; if you write bytetrack, then the ByteTrack algorithm will run, and in the same way you can run the other one, ocsort. So these are the different tracking methods you can use. As for sources: if you want to work with a webcam, just provide 0 as the source; if you are working on an image, you can write it like this; and if you want to provide an RTSP stream, you can give the source like that. So guys, this is how you can use an object tracker on a custom dataset. I hope this video is helpful. Thank you for watching.
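To recap the flag variations mentioned at the end of the captions, here is a hedged summary sketch; the tracking-method values (strongsort, bytetrack, ocsort) and the source forms follow the tracking repo's conventions as I understand them, so verify them against your clone.

    base = ("python track.py "
            "--yolo-weights weights/best_boat2.pt "
            "--reid-weights weights/model_feature_extractor_weights.pt")

    examples = {
        "webcam":       f"{base} --source 0",
        "single image": f"{base} --source test.jpg",
        "RTSP stream":  f"{base} --source rtsp://user:pass@192.168.1.10/stream",  # placeholder URL
        "ByteTrack":    f"{base} --source boats.mp4 --tracking-method bytetrack",
        "OC-SORT":      f"{base} --source boats.mp4 --tracking-method ocsort",
    }

    for name, cmd in examples.items():
        print(f"{name:>12}: {cmd}")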
Info
Channel: Code With Aarohi
Views: 5,192
Keywords: YOLOv8, Object Tracking, deepSORT, ByteTrack, StrongSORT, Object Detection, Tracking, Detection, python, pytorch, AI, Artificial Intelligence, Deep Learning, objectdetection, objecttracking
Id: EW24HKiUjbI
Length: 16min 7sec (967 seconds)
Published: Sat Jan 28 2023