Auto Annotation for generating segmentation dataset using YOLOv8 & SAM

Video Statistics and Information

Captions
Hello everyone, this is Aarohi and welcome to my channel. In today's video I'll show you how to perform auto-annotation on a dataset for image segmentation. Annotating an image segmentation dataset is more time consuming than annotating for object detection, because segmentation annotation is pixel-level: we assign a class label to each pixel of an image, whereas in object detection annotation we only draw bounding boxes around the objects we are interested in. Just last month, in April 2023, Meta AI released their Segment Anything Model (SAM), an instance segmentation model trained on a very big dataset with more than 1 billion masks on 11 million images, the largest segmentation dataset till now. Recently the Ultralytics company implemented that SAM model in their ultralytics package, and on top of it they created a feature named auto-annotation. Using that feature you can prepare your image segmentation datasets automatically, without doing the manual labeling and annotation. So today I'll show you how to use the Ultralytics auto-annotation feature to prepare your own image segmentation dataset. The only thing you need is a pre-trained object detection model; using that model you can create the annotation files for segmentation tasks. So let's see how to do that. The Python version I'm using is 3.9, the torch version is 2.0.1, CUDA is 11.7, I'm working on an RTX 3090 GPU, and the ultralytics version I'm using is 8.0.106. If you are trying ultralytics for the first time, you just need to run pip install ultralytics and your environment will be ready to execute this code.
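The setup just described can be sketched as a couple of shell commands (the version pin matches the one mentioned in the video; newer versions should also work):

```shell
# Install the Ultralytics package; this pulls in torch and the YOLOv8/SAM code.
pip install ultralytics

# Optionally pin the exact version used in this video:
# pip install ultralytics==8.0.106
```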
Once you have installed ultralytics, you only need this import. So first I'm showing you how to use the Segment Anything Model with ultralytics. Suppose you have an image and you want to put masks on it; how do you do that with ultralytics, which has implemented the SAM model? You just need this import: we are importing the SAM model, and there are two kinds of SAM models, sam_l, which is the large model, and sam_b, the base model. First I am using the base model. We call SAM like this and provide the model file; this is a trained model, so now we directly give it the image on which we want to perform segmentation: model.predict, with the path of the image. My image is in the images folder; let me show you the full directory. This is the images folder, and inside it I have an image named 1.jpg. On this image I want to perform segmentation, so let's run the code. When you run it, the results are stored like this: you get a runs folder, inside that a segment folder, and that is where the segmentation masks are. Let's open it and see: runs/segment/predict4, and this is the image with the segmentation mask. So this is how you can use SAM for segmenting your image, and this is how you see the results. Now, our results got stored in the runs folder, but what if you want to see the output image on the screen right away? Then you just need to add show=True and run the command; you will see the image with the segmentation masks on the screen. So let's execute it, and you can see the image here. So this is how
it works. So far we have tried it on an image; now let's see how to run it on a video. Where earlier we provided the image path, now provide your video path. My video is in the videos folder; let's open the videos folder first, and this is the video I'm testing on. Now let's run our code. If you set show=True, then while the segmentation is being performed you will also see the video with the segmentations side by side. Let's execute it: the process has started, and you can see the video opening. It works on each frame of the video one by one, so it will take some time. If you want to stop the process in between, you can do that; otherwise it will go through all the frames, and here we have 199 frames. After that, if you want to work with a webcam, you just provide source=0 and it will run on the webcam as well. And if you don't want to see the results on the screen, remove show=True. So now you know how to use the SAM model on images, videos and a webcam; next let's generate the annotations for the images. This is the auto-annotation task I told you about in the beginning, and that's what we are starting now. In ultralytics.yolo.data.annotator there is a function called auto_annotate. Let's open the ultralytics repo: inside ultralytics, then yolo, then data, there is an annotator module, and when you open it you can see the auto_annotate function. This function is responsible for putting the masks on the images. So now let's see
over here, where we call that auto_annotate function. For data we pass 'images': here you provide the path of the folder where your images are. I have these two images, and for them I want to create annotation files. The way YOLO works, you get one annotation file per image, so for the two images in our dataset you will get two corresponding annotation files, each holding the annotation details of one image. Next you provide the detection model. Here is how the auto-annotation feature works: a pre-trained detection model is mandatory. With the help of the detection model you get bounding boxes on the objects you are interested in; those bounding boxes go to the Segment Anything Model, which then puts a mask on the area inside each bounding box. Why do we need this step? Because the Segment Anything Model can only produce masks, and there are no class labels attached to them: when Meta AI trained the SAM model, no labels were attached to the masks. That's why we need an object detection model. It puts bounding boxes on the objects and gives you a class label; each bounding box then becomes the input to the Segment Anything Model, which puts a segmentation mask on the area of that box. So this is the detection model, this is the SAM model, and this is the folder containing the dataset for which I want the annotation files. When you execute it, it will create a labels folder, and inside that labels folder you will see the annotation files. Now, in my case I have two images
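The call described above can be sketched like this. The import path follows the ultralytics 8.0.x layout shown in the video (newer releases moved the module to ultralytics.data.annotator), and the detector checkpoint name is an assumption, so check both against your installed version:

```python
# ultralytics 8.0.x path; in newer releases use:
#   from ultralytics.data.annotator import auto_annotate
from ultralytics.yolo.data.annotator import auto_annotate

# The detector ('yolov8x.pt' here, an assumed checkpoint name) supplies boxes
# and class IDs; SAM turns each box into a pixel mask. One .txt annotation
# file per image is written to a labels folder.
auto_annotate(data='images', det_model='yolov8x.pt', sam_model='sam_b.pt')
```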
So let's see the labels folder. Here it is; open it and you can see the two annotation files. The first file holds the segmentation annotation of the first image. And what is in it? Why is there a 2 at the start? That is the class ID: in the COCO dataset the class ID for car is 2. The rest of the line is the segmentation itself, the annotated polygon points. In the same way you can see the annotation file for the second image. So you can see that using this feature you can save a lot of time if you have large datasets. Annotation for segmentation tasks is very time consuming and has to be done very carefully, but with the help of the detection model and the segmentation model you can do it with much less time and effort, and still get good accuracy. Now let's look at the auto_annotate function itself. What is happening inside it? The detection model is here and the segmentation model is here; then this is how we call our detection model in YOLOv8. We provide the data to it and the results get stored here. Then we use a for loop, because if you are working on a video or a stream you have to get the bounding boxes and the class labels for each frame, and inside that loop, as you can see here, we use the Segment Anything Model. So in this auto_annotate function, detection is performed first and the detections are stored in det_results; then we fetch the boxes and the class IDs, and then comes this line, set_image. set_image is a function of the Segment Anything Model, so whenever you
want to give an image to the Segment Anything Model, we use set_image. That's what we are doing over here: we provide the image to it, here we run the SAM model, and here we update the results. And these few lines are responsible for writing the annotations into the text files in the labels folder. So this is how you can use the auto-annotation feature of ultralytics, which uses the SAM model developed by Meta AI. I hope this video was helpful. Thank you for watching.
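Paraphrasing the function walkthrough above, the core of auto_annotate looks roughly like this. This is a from-memory sketch of the annotator source, not the exact code, so attribute names such as masks.xyn and the output layout are assumptions to verify against the ultralytics repo:

```python
from pathlib import Path

from ultralytics import SAM, YOLO


def auto_annotate_sketch(data, det_model='yolov8x.pt', sam_model='sam_b.pt',
                         output_dir='labels'):
    """Rough sketch: detector boxes prompt SAM, polygons go to .txt files."""
    det = YOLO(det_model)
    sam = SAM(sam_model)
    Path(output_dir).mkdir(exist_ok=True)

    # stream=True yields one result per image/frame, like the loop in the video.
    for result in det(data, stream=True):
        class_ids = result.boxes.cls.int().tolist()
        if not class_ids:
            continue
        # The detector's boxes become the prompts for SAM.
        sam_results = sam(result.orig_img, bboxes=result.boxes.xyxy)
        segments = sam_results[0].masks.xyn  # normalized polygons, one per box

        out = Path(output_dir) / (Path(result.path).stem + '.txt')
        with open(out, 'w') as f:
            for cls, seg in zip(class_ids, segments):
                coords = ' '.join(f'{v:.6f}' for v in seg.reshape(-1))
                f.write(f'{cls} {coords}\n')
```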
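Each line in the generated label files has the shape described above: one class ID followed by the polygon coordinates, normalized to [0, 1]. A minimal sketch of reading such a line back (the sample line is made up for illustration):

```python
def parse_yolo_seg_line(line):
    """Split one YOLO segmentation label line into (class_id, [(x, y), ...])."""
    parts = line.split()
    class_id = int(parts[0])
    coords = [float(v) for v in parts[1:]]
    # Coordinates come in x, y pairs, normalized by image width and height.
    points = list(zip(coords[0::2], coords[1::2]))
    return class_id, points

# Example: class 2 is 'car' in the COCO class list used by the detector.
sample = "2 0.10 0.20 0.30 0.20 0.30 0.40 0.10 0.40"
cls, pts = parse_yolo_seg_line(sample)
print(cls, pts)  # 2 [(0.1, 0.2), (0.3, 0.2), (0.3, 0.4), (0.1, 0.4)]
```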
Info
Channel: Code With Aarohi
Views: 21,642
Keywords: objectdetection, imagesegmentation, sam, yolov8, computervision
Id: K_WYmsYhBEw
Length: 14min 9sec (849 seconds)
Published: Tue May 23 2023