YOLOv8 Instance Segmentation on Custom Dataset | Windows & Linux

Captions
Hello everyone. In today's tutorial we are going to do custom instance segmentation using YOLOv8. I'm going to explain how to annotate a custom dataset for instance segmentation and how to convert it into YOLOv8 format. Then we will train the segmentation model and run inference on images, videos, and a webcam. I will also show you how you can export the model to other formats such as ONNX and TFLite. The timestamps are in the description below. Let's get started.

It's always a good idea to create a virtual environment for your project, so let's do that. I am on Anaconda, so I am going to issue the command conda create -n yolov8_segmentation python=3.10, hit Enter, press y, and hit Enter again. Once the installation is done, let's activate the environment with conda activate yolov8_segmentation. That's it, the environment is activated.

Now we need images of our object to annotate. I have downloaded images of butterflies from Google. If you do not want to do that manually, you can follow the timestamps of this tutorial to write a small script that downloads the images from Google automatically.

To annotate the segmentation masks, we need to install a library with pip install labelme. Once the installation is done, just type labelme and hit Enter; it opens the labelme GUI. Go to File and click Open Dir, and select the directory with all the images of our object. Then go to File again, click Change Output Dir (the folder in which the annotations should be stored), and select the same folder. Then click File and select Save Automatically. To create a segmentation mask, click Create Polygons and draw a polygon around the object as precisely as possible; I am just going to do it quickly for this demonstration. Once you are done, type a label for that polygon, which is butterfly in our case. Now click Next Image and repeat the process. You should do this for all of the images, and it's going to take a lot of time.

Luckily, I do not have to do this, because there is an annotated dataset of butterflies available in the PixelLib GitHub repository, so all credit goes to the original author. If we go to Releases, scroll down to the Nature release, and expand its Assets section, we find nature.zip. Let's download it and extract the zip file. You will see there are two folders, train and test, and each image has its corresponding label in JSON format. This dataset has two classes, butterfly and squirrel. I am only going to keep the images and annotations of butterflies and delete all the other images and their annotations. Then repeat this process in the test folder and keep only the images and annotations of butterflies. Next, create a new folder called dataset and move both the train and test folders there. The images that I annotated myself I am going to move, along with their annotations, into the train folder. Then we can delete nature.zip and the nature folder.

The dataset is ready, but it's not in YOLO format yet. To convert it into a format that YOLOv8 understands, we need to install another library with pip install labelme2yolo. Once the installation is complete, issue the command labelme2yolo --json_dir followed by the path to our train directory, which is dataset/train. Hit Enter and it will convert all the JSON files to YOLOv8 TXT format.
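For quick reference, here are the commands dictated in this section collected in one place. The environment name and the dataset/train path simply follow the layout described above:

```bash
# Create and activate the environment (Anaconda)
conda create -n yolov8_segmentation python=3.10
conda activate yolov8_segmentation

# Annotation tool: open the GUI and draw polygons labelled "butterfly"
pip install labelme
labelme

# Convert the labelme JSON annotations to YOLOv8 TXT format
pip install labelme2yolo
labelme2yolo --json_dir dataset/train
```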
If we go to our dataset/train folder, we can see a subfolder containing the converted YOLO dataset. Let's cut it and move it to the root folder of the project. Inside this folder we have images and labels placed separately, along with dataset.yml; let's leave that file alone for now, as we are going to come back to it later. Inside images we have separate train and val folders, but we do not want that, so let's merge the images of these two folders: move all the images inside val to the main directory, then all the images inside train to the main directory as well, and delete the empty folders. Similarly, go to the labels directory and you will see the same train and val splits; just move all the TXT files from the train and val folders to the main directory. Now let's rename the converted YOLO dataset folder to train.

Back in the Anaconda Prompt, issue the same command, but this time the target directory is dataset/test. Hit Enter and all the JSON annotations will be converted to YOLO annotations. If we go to the dataset/test folder, you will see the same output as before, and, you guessed it, we need to bring it to the root directory of our project alongside the train directory. If we open it, we can see the images and labels subfolders along with a dataset.yml file. Inside the images directory we again have train and val folders; we have to merge all the images of the val and train folders just like we did earlier, and then we can delete the empty directories. The same goes for the labels folder. Now rename this converted YOLO dataset folder to test. At this point we can delete the dataset and butterfly directories.

Once that is done, cut the dataset.yml file from either the train or the test folder and bring it to the root directory of the project; I'm going to move it from the train folder. Open this file in your favorite text editor and you will see a bunch of paths, along with the total number of classes, which is correctly listed as one, and its respective class name. For the train path we need to provide the full path of our train folder, so I'm just going to copy the project path, paste it into the file, and append the train folder to it. Do the same for val; we are just going to use the test folder as our validation data. Then remove the test path from the file. Save the file and our dataset is ready.

Now, to train a custom YOLOv8 instance segmentation model, install another library with pip install ultralytics. It's going to download and install all the dependencies required to run YOLOv8, but by default it installs PyTorch with CPU support only. We can verify that by running python, importing torch, and issuing torch.cuda.is_available(), which prints False. We can also check the version with torch.__version__; it's 1.13.1, the CPU build. We're going to install the same version but with GPU support, as I do have an NVIDIA GPU. Head over to the official PyTorch website, click Get Started, scroll down, and select the build, the operating system (Windows in my case), pip, Python, and CUDA 11.7, which is the latest version. Copy the command from there, paste it into the Anaconda Prompt, add the extra parameter --upgrade, and hit Enter. It downloads and installs the GPU version of PyTorch, and the previous CPU version is uninstalled automatically. After the installation is done, run python, import torch, and now torch.cuda.is_available() prints True, and we can also see that the installed version is 1.13.1 with CUDA 11.7.
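Before moving on to training, it is worth double-checking dataset.yml. Here is a sketch of what the edited file ends up looking like; the paths shown are placeholders for the full paths on your own machine, and the exact key layout may differ slightly depending on the labelme2yolo version:

```yaml
# dataset.yml after editing (example paths -- substitute the full paths on your machine)
train: C:/projects/yolov8_segmentation/train   # merged train images and labels
val: C:/projects/yolov8_segmentation/test      # the test folder doubles as validation data
nc: 1                                          # one class
names: ["butterfly"]
```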
Now, to train the custom instance segmentation model on this data, run the command yolo task=segment mode=train with epochs=100 (let's train for 100 epochs) and data=dataset.yml, the file we just edited. For the model parameter, head to the official Ultralytics GitHub repository, scroll down to the Segmentation section, and expand it. You can choose any model from here; I'm going to copy the link of the medium model, take the file name from it, and paste that into the command as model=yolov8m-seg.pt. Now add another parameter, imgsz, which is the image size the model was trained on; we can see on the official repository that it's 640, so set imgsz=640. Finally, set batch=; you can use a small batch size if you have a GPU with less memory, but I think mine can handle a bit more, so let's start with that. The training has started; I will get back to you when it is done.

All right, the training is done. It took 4.3 hours, and the output is stored in the runs folder, so let's head to runs/segment/train. You can see all the metrics here, and if you open results.png you can see the training graphs. But what we are really interested in is the weights file, so go to the weights folder, copy best.pt, head back to our main project folder, and paste the file there. I'm going to rename this file to yolov8m-seg-custom.pt.

Now we can use this model for inference. For that, let's take an image from the test folder and paste it here; I am going to rename it to 1.png for convenience. I also have a video called butterfly.mp4 that I'm copying here as well. Now create an empty text file, rename it to predict.py, and open it in any text editor. First, from ultralytics import YOLO, then initialize the YOLO class as model with the custom segmentation model file name. Now we can call the model.predict method: as the source let's use our 1.png image, then set show=True, save=True, hide_labels=False, hide_conf=False, conf=0.5, save_txt=False, save_crop=False, and finally line_thickness=2.
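Putting the dictated pieces together, predict.py looks roughly like this. The weights and image file names are the ones used in the video; note that newer Ultralytics releases have renamed some of these arguments (for example show_labels, show_conf, and line_width), so treat this as a sketch for the version used here:

```python
# predict.py -- sketch of the script dictated above (Ultralytics YOLOv8, early-2023 argument names)
from ultralytics import YOLO

# Load the custom-trained instance segmentation weights
model = YOLO("yolov8m-seg-custom.pt")

# Run inference on a single image; results are saved under runs/segment/predict*
model.predict(
    source="1.png",       # image, video file, or 0 for the webcam
    show=True,            # display the annotated output
    save=True,            # save the annotated output to the runs folder
    hide_labels=False,    # set True to hide class labels
    hide_conf=False,      # set True to hide confidence scores
    conf=0.5,             # confidence threshold
    save_txt=False,       # set True to also save polygons in YOLO TXT format
    save_crop=False,      # set True to save cropped-out detections
    line_thickness=2,     # thickness of the drawn boxes and masks
)
```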
With the whole script in place, save the file, head back to the Anaconda Prompt, and issue python predict.py. It produces output in runs/segment/predict, and there we can see the result. If you do not like the labels and confidence scores, you can set hide_labels and hide_conf to True, and this time if we run the script we get the output without labels or confidences. Let's set hide_labels and hide_conf back to False and now set save_txt and save_crop to True. Run the script again and you will see that the output also includes a text file containing the polygon labels in YOLOv8 format, and in the crops folder we have the object cropped out, without the bounding box drawn on the cropped object, which is great to see. Let's change save_crop and save_txt back to False.

If you want to hide the bounding box and only show the mask on the output, you can use the parameter boxes=False, but right now it's not working as intended; maybe it will be fixed in a future version, so let me know in the comments when that happens. Then we have another parameter, visualize=True, which is meant for visualizing features, but this is also not implemented yet, as you can see from the message here, so maybe this will be added in a future version as well. Let's set visualize back to False and boxes back to True.

Now, to run this custom model on videos, just change the source to your video; I am using butterfly.mp4. Save the file and run the script again, and this time it runs custom instance segmentation on the video. The output is stored in the runs/segment/predict folder, just like in the case of the image. If you want to run this custom model on a webcam, just change the source to 0 to use the internal webcam, and it works just fine.

Finally, let me show you how you can export the model to other formats. Let's comment out the predict line and add another line, model.export, with format="onnx". If we run the script it installs ONNX and exports the model, which you can see over here. Similarly, you can convert to TFLite by specifying "tflite" instead; if you run the script it installs TensorFlow and then exports the model to TFLite. You can visit the official documentation, which lists all the formats available for exporting the custom model.

With that, I think I am done. If you have learned something of value today, hit like and subscribe to the channel, and consider supporting the channel on Patreon. I will see you next time. Thank you.
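For reference, the export step described above boils down to a couple of lines. This sketch reuses the same custom weights file and the documented format names "onnx" and "tflite"; each export writes the converted model alongside the original weights:

```python
# export.py -- sketch of the export step described above
from ultralytics import YOLO

model = YOLO("yolov8m-seg-custom.pt")   # the custom-trained segmentation weights

# Export to ONNX (Ultralytics installs the onnx package on first use)
model.export(format="onnx")

# Export to TensorFlow Lite (installs TensorFlow on first use)
model.export(format="tflite")
```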
Info
Channel: TheCodingBug
Views: 43,543
Keywords: yolov8, yolo, yolov8 tutorial, yolo v8 tutorial, yolov8 custom dataset, yolov8 windows, yolov8 custom, YOLO v8, yolov8 linux, yolo v8 windows, custom yolov8, yolo v8 custom dataset, yolo v8 custom, custom yolo v8, yolo v8 custom data, yolo v8 custom training, yolo v8 custom instance segmentation, yolo v8 custom segmentation, yolov8 custom segmentation, yolov8 custom instance segmentation, yolo v8, yolo instance segmentation tutorial, real time instance segmentation yolov8
Id: DMRlOWfRBKU
Length: 14min 13sec (853 seconds)
Published: Wed Feb 22 2023