Train Yolov8 Semantic Segmentation Custom Dataset on Google Colab | Computer vision tutorial

Captions
Hey, my name is Felipe, and welcome to my channel. In this video I'm going to show you how to train a semantic segmentation model using YOLOv8 and Google Colab, and this is exactly the notebook we are going to use in the process. You can see it has only five steps, so in just five steps you will have completed the process and trained your own semantic segmentation model with YOLOv8.

Now, this is very important: this tutorial is only a very quick, high-level description of the entire process. If you want more details, or a much more comprehensive tutorial, I invite you to take a look at one of my previous videos, where I show you exactly how to prepare the data, how to annotate it, how to train the model, and how to evaluate the model you trained. That other tutorial is a much more comprehensive description of the entire process, so if you want more details I definitely invite you to take a look at that video, which I'm going to be posting over there.

Now let me show you something else: the Google Drive directory we are going to use in this project. You can see we have three files: config.yaml, data.zip, and train.ipynb. train.ipynb is the notebook we are going to use to train the model, and then we also have the two other files. It's very important that you create a directory in your Google Drive for this project. The directory I have created is located at My Drive > computer vision engineer > semantic segmentation yolov8 google colab, but please remember to create a directory in your own Google Drive, which is where you're going to upload all the data and all the files, and I'm going to show you everything you need to upload into it.

Now let me show you the data.zip file. Going to my file system, this is the same file you saw in my Google Drive, and this is what it looks like: we have two folders, one called images and the other called labels. Within the images directory we have three additional directories, test, train, and validation, and within each of them is the data we are going to use to train the model. In my case I am going to train a semantic segmentation model to detect ducks and alpacas, so all the images I have here are of ducks and alpacas, because those are the two categories I'm going to detect with this model. Going back to the other directory, labels, this is where we're going to put all the labels, all the annotations, we are going to use to train this model.
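The folder layout just described can be sketched as a small helper. The split names (train, validation, test) follow the video; this helper is my own illustration, not part of the notebook:

```python
from pathlib import Path

# Split names as described in the video: train, validation, test
SPLITS = ("train", "validation", "test")

def build_layout(root):
    """Create the images/ and labels/ split folders under root,
    mirroring the structure inside data.zip."""
    created = []
    for top in ("images", "labels"):
        for split in SPLITS:
            (Path(root) / top / split).mkdir(parents=True, exist_ok=True)
            created.append(f"{top}/{split}")
    return created
```

After filling these folders with your images and the matching label files, zipping the root directory gives you a data.zip with this structure.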
If you want a much more comprehensive description of how to prepare the annotation files, I invite you to take a look at the instructions on the official YOLOv8 website; those are the official YOLOv8 instructions on how to format the annotations used to train this model. Because if I show you what these files look like, you can see they look very weird: many, many numbers in a very long file. So, to properly understand how to create each of the files you need to place in these directories, I invite you to take a very close look at the instructions provided by YOLOv8.

Now, a very quick note: in my case I downloaded all this data from the Google Open Images Dataset V7, from this website over here. This is an amazing dataset; I have used it many, many times. But downloading the data was a very time-consuming process, so, in order to help you create your own dataset for training a semantic segmentation model with YOLOv8, I created a Python script that takes care of the entire download process in a super easy way. The only thing you need to do to use the script is to edit these values over here, which are the class names, the categories, you want to download from the Google Open Images Dataset V7. In my case I selected the two class names alpaca and duck, which is why I downloaded the images and annotations for alpacas and ducks. Just by replacing these values with whatever other class names you want, you will be able to use this script to download a semantic segmentation dataset from the Google Open Images Dataset V7. This is an amazing resource I created to help you train this model.
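About those label files "full of numbers": in the YOLOv8 segmentation format, each line of a label file is a class index followed by pairs of x, y polygon coordinates normalized to the image size. A minimal parser, as an illustration (this helper is mine, not part of the notebook):

```python
def parse_seg_label_line(line):
    """Parse one line of a YOLOv8 segmentation label file:
    '<class-id> <x1> <y1> <x2> <y2> ...', coordinates in [0, 1]."""
    parts = line.split()
    class_id = int(parts[0])
    coords = [float(v) for v in parts[1:]]
    if len(coords) % 2 != 0:
        raise ValueError("polygon coordinates must come in x, y pairs")
    polygon = list(zip(coords[0::2], coords[1::2]))
    return class_id, polygon
```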
It is a resource available on my Patreon, so it's going to be available to all my Patreon supporters. Now let's continue.

Let me show you something very important: although we have three directories over here (test, train, and validation), I am going to use only two of them, train and validation. I am not going to use the data in the test directory. That's a very quick note, and now let's continue.

Once you have created a dataset, a directory with all the data formatted exactly as I showed you, the only thing you need to do is zip that directory to create the data.zip file, then go back to your Google Drive and upload the file into your project directory, and that's it.

Now let's get back to the Jupyter notebook we are going to use to train this model, because it's time to start the process. I'm going to execute absolutely every single cell in this notebook, and this is how we're going to train our semantic segmentation model. I'll start by executing this first cell, which mounts my Google Drive into this Google Colab instance; I am going to connect Google Drive with Google Colab, and this is very important because we need access to all the data and files in the directory we created in Google Drive. The only thing I'm going to do is select my account and click Allow. Okay, now our Google Drive is mounted into our Google Colab environment, and we're ready for the next step, which is getting the data: copying the data.zip file from our Google Drive into this instance.
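The "zip the dataset, upload it, then unzip it in Colab" round trip can be sketched in plain Python. The notebook cells presumably use shell commands instead; the function names and paths here are illustrative:

```python
import shutil
import zipfile

def zip_dataset(dataset_dir, output_stem):
    """Zip the dataset folder (images/ and labels/) into <output_stem>.zip,
    ready to upload to the project directory in Google Drive."""
    # make_archive appends the .zip extension itself
    return shutil.make_archive(output_stem, "zip", root_dir=dataset_dir)

def extract_dataset(zip_path, dest):
    """Unzip data.zip into the working environment (e.g. /content in Colab)."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
        return zf.namelist()
```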
The next cell is going to take care of unzipping this file and extracting everything within it, so the only thing I'm going to do is press Enter; this fetches data.zip and unzips all of its content. Okay, we have extracted all the data into this Google Colab environment; now let's continue.

The next step in the process is installing Ultralytics, the Python library we are going to use to train this model with YOLOv8. It's a very important library, and the only thing I'm going to do is execute this cell.

Now let's continue: you can see we have already executed three steps in this process, with only two steps left before the model is trained, which is amazing. The next step is training the model, and for that I'm going to show you the config.yaml file, the other file we have in our Google Drive directory. This file holds very important configuration for training the model, and if you have done everything as I showed you so far, you can just leave the first three values as they are and everything is going to work just fine: the first is the absolute path to your data, the second is the relative path to your training images, and the third is the relative path to your validation images; when I say relative, I mean relative to the absolute path above. Long story short, if you have done everything as I showed you in this tutorial, you can leave everything as it is and it will work just fine. Let me also show you the two other keys we have here, nc and names.
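Putting the values just described together, a config.yaml for this setup might look like the sketch below. The absolute path is my guess at how the video's Drive directory appears once mounted in Colab, and the split folder names mirror the data.zip layout; adjust all of them to your own project:

```yaml
# Absolute path to the unzipped dataset (update to your own directory)
path: /content/gdrive/MyDrive/computer-vision-engineer/semantic-segmentation-yolov8-google-colab/data
train: images/train        # relative to 'path'
val: images/validation     # relative to 'path'
nc: 2                      # number of classes
names: ['alpaca', 'duck']  # class names, in class-index order
```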
nc stands for the number of classes; in my case I have two classes, so this is 2. names is the list of all the classes we are going to detect with this model; remember, in my case I'm going to detect alpacas and ducks, so this is exactly what it looks like for me. Please remember to update these two values with the class names and the number of classes you are going to detect with your model. So that's a very quick description of config.yaml, a very important file for what follows.

Something else that's very important: please remember to update the path value with the path of the directory you created in your Google Drive. In my case this is My Drive > computer vision engineer > semantic segmentation yolov8 google colab, which is exactly what I have over here, but in your case please remember to update this directory, because otherwise this is not going to work. The same goes for this other value over here: please remember to update it as well, otherwise nothing is going to work; it should be the directory you created in your Google Drive. Now let's continue.

The only thing I'm going to do now is execute this cell, which takes care of the entire training process, so I'm just going to press Enter, and then we need to wait a few minutes, or actually a few hours, until the process is completed. Okay, now the training process is completed, and this is all the output that was generated during training. You can see this is a lot of information, and from all of it we are going to note one value: where all the data was saved, meaning all the data we need to evaluate this model, and also where the weights we generated were saved.
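As for the training cell itself, the notebook presumably invokes Ultralytics' training entry point; with the Ultralytics CLI, the call would look something like the command assembled below. The yolov8n-seg.pt checkpoint and the epoch count are my assumptions, not values confirmed in the video:

```python
def build_train_command(config="config.yaml",
                        model="yolov8n-seg.pt",  # smallest pretrained seg checkpoint
                        epochs=20):
    """Assemble the Ultralytics CLI command for a segmentation training run."""
    return f"yolo segment train data={config} model={model} epochs={epochs}"
```

The roughly equivalent Python API call would be `YOLO(model).train(data=config, epochs=epochs)` after `from ultralytics import YOLO`.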
Everything is located under runs/segment/train. So what we are going to do now is take all the data that was saved in that directory and copy it into our Google Drive; this is very important because it makes downloading the data much easier. I'm going to execute this cell, which takes care of copying that directory into my Google Drive. Okay, going back to Google Drive, this is the directory we have just copied, and I'm now going to download it to my computer, to my file system. Let me open it so I can show you what it looks like: basically, this is where we have all the images and all the data we need to evaluate the model, and also the weights of the model we created.

Now let me show you a couple of these images. I'm going to open val_batch0_pred, and you can see these are a few examples of how the model performs on a few images, so this is an image we can use to evaluate the model. Let me open val_batch1_pred, and it's exactly the same: how the model performs on a few more images; in this case we have images of ducks and images of alpacas, so this is a very good way to evaluate the model's performance. These are images from the validation set, from the validation directory. Let me open one more image, val_batch2_pred, and again it's exactly the same: how the model performs on a few images.
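The "copy the results into Drive" cell mentioned above can be sketched like this; the source directory name follows the runs/segment/train output, and the Drive destination is illustrative:

```python
import shutil
from pathlib import Path

def copy_results(run_dir, drive_dir):
    """Copy the training output (weights, metrics, val_batch*_pred images)
    into the Drive directory so it is easy to download afterwards."""
    dest = Path(drive_dir) / Path(run_dir).name
    shutil.copytree(run_dir, dest, dirs_exist_ok=True)
    return str(dest)
```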
Now, remember, this video was only a very quick, high-level description of the entire process; I am not going into the details of model evaluation. If you want more details on how to make sense of all the data we have within this folder, I invite you to take a look at the other video, the other tutorial, which is a much more comprehensive description of the entire process; in that video I show you how to evaluate the model and how to analyze all this information, so I definitely invite you to take a look at it. And that's going to be all for today. My name is Felipe, I'm a computer vision engineer, and see you in my next video.
Info
Channel: Computer vision engineer
Views: 1,993
Id: _tLeuN741Uc
Length: 13min 26sec (806 seconds)
Published: Thu Nov 30 2023