Image segmentation with Yolov8 custom dataset | Computer vision tutorial

Captions
In my previous videos I showed you how to train an image classifier and an object detector using YOLOv8; now it's time for semantic segmentation. I'm going to show you the entire process of how to train a semantic segmentation algorithm using YOLOv8: how to annotate the data, how to train the model in your local environment and also from a Google Colab, and finally a super comprehensive guide on how to validate the model you trained. My name is Felipe, welcome to my channel, and now let's get started.

Let's start with today's tutorial, and the first thing I'm going to do is show you the data we are going to be using. This is a dataset I have prepared for today's tutorial, and you can see these are images of ducks; we are going to be using a duck dataset today, and this is exactly what the images look like. Now, for absolutely every single one of our images we are going to have a binary mask: a binary image where every single pixel is either white or black, and the white pixels are the location of the objects we are interested in, which in this case are our ducks. Let me show you an example so it's a little clearer what I mean. This is a random image in my dataset, a random image of a duck, and this is exactly its binary mask. Take a look at what happens when I align these two images and apply something like a transparency: you can see that the binary mask is giving us the exact location of the duck in this image. So that is exactly what it means that the white pixels are the location of our objects.

This is exactly the data I am going to be using in this tutorial, and now let me show you where I downloaded this dataset from: this is a dataset I found in the Open Images Dataset version 7.
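As a reference, the transparency check described above can also be done in code; here is a minimal sketch using OpenCV, assuming hypothetical file names duck.jpg and duck_mask.png:

# Minimal sketch: overlay a binary mask on its image with transparency,
# the same visual check shown in the video. File names are hypothetical.
import cv2

image = cv2.imread('duck.jpg')                             # original image (BGR)
mask = cv2.imread('duck_mask.png', cv2.IMREAD_GRAYSCALE)   # binary mask, same size

# Paint the masked pixels green on a copy, then blend at 50% opacity
overlay = image.copy()
overlay[mask > 0] = (0, 255, 0)
blended = cv2.addWeighted(image, 0.5, overlay, 0.5, 0)

cv2.imwrite('duck_overlay.png', blended)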
Let me show you this dataset super quickly. This is an amazing dataset that you can use for many different computer vision related tasks. For example, if I go to segmentation you can see we have many, many different categories; right now we are looking at a random category, phones, which is for example a semantic segmentation dataset of phones. And if I go here and scroll down, you can see that one of the categories is duck, so this is exactly the same data I am going to be using in this tutorial, the exact same duck dataset I am going to use in order to train a semantic segmentation algorithm with YOLOv8. Obviously you could download the exact same data I am going to use in this tutorial: if you go to Open Images Dataset version 7 you can just download the same duck dataset I am going to be using today, or you can download a dataset of other categories. So that's the data I am going to use in this project, and that's where you can download the exact same data if you want to.

Now let me show you a website you can use in order to annotate your data. In my case I have downloaded a dataset which is already annotated, so I don't have to annotate any of my images; I already have the masks for absolutely all the images in my dataset. But if you're building your dataset from scratch, chances are you will need to annotate your images, so let me give you this tool, which is going to be super useful in case you need to annotate your data. It's called CVAT and you can find it at cvat.ai. This is a very popular computer vision annotation tool; I have used it I don't know how many times in my projects, and it's very useful, so I'm going to show you how to use it in order to annotate your images.

The first thing we need to do is go to "start using CVAT". This is going to ask you to either register, if you don't have a user already, or to log in. I already have a user, so I am logged into my account, and now let me show you what I'm going to do in order to annotate a few images. Actually, I'm going to annotate only one image, because I am only going to show you how to use it in order to create a binary mask for your project; you only need to see the process, and that's going to be all. I'm going to Projects, then to the plus button, create a new project. The name of this project will be "duck sem seg", and it will contain only one label, which is "duck". I'm going to press continue, and that's pretty much all: submit and open. Now I'm going to create a task: create new task, and the task name will be "duck task 01"; the name doesn't really matter, I just picked a random one. Then I'm going to add an image; I'm just going to select this one image, which is going to be enough, and submit and open. This is going to take a couple of seconds. This is where you need to select all the images you want to annotate, but in my case I'm only going to select one. Then I press here on Job, which opens the annotation job, and now I'm going to show you how you can annotate
this image: how you can create a binary mask for it. You need to go here to "draw new polygon", then "shape". I'm going to start over here, and this is pretty much all we need to do in order to create the binary mask for this image. You can see that I'm just trying to follow the contour of this object, and you may notice that the contour I am following is not perfect. Obviously it's not perfect, and it doesn't have to be: if you are creating the mask of an object, it definitely doesn't need to be pixel-wise perfect. You need to make a good mask, obviously, but something like what I am doing right now will be more than enough. This is a very time-consuming process, as you can see, and this is why I selected only one image: if I did many images it would take me a lot of time, and it doesn't make any sense, because the idea is only for you to see how to annotate the images.

So you can see that I'm following the contour, and this is an interesting part, because we have reached the duck's hand, or its leg, or something like that, this part of the duck's body, and you can see that it is beneath the water. This is where you're going to ask yourself: do I need to annotate this part or not? Do I need to annotate it as if it's part of the duck or not? You could say, yeah, it's definitely part of this duck, but you are not really seeing a lot of this object; it's like part of the water as well. In my case I'm going to annotate it, but you could do it either way: in all of those parts where you are not 100% convinced, that's a judgment call, you could do it or not, it's up to you. Annotating a few images yourself is always a good practice, because you are going to run into many different situations, like the one I have just seen here with this part of the duck, which now makes me super curious: if you know the name of this part of the duck's body, please let me know in the comments below. I think it's called a hand, because it's something like a hand they have over there, but let me know if it has another name.

Now let's continue; you can see I'm almost there, I have almost completed the mask of this duck. Now I only have to complete the beak, or whatever it's called; it seems I don't really know much about duck anatomy, I don't really know the name of this part either. Anyway, I have now completed it, and once it is completed I have to press Shift+N, and that's going to be all. This is the binary mask I have generated for this object, for this duck, and now what I have to do is click save. You can see that this is definitely not a pixel-wise perfect mask, because there are some parts of this duck which are not within the mask, but it doesn't matter: make it as perfect as possible, but if it's not 100% perfect it's not the end of the world, nothing happens. I have already saved this image, and what I need to do now is download this data, so
I can show you how to download the data you have just annotated in order to create your dataset. This is what I'm going to do: I'm going to select this option over here, "export task dataset", then I'm going to select "segmentation mask 1.1" and click OK. That's going to be all; we only need to wait a couple of minutes... and that's pretty much it, the data has been downloaded. Now I'm going to open this file, and basically the images you are interested in are going to be here. You can see in my case I only have one image, but this is where you're going to have many, many images. Please mind the color you get these masks in: in my case the mask was downloaded in red; it doesn't really matter, just be aware that you could get something different than white.

Once you have all your images, what you need to do is create a directory. I'm going to show you how I do it: I'm going to create a very temporary directory, which I'm going to call tmp, and this is where I'm going to locate this image. Inside it I'm going to create two directories: one of them is going to be called masks, and the other one is going to be called labels, and you're going to see why in only a minute. This is where I'm going to put the mask, and then I'm going to PyCharm, because I have created a Python script which takes care of a very important step. We have created masks, which are binary images, and that's perfect, because that's exactly the information we need in order to train a semantic segmentation algorithm; but the way YOLOv8 works, we need to convert this binary image into a different type of file. We are going to keep exactly the same information, but in another type of file.

Let me show you how. This is a Python file I have created in order to take care of this process, and the only thing you need to do is edit these fields: this is the directory which is going to contain all the masks you have generated, and this is going to be the output directory. These two variables are already set properly in my case, because this is the tmp directory I have just created, this is where I have located the mask I have just generated with CVAT, and this is my output directory. Take a look at what happens when I press play. The script has just been executed and everything is okay: this is the mask I gave it as input, and this is the file which was generated from that mask. It looks absolutely crazy, it's a lot of numbers, but without going into the details, let's just say that this is exactly the same information we had here, only in a different format, and that's exactly the format YOLOv8 needs in order to train the semantic segmentation model. So this is exactly what you need to do: once you have created all of your masks, download these files onto your computer and then execute this script, so you can convert your masks into this different type of file. Obviously this script will be available in the GitHub repository of today's tutorial.
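The conversion script itself lives in the GitHub repository; as a rough idea of what such a script does, here is a hedged sketch that turns each binary mask into a YOLO segmentation label, one line per object, a class index followed by normalized polygon coordinates. The directory names are the hypothetical tmp folders mentioned above:

# Hedged sketch of the masks-to-YOLO-labels conversion step described above.
# The real script is in the video's repository; this is only an assumption of how it works.
import os
import cv2

input_dir = './tmp/masks'    # binary masks exported from CVAT (hypothetical path)
output_dir = './tmp/labels'  # YOLO-format .txt files will be written here

os.makedirs(output_dir, exist_ok=True)

for name in os.listdir(input_dir):
    mask = cv2.imread(os.path.join(input_dir, name), cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(mask, 1, 255, cv2.THRESH_BINARY)
    h, w = mask.shape

    # One polygon per connected white region
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    lines = []
    for cnt in contours:
        if cv2.contourArea(cnt) < 200:   # skip tiny specks
            continue
        # Class index 0 (duck) followed by x y pairs normalized to [0, 1]
        coords = []
        for point in cnt.reshape(-1, 2):
            coords.append(f'{point[0] / w:.6f}')
            coords.append(f'{point[1] / h:.6f}')
        lines.append('0 ' + ' '.join(coords))

    out_name = os.path.splitext(name)[0] + '.txt'
    with open(os.path.join(output_dir, out_name), 'w') as f:
        f.write('\n'.join(lines))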
So that's pretty much all regarding how to create your annotations, how to download them and how to convert everything into the right format. Now let me show you how you need to structure your file system so it complies with YOLOv8; remember, this is something we have already done in our previous tutorials on YOLOv8. Once you have your data, you need to structure your file system so YOLOv8 knows exactly where everything is located: you're going to put your images in a given directory and your annotations, your labels, in another directory, so everything is just the way YOLOv8 expects it to be.

Let me show you. I have a root directory which is called data. Within data I have three directories, but this one, the masks directory, is not really needed; it's just there because that's the way I got my data. To make it clearer that it's not needed for this part of the process, I'm going to delete it right now. Okay, it's gone. Now we only have two directories, and these are exactly the directories we need: one of them is called images, the other one is called labels. Within images we have two other directories, one called train and the other called val. train is the directory where we are going to have all of our training data, all the images YOLOv8 is going to use in order to train the semantic segmentation model, and val contains the images we are going to use in order to validate the model. Remember, you need to have these two directories and the names are very important: one must be called train and the other must be called val.

Going back, within labels there are also two directories named train and val, and if I open them, these are the type of files I generated with the exact same script I showed you a few minutes ago: labels/train holds the annotations generated from the training masks, and labels/val the ones for the validation data. Long story short: we have our root directory; within it, two directories, images and labels; within images, train and val with all of our images; and within labels, exactly the same structure, train and val, which is where we locate all of our annotations. That's exactly the structure you need for your data. Please remember to structure your file system like this, otherwise you may have an issue when you are trying to train a semantic segmentation model using YOLOv8. So that's pretty much all about how to structure the data.
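For reference, the layout described above looks roughly like this (the parent directory name is up to you, but the train and val names are required):

data/
├── images/
│   ├── train/    (training images)
│   └── val/      (validation images)
└── labels/
    ├── train/    (one .txt label file per training image)
    └── val/      (one .txt label file per validation image)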
Now let's move to the interesting part, the most fun part, which is training this semantic segmentation model. Let's move to PyCharm and I will show you how to train it from your local environment. This is a PyCharm project I created for today's tutorial; please remember to install the project requirements, otherwise you will not be able to use YOLOv8. Now let's go to train.py. This is a Python script I created, and this is where we are going to do all the coding we need in order to train the semantic segmentation model using YOLOv8.

Let's go back to the YOLOv8 official repository to see exactly how we can use this model to train a semantic segmentation model. I'm going to the segmentation section, and I'm going to click on the segmentation docs. This is going to be very straightforward: under Train, I'm going to copy this sentence, which loads a pretrained model, and then, going back to PyCharm, I'm just going to paste it and add "from ultralytics import YOLO". Then I'm also going to copy this sentence, which is model.train. I'm going to change the number of epochs to something like 1, because remember, it's always a very good idea to do a dummy training first, to train the model for only one epoch, to make sure everything is okay and everything runs smoothly, and only then do a deeper training. I'm also going to change the config file: I'm going to use this config file I have over here, and obviously you will find it in the repository of today's video. Long story short, you can see there are many different keywords, but the only one you need to edit is this one: the absolute path to your data. In my case, if I copy and paste this path over here, you can see that it is the directory which contains the images and the labels directories. So just remember to edit this path to the location of your data; if you have already structured everything the way I showed you in this video, everything else will be just fine. The train and val keywords are fine as they are, you can leave everything else as it is, but please remember to edit this field, which is the location of your data.

Now, going back to train.py, this is pretty much all we need to do in order to train the semantic segmentation model, so I'm just going to press play and let's see what happens. You can see everything is going well, we are training our model, but everything is taking forever; even though we are only training this model for one epoch, it is going to take a lot of time. So what I'm going to do instead is press stop. I'm not stopping this training because I got an error or anything, everything is going well, but I am going to repeat exactly the same process from a Jupyter notebook in Google Colab, because there I'm going to have access to a free GPU and it's going to make the process much, much faster. I am going to use Google Colab in order to train this model, and I recommend you use Google Colab as well.
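For reference, the two files discussed here could look roughly like the following. The dataset path is a hypothetical placeholder, and the exact pretrained checkpoint name is an assumption (yolov8n-seg.pt is the smallest segmentation model in the ultralytics docs); the video does not spell these out.

config.yaml (assumed contents; edit path to wherever your data directory lives):

path: /home/user/data    # absolute path to the directory that holds images/ and labels/
train: images/train
val: images/val
names:
  0: duck

train.py (a minimal sketch of the local training script):

from ultralytics import YOLO

# Load a pretrained segmentation checkpoint (downloads automatically on first use)
model = YOLO('yolov8n-seg.pt')

# Train for a single epoch first as a sanity check, then increase epochs
model.train(data='config.yaml', epochs=1, imgsz=640)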
So let me show you how to do it from your Google Colab environment. Please remember to upload your data before doing anything in Google Colab, otherwise it's not going to work. For example, here you can see I have many directories; one of them is data, and within data you have labels and images, exactly the same directories I have over here. I have already uploaded my data into my Google Drive; please remember to do it too, otherwise you will not be able to follow everything we're going to do now. That's one of the things you need to upload, and then also remember to upload the config.yaml file, the same file I showed you on my local computer; you also need this file here. The only thing you will need to edit is the path, because now you need to specify the location of your data in Google Drive, and I'm going to show you exactly how to locate your data in your Google Drive.

Now let's move to the Jupyter notebook. Obviously I'm going to give you this notebook, it will be in the GitHub repository of today's tutorial, so you can just use it; I'm going to show you how to execute every single cell and exactly what everything means. The first thing I'm doing is connecting my Google Colab environment with Google Drive, because remember, we need to access data from Google Drive, so we definitely need to allow Google Colab to access it. I'm just going to select my account, then scroll all the way down and press allow; that's going to be pretty much all, we just need to wait a couple of seconds.

Now let's continue. What I'm going to do now is define this variable, data_dir, which is the location of my data in my Google Drive. Please mind the way this path is structured: the first part is content, then gdrive, then My Drive, and then the relative path to my data. If you are not completely sure where you have uploaded your data, what you can do is run an ls, like I'm doing right now, and it's going to list all the files you have in the root directory of your Google Drive; from there, just navigate until you reach the directory where you have uploaded your data. In my case it is My Drive, computer vision engineer, image segmentation yolov8, and then data; that's exactly where my data is located in Google Drive, and this is exactly what I have over here. Once you have located your data, the only thing you need to do is edit this cell and press enter.

So everything is ready. Now I'm going to install ultralytics so I can use YOLOv8 from the notebook; this is going to take a few seconds, but it will be ready in no time. Something else you need to do in your Google Colab is go to Runtime, change runtime type, and make sure it says GPU. Make sure you are using Google Colab with a GPU, because if you are not, this is all pretty much pointless; just remember to check it before you start all this process, because otherwise you will need to run absolutely everything again.
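The Colab cells described so far might look roughly like this; the Drive path is hypothetical, so replace it with wherever you uploaded your own data:

# Cell 1: mount Google Drive so the notebook can read your dataset
from google.colab import drive
drive.mount('/content/gdrive')

# Cell 2: hypothetical location of the uploaded dataset;
# run !ls /content/gdrive/MyDrive to find where yours actually is
DATA_DIR = '/content/gdrive/MyDrive/image-segmentation-yolov8/data'

# Cell 3: install ultralytics inside the Colab runtime
!pip install ultralytics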
So let's continue: I have already installed ultralytics, and now I am going to run this cell. If you notice, this is exactly the same code I have over here in my local environment: I'm just defining a model and then training it. What I need to do now is just press enter; also mind that I have specified the location of my config file. Now I'm going to run a training for 10 epochs. This is also going to take some time; although we are using a GPU, it is going to take a few minutes, so what I'm going to do is wait until this is completed, pause my recording here and fast-forward the video until this process is done.

Okay, the training process is now completed; we have trained this model and everything is just fine, and you can see the results have been saved under runs/segment/train2. The only thing we need to do now is get the results we got from this training: the weights, the metrics, the different plots, because what we need to do now is analyze this training process and validate that everything is fine. The easiest way to do it is by running this command, which copies all the content of the directory where the results were saved into our Google Drive. Remember to edit this path, this location, because you want to copy everything into a directory in your Google Drive; just make sure the location makes sense, then execute this cell and everything will be copied to your Drive.

Now let me show you my Google Drive. I have already executed this cell, so everything is in my Google Drive: this is the runs directory which was created when I ran that cell, and under the segment directory we have train2. These are all of our results, the results we are now going to analyze, so what I'm going to do is download this directory, and once we have it on our local computer we are going to take a look at all the plots and all the metrics, and I'm going to tell you exactly what I usually do in order to validate this training process.

Everything is now downloaded, so let's take a look at these files; I'm just going to copy everything onto my desktop (I need to do some cleaning, by the way). These are all the results we got from this training process, and you can see this is a lot of information: we have many different files, we have the weights over here, a lot of everything. So let me give you my recommendation about how to do this evaluation, this validation, from all these plots and results. I would recommend you focus on two things: one of them is this plot with all of these metrics, and then I'm also going to show you how to take a look at the predictions on these images.
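A hedged sketch of the training cell and the copy-back step might look like this; the paths are hypothetical, and the runs directory location assumes the notebook's default working directory:

# Cell 4: train for 10 epochs on the GPU runtime (config path is hypothetical)
from ultralytics import YOLO

model = YOLO('yolov8n-seg.pt')
model.train(data='/content/gdrive/MyDrive/image-segmentation-yolov8/config.yaml',
            epochs=10, imgsz=640)

# Cell 5: copy the run directory (weights, plots, metrics) back to Google Drive
!cp -r /content/runs '/content/gdrive/MyDrive/image-segmentation-yolov8/'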
But for now let's start here. You can see this is a lot of information, a lot of metrics, and you could definitely knock yourself out analyzing everything you have here, you could go crazy analyzing all of these plots, but I'm going to show you a very simple and very straightforward way to do this analysis, this validation. This is something I have already mentioned in my previous videos on YOLOv8, on how to train a model and how to validate it: take a look at what happens with the loss function, with all the plots related to the loss. As this is a semantic segmentation type of algorithm, I would say look at the segmentation loss, at the training loss and the validation loss.

Long story short, just make sure the loss function goes down. If your loss function is going down, it's likely things are going well; it's not a guarantee, maybe things are not really going that well and the model doesn't perform well, that may happen, but I would say that if the loss function is going down it's a very good sign. If, on the contrary, your loss function is going up, I would say you have a very serious problem: there is something seriously wrong with your training process, your data, your annotations, something. I'm talking about something amazingly wrong; if your loss function is going up, I don't know what's going on, but something is going on. Do you see what I mean? Having a loss function which is going down is not a guarantee of success; you may have a situation where you haven't trained a good model and your loss function is going down anyway, but I would say it's a very good sign. At the very least, your training loss and your validation loss should go down, and I'm talking about a downward trend: for example, here we have a few epochs in which the loss goes up, and that's okay, that's not a problem, we are looking for a trend, and that's exactly what we have in this situation. So that's my recommendation on how to do this validation on all the metrics we have over here: for now, focus on these two and make sure they are going down.

Then, in order to continue with this validation, we are going to take a look at what happens with our predictions, how this model is performing on some data, and for this we are going to look at all of these images. You can see these are some batches: these are some of our labels, our annotations, for these images, and then these are some of the predictions for the same images. For example, I'm going to show you these results, the first image, and you can see that this image, again, is not our predictions but our data, our annotations, our labels.
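If you prefer checking the loss trend numerically rather than eyeballing the plots, the downloaded run directory also contains a results.csv you can read. This is a small sketch, assuming the segmentation loss columns are named train/seg_loss and val/seg_loss (column names can vary slightly between ultralytics versions):

import pandas as pd

df = pd.read_csv('train2/results.csv')          # path to the downloaded run directory
df.columns = [c.strip() for c in df.columns]    # older versions pad column names with spaces

# A healthy run shows both columns trending down across epochs
print(df[['epoch', 'train/seg_loss', 'val/seg_loss']])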
You can see that there are many, many missing annotations. For example, in this image we only have the mask for one of the ducks: we have one, two, three, four, five ducks, but only one of them is annotated. We have a similar behavior here, only one of the ducks is annotated; here it's something similar, only one of them is annotated, and the same happens for absolutely every single one of these images. So there are a lot of missing annotations in the data we are currently looking at.

If I look at the predictions now, these are the same images but with our predictions, and we can see that, even though we had a lot of missing annotations, the predictions don't really look that bad. For example, in this case we are detecting three of the five ducks, so we have an even better prediction than the annotation we have over here. It's not a perfect detection, it's not 100% accurate, but I would say it's very good, and definitely better than the data we used to train this model. That's what happens with the first image, and if I take a look at the other images I can see a similar behavior: this is the data we used for training this algorithm, these are the predictions we got for these images, and so on; it's exactly the same situation for this image as well.

So my conclusion, by looking at these images and these predictions, is that the model is not perfect, but I would say it performs very well, especially considering that the data we are using to train it seems to be far from perfect and has a lot of missing annotations, a lot of missing objects. That's also another reason why I don't recommend you go crazy analyzing these plots: when you are analyzing these plots, remember that the only thing you're doing is comparing your data, the data you used to train this model, with your predictions. And if the thing you are comparing against has a lot of errors, a lot of missing objects and so on, then that comparison becomes a little meaningless; it doesn't really make a lot of sense. The plots are going to give you a lot of information, but you are going to get even more information by analyzing these prediction results.

This is also a very good example of what happens in real life when you are training a model in a real project, because remember that building an entire dataset which is 100% clean and absolutely perfect is very, very expensive. So this is a very good example of what happens in real life: usually the data you're using to train the model, to train the
algorithm, has a few errors, and sometimes there are many, many errors. So this is a very good example of what this validation process looks like with data which is very similar to the data we get in real life, which in most cases is not perfect. My conclusion from this evaluation, from this validation, could be to improve the data: take a look at what's going on with the data, and the next step would be to improve it. By looking at these results, one of the ways in which I could improve this data is by using the predictions I'm getting instead of the annotations I used to train this model. Do you see what I mean? If the predictions we are getting are even better than the annotations, maybe our next step is to use these predictions in order to train a new model. So by analyzing all of these results you are going to make decisions on how to move forward, and this is a very good example of how this process looks in a real project. This is pretty much what happens when you are working in a company, or if you're a freelancer delivering a project for a client: there are errors, things happen, and you need to make a decision given all the information you get from this analysis. So that's going to be all in order to show you this very simple and straightforward way to validate this training process, to draw some conclusions about what's going on and to make some decisions about how to move this project, or this training process, forward.

Now let me show you something else within this directory: the weights folder. This is where your weights will be located, because if you are training a model it's because you want a model you can use to make predictions, to make inferences, and this is where your models will be saved. This is something I have already mentioned in one of my previous videos on YOLOv8: you will have two models, one of them is called last.pt and the other one is best.pt. The way it works is that, when you are training a model, at the end of absolutely every single epoch you are updating your weights, you are updating your model, so at the end of every epoch you already have a model which is available and which you could use if you wanted to. last.pt means you are getting the last model, the model you got at the end of your training process; in this case I trained the network for 10 epochs, if I remember correctly, so this is the model we got at the end of the tenth epoch. best.pt means you are getting the best model trained during the entire training process. If I show you the metrics again, you can see that we have many metrics related to the loss function and other metrics related to the accuracy, to how this model is performing, and the way YOLOv8 decides which is the best model, in this case for a semantic segmentation type of problem, may be related to the loss function, maybe it's taking the model with the minimum loss, or it may be related to some of these accuracy metrics, maybe it's taking
the model for which you got the maximum precision, for example, or the maximum recall, or something like that; I'm not 100% sure, I would have to look at the documentation. But the way it usually goes is that last.pt is the last model you trained, the one from the end of your training process, and best.pt is your best model, decided under some criteria. That's basically how it works, and what I usually do is take last.pt, because I consider that last.pt takes into consideration way more information: we are taking more data, more of everything, through the whole training process, so if you take the last model you are summarizing more information. That's the way I see it, so usually I take last.pt. And that's pretty much all in order to show you what validating this model looks like.

Now let's move to the prediction: let's see how we can use this model in order to make inferences, to make predictions. Let's go to the PyCharm project of today's tutorial; this is a Python script I created in order to do these predictions, and it is called predict.py. This is what we're going to do: I'm going to start by importing, "from ultralytics import YOLO", and then I am going to define the model path, the model we are going to use, which in our case will be last.pt from these results, from this directory, so I am going to specify something like this... last.pt. Then let's define an image path, the image we are going to use in order to get our inferences; the image will be
from the validation set. I'm just going to choose a random image, something like this one, so I am going to copy and paste it here. This is the image we're going to use in order to test this model, to make our predictions. Now I'm going to import cv2 as well, because I'm going to read this image and then get its shape, so this will be something like: image, and then image.shape. Now the only thing we need to do is get our model, by doing something like YOLO and then the model path, and then we are going to get the results by calling the model on our image. That's all we need to do in order to get our results, our inferences, but now let's do something else: I am going to iterate, "for result in results", and then let's take a look at the masks, at the predictions. I'm going to iterate like this: for j, mask in result.masks.data, and then I am going to say something like mask.numpy() times 255, and this is our mask. Then I am going to resize it to the size of the image, so I'm going to input the mask, and then, if I am not mistaken, the order is W and then H; that's just the way it works, that's how we need to do it in order to resize this prediction back to the size of the original image. Now the only thing we need to do is call cv2.imwrite and save it; the name will be something like output, this is only a test so we don't really need to go crazy with the name, let's call it output.png, and this will be our mask. That's pretty much all; let's press play and see if everything is okay or if we have some error.

Okay, I did get an error, and that's because we need to enumerate: I forgot the enumerate. We are not actually using j, so I could just iterate over the masks directly, but let's do it like this. Okay, now everything ran smoothly, we didn't get any error, and if I go back to this directory I can see the output we got. Now, in order to make 100% sure everything is okay and this is a good mask, a good prediction, I'm going to make an overlay; I'm very excited, I don't know if you can tell, but I'm very excited. I'm just going to take this image over here, then go back and take the original image, and I'm going to do an overlay: I'll raise this one to the top, align these two images together, and now let's apply a transparency and see what happens. You can see that we may not get a 100% perfect mask, but it's a very, very good mask, especially considering the errors we detected in our data. So this is amazing, this is a very good detection, a very good result.

So this is going to be all for this tutorial; this is exactly how you can train a semantic segmentation model using YOLOv8, the entire process, from how to annotate the data to how to train the model, how to validate it and then how to make predictions. Thank you so much for watching this tutorial; my name is Felipe, I'm a computer vision engineer, and these are exactly the type of projects I make on this channel.
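Putting the whole walkthrough together, a sketch of predict.py could look like this; the model and image paths are hypothetical, and writing one output file per mask is a small variation on the single output.png used in the video:

# predict.py: a sketch of the inference script walked through above.
import cv2
from ultralytics import YOLO

model_path = './runs/segment/train2/weights/last.pt'   # hypothetical path to your weights
image_path = './data/images/val/duck.jpg'              # hypothetical validation image

img = cv2.imread(image_path)
H, W, _ = img.shape

model = YOLO(model_path)
results = model(img)

for result in results:
    if result.masks is None:          # no objects detected in this image
        continue
    for j, mask in enumerate(result.masks.data):
        # Each mask is a (h, w) tensor of 0/1 values; scale to 0-255 for saving
        mask = (mask.cpu().numpy() * 255).astype('uint8')
        # cv2.resize takes (width, height), so W comes first
        mask = cv2.resize(mask, (W, H))
        cv2.imwrite(f'./output_{j}.png', mask)   # one file per detected mask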
So if you enjoyed this video, I invite you to click the like button, and I also invite you to subscribe to my channel and to take a look at the other videos I have published so far; for example, this is the video YouTube thinks is best for you, so if I were you I would definitely check it out. This is going to be all for today, and see you on my next video.
Info
Channel: Computer vision engineer
Views: 33,574
Id: aVKGjzAUHz0
Length: 46min 25sec (2785 seconds)
Published: Mon Apr 10 2023