TensorFlow 2 Custom Object Detection Model (Google Colab and Local PC)

Video Statistics and Information

Captions
Hey, welcome back — it's Ben again. Today we're looking at how to train a custom object detection model for TensorFlow 2. We're going to do this two different ways: with Google Colab, and on our local PC. I'll put up timestamps, and there are chapters below if you want to skip ahead to one of them, but there are some common steps shared by both, so we'll do those first and then split off into the two separate versions. If you've never used Google Colab before, it's an online tool built around Jupyter (Python) notebooks that gives you free GPUs and CPUs in the cloud. It's really great for model training, because if you don't have an NVIDIA GPU, like me, training goes way faster, and it runs in the background without eating your own system resources.

Before we get started: I have two previous videos on this topic for TensorFlow 2. In the first we installed and set up TensorFlow, and in the second we showed how to detect on videos. I recommend watching those first, since we're going to reuse the same environment for a lot of the setup, but otherwise feel free to follow along anyway. The project I'm in right now is just a copy of the one from those two videos, and the same goes for my conda environment — I like to make copies when I start another tutorial — but you could do this in the same directory you made for the other ones.

The first thing we need to do is collect images, because to train a model it has to have something to learn from. I'm going to make a model that detects LEGO bricks, because I like LEGO. I've already collected a bunch of pictures — three different colors of 2x4 bricks — and you can scroll through the whole set. It's a good idea to have around 200 or more pictures: the more training data you can provide, the better your model will be. Fair warning, my model is not going to work that well, because I don't have that many pictures and I'm not going to train it for long, just for the purposes of this tutorial. If you're training on your own objects, get as many pictures as you can. The general rule of thumb is that about 80 percent of your pictures go in a train folder and 20 percent go in a test folder. I'm going to copy mine into the project, into an images folder to keep things a little more contained, so now we have an images folder with our test and train images.
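If you haven't split your pictures yet, a small script can do the 80/20 shuffle for you. This is just a minimal sketch I'm adding here, not something from the video — it assumes your unsplit pictures are sitting in a hypothetical images/all folder and moves them into images/train and images/test:

    import os, random, shutil

    src = "images/all"  # hypothetical folder holding all unsplit pictures
    files = [f for f in os.listdir(src) if f.lower().endswith((".jpg", ".jpeg", ".png"))]
    random.shuffle(files)

    split = int(len(files) * 0.8)  # 80% train, 20% test
    for subset, names in (("train", files[:split]), ("test", files[split:])):
        dest = os.path.join("images", subset)
        os.makedirs(dest, exist_ok=True)
        for name in names:
            shutil.move(os.path.join(src, name), os.path.join(dest, name))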
You'll also see some XML files in there — that's because I've already done the next step. We need to annotate all of our images, because we have to tell the training process "in this picture, this object is right here." For that we'll use a tool called labelImg. To install it, just run pip install labelImg (it looks like I already have a copy), and then run labelImg to get this window.

In labelImg, open the directory your images are in — yours might be somewhere different — so for train we select that folder and the images appear in the file list. I also recommend changing the save directory, otherwise you'll have to confirm where to save every single time; I'm saving into the same place, so we're reading from train and saving to train. Now we annotate each image: on the left click Create RectBox, or press W to bring it up automatically, then click and drag a box around the object. I'm going to call this one blue_brick, because it's a blue brick — label yours however you want, that's just how I do mine. Ctrl+S saves, and you can see down here it wrote an XML file. You can use Next Image and Prev Image, or the A and D keys, to move backwards and forwards. Mine all look labeled already because it's pulling in the old annotation data, but you do the same thing we did for the first one: press W, drag, and label. This one's a red brick labeled red, this one's green, and if an image has multiple objects you just keep creating boxes — label them all blue if they're the same, or mix different colors or objects in one image, like this one with both a red brick and a blue brick. That's the gist of it: work through the whole file list until everything is annotated, and do it for both your test and your train images. In my test folder — the one with fewer images — the XMLs are there too, and the train folder has all of its XMLs. I was going to delete an extra XML, but I don't actually see where it is, so we'll leave it in there; I'm sure that will be fine.
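For reference, installing and launching the annotation tool is just two commands, run inside the same environment you've been using for everything else:

    pip install labelImg
    labelImg

Each saved annotation is a PASCAL VOC-style XML file that sits next to its image, which is what the conversion script in the next step expects.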
Now we need to convert those XML files into CSV files, because that's the format the TFRecord step wants. I'm going to bring over a script I have — all the data and scripts I'm using today will be on my GitHub repo, linked in the description — called xml_to_csv, and as the name suggests it takes the XML files and converts them to CSVs. It looks for a folder named images with test and train underneath it, which is why I put everything under that top-level images folder. Make sure you're in the right directory (models/research/object_detection), then run python xml_to_csv.py, and if we look inside the images folder we now have test_labels.csv and train_labels.csv. That part's pretty easy.

The next script we need is generate_tfrecord — this builds the training files we'll need — and it's also in the repo you can download. For your own model you will have to edit this file: it has a class_text_to_int function, and this is where your labels come back into play. If the row label equals blue_brick it returns 1, if it's red_brick it returns 2, and so on. If you were training on, say, different types of pasta, you might have shells as 1 and penne as 2 — it's pretty straightforward, just make sure you use the exact same labels you used when annotating, because the script looks for those names. I specifically typed blue_brick, so that matches up here. The order doesn't have to correspond to anything from labelImg, as long as every label appears; in the next step, when we build the label map, the order will matter, but here just list them all. If you have more labels than me, add another branch — say a yellow_brick that returns the next number, 4 — and if you have fewer, remove one. In my case it's already set up for my three classes, and nothing else in the file should need to change.

Now we run this script — I left the commands we need right in the file, and you run it twice, once for test and once for train. The command is python generate_tfrecord.py, passing in the CSV we made, the directory the images are in, and an output path for the new .record file. You might need to edit the commands if your paths are different, but if you're following along with the same layout these should be fine. When the test run finishes we get test.record, and running the train version gives us train.record. Awesome.
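To make that concrete, here's roughly what the edited piece of generate_tfrecord.py looks like for my three brick classes, along with the two invocations. The flag names follow the common version of this script, so check the copy in the repo if yours differ:

    # Inside generate_tfrecord.py: map each annotation label to an integer ID.
    def class_text_to_int(row_label):
        if row_label == 'blue_brick':
            return 1
        elif row_label == 'red_brick':
            return 2
        elif row_label == 'green_brick':
            return 3
        else:
            return None

    # Then run it once per split, e.g.:
    # python generate_tfrecord.py --csv_input=images/test_labels.csv  --image_dir=images/test  --output_path=test.record
    # python generate_tfrecord.py --csv_input=images/train_labels.csv --image_dir=images/train --output_path=train.record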
The next step is to download a pre-made model to train from. We're piggybacking on work someone has already done, which is really how most new models are made — if you had to start from the very beginning, the model would have to relearn how to even see an image at all, so starting from an existing model makes life a lot easier. We're going to the TensorFlow 2 Detection Model Zoo, which we've visited in the other videos. In the table there's a column showing roughly how fast each model is against its relative performance, so pick whichever you like — I'm using the EfficientDet D0 one. Right-click it and copy the link address, then go back to the model downloader script we used in the previous tutorials: paste the link in, update the tar filename to match, and run it, either from the command line or from your IDE. It might take a minute. I hit a "no such file" error — it did download, but I'd copied the filename wrong and was missing a letter, so I fixed the name and ran it again (it may try to download twice now, which is fine). Besides the .tar.gz we now also have the extracted folder — you can see one of my old models, the Faster R-CNN, and next to it our new EfficientDet. Very nice.

Now that we have the model, we need to set up a config file. In the object_detection folder, go to configs, then tf2 (since we're using TF2), and find the config that matches the model — the EfficientDet D0 one in our case. Make a copy of it, put it one level up where we've been keeping our other files, and open it. There are a number of things to change. The first is the number of classes: this model originally had 90 classes, but we only have 3, the three LEGO brick colors, so change that number to match however many you have. Next is the fine_tune_checkpoint, which tells training where to start from — you can see it has a "PATH_TO_BE_CONFIGURED" placeholder. We want our new model's checkpoint: inside the downloaded EfficientDet folder there's a checkpoint directory, and we want the path to ckpt-0 in there, so copy that path (I'll grab the content-root one) and paste it in. When you put your path in, make sure all the slashes are forward slashes and not backslashes, otherwise you may get an error about something truncating.
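After those two edits, the relevant lines of the config look roughly like this — the checkpoint path below is just an illustration (yours should point at wherever you extracted the downloaded model, forward slashes included):

    num_classes: 3
    ...
    fine_tune_checkpoint: "efficientdet_d0_coco17_tpu-32/checkpoint/ckpt-0"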
If you do see that error, either try changing your path to the absolute path — the one that starts at C:, then Users, and so on — or make sure all the slashes are facing to the right. Next, change fine_tune_checkpoint_type to "detection". Then we change the input paths further down — there's a lot to look at here. Under train_input_reader, the input_path needs to point at the train.record we made, so copy its path and paste it in. The test one goes under eval_input_reader — a slightly different name, but that's the one for our test set — and its input_path points at test.record.

We'll come back down here in a second, but one more thing to change is the batch_size. This controls how intensive training is on your system — roughly how many images it looks at at once, I believe. If you have an NVIDIA GPU with CUDA cores you can set this higher, but if you don't, like me, training runs on your CPU and system memory, so I recommend going as low as you can at first. I'm setting it to 2, because I've run this at 8 before and it more or less froze my computer by using all of my memory and CPU at once — not a good time. Below that there's also num_steps, the number of steps training will take. You can set this to whatever you want; I like to keep the default and just monitor the training, but there is such a thing as overtraining, which we'll talk about a bit later, so if you think you'll overtrain, feel free to turn it down so training doesn't go that far.

The next thing we need is the label_map_path, a line we skipped over: it points at a label map, which we're going to create ourselves. In the previous video, if you remember, we had to find the path to an existing label map; now we make our own. I have one already prepared, which I'll paste into the same directory and show you how we use. You can call it label_map or just labelmap, but the important part is the .pbtxt extension. A label map is a series of small blocks: for every object you want your model to detect there's an item block with an id and a name, and these have to be the same IDs and names we used in generate_tfrecord — blue_brick is 1, red_brick is 2, green_brick is 3, so the label map says exactly the same thing. If you have more classes, add more item blocks just like these; if you have fewer, remove one. I'll have this example label map on my GitHub as well.
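Here's what that label_map.pbtxt looks like for my three classes — swap in your own names and IDs, keeping them identical to what you put in class_text_to_int:

    item {
        id: 1
        name: 'blue_brick'
    }
    item {
        id: 2
        name: 'red_brick'
    }
    item {
        id: 3
        name: 'green_brick'
    }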
Now that we have the label map, we also want its path, so copy it and put it into the config file wherever label_map_path appears — it's the same path in both the train and eval sections. With that, we have everything we need to start training on our local PC. Technically we're also ready to train on Google Colab, but since I'm already in this environment we'll continue from here and switch over to Colab afterwards — skip ahead via the timestamps or the chapters in the description if you just want that part.

To start training we'll use a command I'll also have written out on GitHub. We're running python model_main_tf2.py, a file in this object_detection directory, and passing a pipeline_config_path — the config copy we just edited, which is sitting directly under object_detection, so change the path if yours is elsewhere — and a model_dir, which is where training checkpoints get written. I like to point it at a training folder; you don't have to create it ahead of time, it gets made automatically. The last flag just logs to standard error, which is good to have. This takes a while to start, and... we already hit an error: failed to open the label map. I had a feeling those paths might be a little problematic. The label map is right here under object_detection, so since we're running from that same directory I'm going to strip the extra path and just use the bare filenames, and I'll do the same for the other paths in the config — again, this may differ depending on where you keep things, so try it and see how it goes. Running it again — it might complain about the paths I removed, but so far it looks good. You'll see all sorts of text start scrolling by, including warnings and even errors, but as long as it keeps moving we should be okay; it genuinely takes a few minutes before training actually begins. I'm going to open Task Manager so we can watch how intensive this gets on the system, and I'll come back once training actually starts.

Okay, it looks like training has started. You'll see these big blocks of losses — localization loss, classification loss, and so on — and the number we're really paying attention to is the classification loss, which is roughly how good the model is getting at actually detecting the objects in our training images.
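For reference, the local training command boils down to something like this — the config filename is whatever you named your copy, and the paths assume you're running from models/research/object_detection:

    python model_main_tf2.py --pipeline_config_path=ssd_efficientdet_d0_512x512_coco17_tpu-8.config --model_dir=training --alsologtostderr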
You can see my CPU and memory have started to kick up — we're using a pretty high percentage of my RAM and most of my CPU. For that batch size we might have been able to go up to 4, but again, this is why I don't like training on my PC: I basically can't do anything else on the computer while it's training because everything gets so slow, which is why I like Google Colab, which we'll get to in a minute. Over time you'll see more steps show up — don't worry if it looks frozen, the steps only print about every 100 steps — and we'll keep checking that classification loss. In fact we just got a new step, and the classification loss is already down to 0.6, which is great.

While we wait, let's bring up another tool we can use alongside this. I'm opening an Anaconda terminal in the same base environment I've been using, then cd-ing into the directory our project is in — PycharmProjects, then this model tutorial project (I have a lot of copies of these), then models/research/object_detection. Now we're going to use TensorBoard; I don't actually remember the command offhand, so as a little sneak peek of the Colab notebook I'm grabbing it from there and running it in this terminal. TensorBoard launches a little mini web app where we can watch training more visually. If the tensorboard command doesn't work for you: first, make sure you're in the same environment you've been doing all the other work in, and second, try pip install tensorboard. There it goes — TensorBoard 2.10.1 is up at a localhost address.
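If you'd rather not dig the command out of the notebook like I did, it's typically just this, pointed at the same model_dir you gave the training command:

    tensorboard --logdir=training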
Awesome — we copy that address, and a little web app opens, though it doesn't actually have anything to show yet, because we need to wait for some checkpoints and more training to happen. If we look over at the project, the training folder has appeared and things are starting to show up in it: right now just one checkpoint, and we'll get more as training continues, at which point we should be able to refresh this page. It looks like it needs a little more — we got a new one, but the training has stagnated a bit — so we'll let it run for a while. Actually, we'll be able to view this on Google Colab too, so let's switch over and see how to do it there.

All right, welcome to Google Colab. I'll have this notebook in my GitHub repository as well; the code is heavily copy-pasted from a lot of other repos and even the official TensorFlow materials — I've trimmed it down and arranged it the way I like, but feel free to use other notebooks that are out there. If you've never used Colab, it's basically about running little code blocks one at a time. The first thing we do is connect to a runtime up at the top, which gives us access to some CPU cores, a GPU, and RAM — once it's connected you can see the stats over here: about 12 GB of RAM to play with, which is pretty nice, and a GPU, which is also very nice. Under Runtime, Change runtime type, make sure GPU is selected so we're fully utilizing the resources we have. Whenever you open Colab you'll have to rerun the cells you've run before, because it's almost like setting up a new environment every time, so we go through these code blocks one by one and get TensorFlow installed here — fortunately for us that's pretty quick, just four cells. A lot of this is the same as the first tutorial, where we downloaded the GitHub repo and TensorFlow itself; I'll queue up the next cells so they run once the earlier ones finish. When the initial setup is done, go back and verify that nothing broke — you'll normally get one error here, which is okay, don't worry about it — and the last cells run the same tests we ran in the first tutorial, just making sure everything installed correctly over on Colab. All 24 tests ran, so that's good.
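Those setup cells generally boil down to the standard TF2 Object Detection API install — my notebook's exact cells may differ slightly, but the gist is:

    !git clone --depth 1 https://github.com/tensorflow/models
    %cd models/research
    !protoc object_detection/protos/*.proto --python_out=.
    !cp object_detection/packages/tf2/setup.py .
    !pip install .
    !python object_detection/builders/model_builder_tf2_test.py   # the verification tests from the first tutorial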
Now we get to pulling in our data. We're going to use Kaggle, which is a dataset website, and I've already created a dataset we can use — you can make your own too. In my dataset I have my test images, including all the XMLs, the train images, and a few files we'll need to bring along: the generate_tfrecord script (I don't think we actually need xml_to_csv anymore, but it's in there), the label map, and the two CSV files. If you want to create your own, making a Kaggle account is free: go to Create, then New Dataset, drop all your files in, give it a name, and that's really all there is to it. The only other thing we need from Kaggle is an API token, so Colab can connect to it: up in the corner go to Account — if you end up on Profile, make sure you switch to Account — and click Create New API Token, which saves a small file containing your username and API key. I'm going to show mine on screen, but don't worry, I'll get rid of it afterwards; those two pieces of information are what we need in a moment.

Now we run this next block, and as my notes say, for some reason this step usually reports a failure — typically an lxml error — but you can just continue anyway; it took me forever to figure out that it's fine, so don't worry if you see it. Next is where you put in the information we just got: your username and your key. I'm pasting mine in and removing the placeholder brackets — and again, I'm going to delete my key, so don't try to use it, put your own in there. Running that cell sets variables for the environment, and then we use Kaggle to download the dataset. If you're not using mine — and you probably don't want to — put in the username of the person who made the dataset (probably yours) and then the name of the actual dataset; Kaggle turns the display name into a slug with dashes, so something like "macaroni detection" becomes macaroni-detection, or if it's one word it stays as-is. Trust me on that. I also have a list command just to confirm the download actually worked, and then we unzip it. You can see all the files from the dataset are now in the directory — generate_tfrecord, train, test, the label map, and the CSVs — along with the things that come with Colab by default. So that's cool.
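In cell form, the Kaggle part of the notebook amounts to something like this — the username, key, and dataset slug below are placeholders, so substitute your own:

    import os
    os.environ['KAGGLE_USERNAME'] = 'your_username'   # from the downloaded kaggle.json token file
    os.environ['KAGGLE_KEY'] = 'your_api_key'

    !kaggle datasets download -d your_username/your-dataset-name
    !unzip -o your-dataset-name.zip
    !ls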
Now we generate our TFRecords. If you don't want to put files like generate_tfrecord and the label CSVs into a Kaggle dataset, you can always bring them in through Google Drive — over here you can upload a file directly or connect your Drive — but since I had to make a dataset anyway, I figured I'd just put all the files in there, so that's what we're doing. This is the same stuff we just did on the local PC, if you watched that part: the same generate_tfrecord command with the train labels and train directory producing train.record, and likewise for test. Run it, give it a second, and you can see it created train.record and test.record under the content directory. Then we set a few variables — these might change for you if your files are named differently, but they're probably the same, especially if your label map ends in _map — saying that this path is train.record, this one is test.record, and the label map path is labelmap.pbtxt, the one that came in with our dataset.

Next we configure training, and this is where we can be a bit more liberal with the batch size: since we're using an actual GPU this time, I'm keeping it at 16, and you'll see training progresses much faster. The defaults of 8,000 training steps and 1,000 eval steps look good to me, so we'll use those. Make sure you actually run these cells, even though it doesn't look like much code is happening — they're setting values for those variables. Then we download the model, the same thing we did on the local machine: copy and paste the link from the model zoo — it's the same model I was using, so if you pick a different one make sure you paste the right link and name — run it, and it downloads and unpacks. Then, same as before, the fine_tune_checkpoint is set to the new model's checkpoint. I like the Colab method because everything's already set up and you can mostly just click run. After that there's another wget that grabs the config file from the object_detection/configs/tf2 folder — the same place we copied from on the local machine, except here we pull the file straight from the GitHub source — and we set a variable to that filename; again, just update the names if you're using a different model. Finally, a cell does the editing for us: it uses re.sub to rewrite the config with all the variables we've been setting, so the fine_tune_checkpoint, input paths, and so on point at our new files. We run that.
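That config-rewriting cell follows the usual pattern you'll see in object detection Colab notebooks. Here's a trimmed sketch of the idea — the values are illustrative (in the notebook they come from the earlier variable-setting cells), and it assumes the stock zoo config's placeholder paths contain "train" and "val", which the TF2 configs do:

    import re

    pipeline_file = 'ssd_efficientdet_d0_512x512_coco17_tpu-8.config'
    fine_tune_checkpoint = 'efficientdet_d0_coco17_tpu-32/checkpoint/ckpt-0'
    train_record, test_record = 'train.record', 'test.record'
    label_map, batch_size, num_classes, num_steps = 'labelmap.pbtxt', 16, 3, 8000

    with open(pipeline_file) as f:
        cfg = f.read()

    cfg = re.sub(r'fine_tune_checkpoint: ".*?"', f'fine_tune_checkpoint: "{fine_tune_checkpoint}"', cfg)
    cfg = re.sub(r'fine_tune_checkpoint_type: ".*?"', 'fine_tune_checkpoint_type: "detection"', cfg)
    # The placeholder train/val paths are how the two input_path lines are told apart here.
    cfg = re.sub(r'input_path: ".*?train.*?"', f'input_path: "{train_record}"', cfg)
    cfg = re.sub(r'input_path: ".*?val.*?"', f'input_path: "{test_record}"', cfg)
    cfg = re.sub(r'label_map_path: ".*?"', f'label_map_path: "{label_map}"', cfg)
    cfg = re.sub(r'batch_size: [0-9]+', f'batch_size: {batch_size}', cfg)
    cfg = re.sub(r'num_steps: [0-9]+', f'num_steps: {num_steps}', cfg)
    cfg = re.sub(r'num_classes: [0-9]+', f'num_classes: {num_classes}', cfg)

    with open(pipeline_file, 'w') as f:
        f.write(cfg)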
The next cell just prints the config afterwards to confirm everything was updated — you can see the input paths and label map are all good now. The last bits are the model directory and the config path; I'm keeping both at their defaults, though you can feel free to play with them — done this way it should just work for us. Now we run TensorBoard before we train. (Our local TensorBoard from earlier might be working by now, maybe not — it's a little finicky on a local machine, I've noticed, so don't be afraid if it isn't working for you; you can still monitor progress from the console, and you can see we've picked up a few more steps locally, though it's going quite slowly.) On Colab it should work, and the reason we launch it before starting the training is that otherwise we wouldn't be able to run the second code block. Running it basically starts the web server right inside the code block, which is pretty cool — the graphs you see at first, like the classification one, are from a previous run I've done — and it can take a second to load, so we'll check on it in a moment. Again, I hope it works; TensorBoard can be really finicky, which is kind of annoying.

Now for the actual training step: it's the same command we used on the local PC, just with all the paths already set up, so all we have to do is run it and we should be good to go. TensorBoard will show the same "no data" message until we get some more training done — that's why I like running it first, so we can just refresh it right here. This will take a few minutes to get going, so I'll meet you back here when it's ready. Okay — I forgot to record the transition of it actually kicking off, but you can see the training steps are now coming in, the same type of output we were getting with the local machine training, which is great, so from here it's basically a waiting game. TensorBoard doesn't have the loss graphs yet, but it does show the sample images it's currently training on, which I think is really neat to look at. A new step already came in down here — the loss went from 1 to 0.5, which is a pretty big jump. If you click up here you can see our usage: about 15 GB of GPU RAM — VRAM, really — which is nice; this is much better than running locally, and we're already going a lot faster. For comparison, the local machine is bouncing back and forth: 0.4, 0.3, 0.5, 0.4, then back up to 0.9, so it's going to take a while on the PC. I'm actually going to stop the local run, since I already have a model trained — Ctrl+C stops it, though it sometimes gets a little stuck, and if that doesn't work you can just close the terminal.
Closing the terminal will also close the session, so that works too. We'll let the Colab run continue for a moment, because the next step is going to be the same for both the local PC and Google Colab, just in their respective environments: we need to export our model as an inference graph. Let's go back to the local machine, since this works the same way. To export, we use another command I'll paste in — I'll have these commands online as well. It runs the exporter script, passing in the trained checkpoint directory, which comes from that training folder that got made automatically, the config file we made — I actually need to fix that argument, because our config file isn't inside training — and an output directory, which means it will save everything to a folder called inference_graph. Before I forget, I'll remove that stray "training" from the config path in the command, and run it. This command also takes a minute, because it has to spin everything up again, and then it starts building the inference_graph folder. It's essentially packaging everything into our new model, and the output format is very similar to the models you'd normally download, like our EfficientDet D0 or the Faster R-CNN.

While that's going, let's take a look at Colab: training there is already much more efficient, going from 1 to 0.5 to 0.3. I don't think I mentioned this yet, but when you train you generally want the loss to get down to around 0.1, or 0.05 if you can. If it reaches 0.1 and stays there for a while, that's when you should stop training. You don't want it to get too low, because then you overtrain: the model gets really good at detecting the objects, but only in the exact pictures you trained on, so it won't generalize to different situations. That's why you want to stop somewhere between 0.1 and 0.05. Let's see if TensorBoard will show us the graphs yet — looks like they're here. There are only three or four points so far, but as you saw when I first pulled this up, you'll eventually get a downward, almost logarithmic-looking curve, and once it starts to plateau is when you should cut it off. We're going to end early here, since I already have a trained version of this model we can still use, so let's stop it.

While that stops, back on the local machine the export has finished: we have a new folder, inference_graph, and it looks just like a pre-trained model — a checkpoint folder, a saved_model folder, pipeline.config — and the big thing we want is saved_model/saved_model.pb, which you might recognize from our webcam detection; it's the same thing. Colab is still stopping — it can take a second, it doesn't like to stop — and... there it goes. All done.
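For reference, the export step uses exporter_main_v2.py, which ships with the Object Detection API; the local command looks roughly like this, with the config filename being whatever your copy is called:

    python exporter_main_v2.py --input_type image_tensor --trained_checkpoint_dir training --pipeline_config_path ssd_efficientdet_d0_512x512_coco17_tpu-8.config --output_directory inference_graph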
Now the Colab side is going to be the same thing: it outputs to a folder called inference_graph as well, so we run that cell over here too, and once it's done I'll show you how to use our new model to detect objects in images and on a webcam, just like we did in the first and second tutorials. Okay, it looks like our graph has finished exporting, so now we download it — straight to your local computer, since that's probably where you'll be using it after training. I put some blocks in the notebook for this: one zips up the folder we just made (the same folder structure that got generated on the local PC), then downloads it, and you can see a little progress bar. This can take some time depending on how big the model is — the more images you trained on, the bigger the model tends to be.

I also have an optional code block that connects your Google Drive and copies the training data over. Remember that fine_tune_checkpoint? You can set it to a checkpoint from your own training. So if you're training on your local computer but need to stop because you have other work to do, you can stop training and start it again later by passing in the most recent checkpoint: originally the fine_tune_checkpoint was ckpt-0 from the downloaded EfficientDet, but once your own training produces new checkpoints, you can substitute the newest one — say, ckpt-2 from the training folder. That training folder gets created on Colab as well, which is why I give you the option to copy it to your Google Drive: you can always pull it back in later, which is very useful if you want to resume training after stopping for some reason.
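The download and Drive cells are short; here's a sketch of what they amount to, assuming the export went to /content/inference_graph (the Drive copy is the optional part):

    # Zip the exported model and pull it down to your own machine.
    !zip -r /content/inference_graph.zip /content/inference_graph
    from google.colab import files
    files.download('/content/inference_graph.zip')

    # Optional: keep the training checkpoints on Google Drive so a later run can resume from them.
    from google.colab import drive
    drive.mount('/content/gdrive')
    !cp -r /content/training /content/gdrive/MyDrive/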
Let's check on the download — still zipping... there we go, now it lets us download it. I'm actually not going to, because I already have one trained, so we'll close that, go back to the local project, and copy in a model I created previously, pasting it into object_detection. (There are a few files in there the IDE doesn't really know what to do with, which is fine.) Now let's test it. I can't use my webcam right now, because TensorFlow sometimes doesn't like it when OBS is using the webcam, so we'll just test on images — but you can do the webcam the same way we did in the second tutorial, with our new model, and I'll show you how. Open the detect-from-images script; we also need some test images that are actually related to our model — pictures of LEGO in my case, rather than the random ones we used before. In the test_images folder (I'll open it in File Explorer) I already have a folder called lego_test_images with a few pictures in it. As a reminder, the model I trained didn't have nearly enough pictures and I didn't train it for enough time, so we'll see some funky results, but you'll see it does detect some of the LEGO bricks, which is cool.

Now we use the detection command — really just the beginning part of the one from before. The script stays the same; the model path now points at our new model, and technically we need the saved_model folder inside it, so that's new_model/content/inference_graph/saved_model. The label map is the one we made; mine sits in the same directory as object_detection, so it doesn't need any extra path in front. And the test images are in test_images/lego_test_images. Hit run — it takes a moment, and the more pictures you have in there, the longer it runs — and then we get an output folder.

Let's review: there's our red LEGO brick, 99 percent confident — pretty good, I'd say. Depending on the size of the image the boxes overlap a little; it found the green brick, it definitely found the red brick, it thinks this black piece is something, and there are a few other stray detections — again, that's a result of me undertraining it, but it's clearly working for the classes it knows. The last one is a green brick; the label just gets cut off because of the image size. That works pretty well for not training it very long — I trained this the other day with too few images and not enough time — so our model works, and yours will probably work even better if you train it properly. You can do this with the webcam too, with the same kind of command: pass in the webcam detection script instead, with the same model path (that new content/inference_graph/saved_model) and the same new label map, and that's all it needs. The webcam won't cooperate for me right now, but it works the same way, and we saw it work on images.
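If you don't have the detection scripts from the earlier videos handy, here's a minimal standalone sanity check — this is not the script from the video, just plain TensorFlow 2 loading the exported SavedModel and printing detections for one image (the image path is a placeholder):

    import numpy as np
    import tensorflow as tf
    from PIL import Image

    # Load the exported model (the saved_model folder produced by the export step).
    detect_fn = tf.saved_model.load('inference_graph/saved_model')

    # The exported model expects a uint8 batch of shape [1, H, W, 3].
    image = np.array(Image.open('test_images/lego_test_images/example.jpg'))  # placeholder path
    detections = detect_fn(tf.convert_to_tensor(image)[tf.newaxis, ...])

    boxes = detections['detection_boxes'][0].numpy()
    classes = detections['detection_classes'][0].numpy().astype(int)   # IDs match the label map
    scores = detections['detection_scores'][0].numpy()

    for box, cls, score in zip(boxes, classes, scores):
        if score > 0.5:  # only reasonably confident detections
            print(f'class {cls} at {box} with score {score:.2f}')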
So yeah, it's pretty cool — that's all there is to making a custom model, and I know I say that after what's probably an hour of recording by now. Let me know if you have any questions or problems in the comments; I'm always down to help people out there, and if you help each other out I'll keep a pinned comment with a list of common fixes, since somebody always finds a new error — I know I always do when I make stuff like this. Otherwise, thank you for watching, and I hope to see you in the next one. See ya. [Music]
Info
Channel: Lazy Tech
Views: 39,211
Keywords: tensorflow, tensorflow 2, object detection, tensorflow 2 object detection, object detection api, numpy, protoc, protobuf, python, machine learning, object recognition, custom model, webcam object detection, detect objects on webcam, webcam, ai, Artificial intelligence, Neural network, Custom model, custom object detection, model training, google colab, nvidia, cuda, lego detection, how to make a custom model, kaggle, Image classification, Anaconda, Video object detection
Id: 8ktcGQ-XreQ
Length: 56min 47sec (3407 seconds)
Published: Sun Feb 05 2023