Train a Custom Object Detection Model using Tensorflow Lite Model Maker | Transfer Learning

Video Statistics and Information

Captions
In this video I will cover how you can create your own custom object detection model using TensorFlow Lite Model Maker. I will show you how to use a labeling tool to prepare your data for training the model, and how to train and export your model using a readily available Colab interactive Python notebook. Finally, we will port and run the custom model on a Raspberry Pi, with the option of using a Coral USB Accelerator for faster inferencing.

Let's start with the demo. This is a simple setup in which I am going to use a Raspberry Pi 4 and a Pi Camera to detect these objects. The pre-trained models can detect some of these objects but not others, so for the other objects I have created a custom object detection model. Let's run the custom model first and see the output. You can see that the custom model is detecting these toys but not the other objects, as it is only trained to detect these three types of toys; the label file for this custom model contains only three objects. As you can see, when I run a pre-trained model the toy objects are not detected and all the other objects are detected, because that is what it is trained for; the label file for this model contains 90 types of objects. Now let's make things work faster by attaching the Coral USB Accelerator and setting this parameter to 1 in the code. You can see that the FPS has gone up drastically for the custom model, because the model is now running on the Coral hardware instead of the Raspberry Pi CPU.

I took around a hundred pictures of the three toy objects in various positions and backgrounds and placed them in a folder called train; these images will be used to train the model. I also took 15 pictures and placed them in a folder called validate; these images will not be used for training, they are only for validating and testing the model. Now, using a tool called direct label, I marked the objects of interest in each picture. When you do this, the tool automatically creates an XML file with the same name, which contains the position coordinates of the objects of interest. This data, along with the image, is required for the training and validation process. I did this labeling for all the pictures in the dataset. More training data will result in a more accurate model, so you can add more pictures to improve the model. Now your train and validate folders should have an XML file associated with every picture.

As you can see, the TensorFlow team has already created a Colab notebook to simplify the whole process of custom model creation. The best part is that this interactive Python notebook is accessible from a browser and does not require you to install any software on your PC. The link to this notebook is provided in the description below. You can use this notebook as a black box, where you feed in your custom data and get your model as output by running the blocks of code in sequence. Now let's create our model using it.

The notebook requires you to sign in, so do that. Now the code cells start executing. Install the required packages; here I am using the nightly version, as I faced some issues with the default option. This installation happens in the resources allocated to you in the cloud. All the imports work fine, as all the required packages are in place. Not all cells are code cells; you can edit the information here to personalize this notebook. Updating the image path here pulls an image from my dataset. In this notebook the dataset is downloaded from a remote server; instead of that, you can upload your data to the notebook using this upload button here. Upload the zip file containing the data; this upload is available in the current session only. Once uploaded, we can unzip the file using this command, and the folder with its contents is now available for training.

Change the training and validation information as per our dataset, and select the model architecture for training from these choices. Now start the training; you need not change any of these parameters. We can evaluate the model with the validation set and see its accuracy metrics in just one line of code. Export this model as TensorFlow Lite with a name of our choice; you can see that the model is created and available for download. When a model is exported as a TF Lite model, some of its performance is traded off due to the reduction in size, so we can evaluate it again and see how the accuracy is affected. Now we can go ahead and download our model.
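For reference, the notebook workflow described above boils down to a handful of TFLite Model Maker calls. The following is a minimal sketch, assuming an EfficientDet-Lite0 architecture, folders named train and validate that hold the images together with their Pascal VOC XML annotations, and three placeholder label names; the paths, labels, epochs, and file name are illustrative, not the exact values used in the video's notebook.

```python
# Minimal sketch of the TFLite Model Maker object-detection workflow (run in Colab).
# In Colab, install first:  !pip install -q tflite-model-maker-nightly pycocotools
from tflite_model_maker import model_spec, object_detector

labels = ['toy_1', 'toy_2', 'toy_3']  # placeholder label names from the XML files

# Load training and validation data (images plus matching Pascal VOC XML files).
train_data = object_detector.DataLoader.from_pascal_voc('train', 'train', labels)
val_data = object_detector.DataLoader.from_pascal_voc('validate', 'validate', labels)

# Pick one of the EfficientDet-Lite architectures offered by Model Maker.
spec = model_spec.get('efficientdet_lite0')

# Transfer-learn on the custom data; the defaults generally work as-is.
model = object_detector.create(
    train_data,
    model_spec=spec,
    epochs=50,
    batch_size=8,
    train_whole_model=True,
    validation_data=val_data,
)

# Accuracy (COCO-style metrics) on the validation set, in one line.
print(model.evaluate(val_data))

# Export a quantized TFLite model with a name of your choice.
model.export(export_dir='.', tflite_filename='custom_model.tflite')

# Re-evaluate the exported TFLite model to see how quantization affected accuracy.
print(model.evaluate_tflite('custom_model.tflite', val_data))
```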
The notebook also provides utility code to check the performance of our newly created model. The test script requires a URL for an input image, so I am specifying the path of a random image from my dataset available on my server. The error is because the model name is not updated; let me correct it and run this cell again. If I reduce the detection threshold a bit, we can see that all our objects are detected. We can run this test with different images and see the results right here.

The rest of the notebook helps you create a version of the model that can run on a Coral USB Accelerator, which is used to speed up the inferencing process. Just run the code in sequence to create and download the Edge TPU version of your model. You can download the code and the models created for this project onto your Raspberry Pi using the GitHub link in the description below, and run the bash script to install the necessary TensorFlow Lite and OpenCV libraries. For more projects on robotics and IoT, check out this website. Stay tuned, and thanks for watching.
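As an illustration of the Edge TPU step mentioned above: compiling the exported TFLite model with Google's Edge TPU Compiler is what lets it run on the Coral USB Accelerator. A rough sketch of such a Colab cell, assuming the hypothetical file name custom_model.tflite from the earlier sketch, could look like this (these are the standard Coral package-repository commands, not necessarily the exact cells in the video's notebook):

```python
# Colab cell sketch: install the Edge TPU Compiler and compile the exported model
# for the Coral USB Accelerator. Assumes the file name custom_model.tflite from above.
!curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
!echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
!sudo apt-get update
!sudo apt-get install -y edgetpu-compiler

# Produces custom_model_edgetpu.tflite, which the Coral runtime can execute.
!edgetpu_compiler custom_model.tflite
```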
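On the Raspberry Pi side, the "set this parameter to 1" switch described in the demo corresponds to loading the Edge TPU delegate in the TFLite interpreter. Below is a hedged sketch of single-image inference with tflite_runtime, not the project's actual GitHub script; the file names, label list, USE_TPU flag, and score threshold are illustrative.

```python
# Sketch of single-image inference on a Raspberry Pi with tflite_runtime.
# NOTE: illustrative example only, not the script distributed with this project.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

USE_TPU = True            # switch between Coral Edge TPU and the Pi's CPU
SCORE_THRESHOLD = 0.4     # lower this if valid detections are being filtered out
LABELS = ['toy_1', 'toy_2', 'toy_3']  # placeholder label-file contents

if USE_TPU:
    interpreter = Interpreter(model_path='custom_model_edgetpu.tflite',
                              experimental_delegates=[load_delegate('libedgetpu.so.1')])
else:
    interpreter = Interpreter(model_path='custom_model.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height, width = input_details[0]['shape'][1:3]

frame = cv2.imread('test.jpg')
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
resized = cv2.resize(rgb, (width, height))
# For a float model you may also need to normalize pixel values to the expected range.
input_data = np.expand_dims(resized, axis=0).astype(input_details[0]['dtype'])

interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

# Output tensor order can differ between exports; inspect output_details to confirm
# which index holds the boxes, classes, and scores for your model.
boxes = interpreter.get_tensor(output_details[0]['index'])[0]
classes = interpreter.get_tensor(output_details[1]['index'])[0]
scores = interpreter.get_tensor(output_details[2]['index'])[0]

h, w, _ = frame.shape
for box, cls, score in zip(boxes, classes, scores):
    if score < SCORE_THRESHOLD:
        continue
    ymin, xmin, ymax, xmax = box  # normalized coordinates
    cv2.rectangle(frame, (int(xmin * w), int(ymin * h)),
                  (int(xmax * w), int(ymax * h)), (0, 255, 0), 2)
    cv2.putText(frame, f'{LABELS[int(cls)]}: {score:.2f}',
                (int(xmin * w), int(ymin * h) - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

cv2.imwrite('result.jpg', frame)
```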
Info
Channel: Jitesh helloworld
Views: 25,733
Keywords: tensorflow lite model maker, tensorflow lite model maker object detection, transfer learning, raspberry pi object detection, object detection tensorflow lite, custom object detection using tensorflow, google coral usb accelerator, google coral, google coral raspberry pi, train custom object detection tensorflow
Id: kjuStyfl6yk
Length: 7min 48sec (468 seconds)
Published: Sat Aug 20 2022