Running Custom YoloV5 TensorRT engine on Jetson Nano | Rocket Systems

Video Statistics and Information

Captions
This is the engine file we generated from the custom YOLOv5 model we trained in our previous video, and in this video we are doing the inferencing with that custom YOLOv5 TensorRT engine on the Jetson Nano. Hello and welcome to the Rocket Systems YouTube channel. In our previous video we discussed how to train a custom YOLOv5 model; we trained the model and it worked perfectly on the video we had. In this video we are going to convert that custom model into a TensorRT engine file on the Jetson Nano.

Before we start, I am first going to clone this repository. It contains all the files you need to convert your custom YOLOv5 model into a TensorRT engine. Let's open a terminal, move into the Documents directory, and clone it. In this video I am not going to explain which libraries you need to install or how to install them, because I have already covered that in my previous videos. If you still want to know how to install everything, I have provided a setup.txt file that lists all the libraries and Python modules you need to set up and configure your Jetson Nano for TensorRT.

I have also copied over the .pt file from our previous video, the model we trained on our custom dataset, and I am going to move it into this directory. Because this Jetson Nano is already set up, I don't need to go through all the setup commands, so I'll simply open the build_steps.txt file, which lists the commands you can use to build your YOLOv5 model into a TensorRT engine. Let's move inside the directory; this is the model file we want to convert to TensorRT.

First, let's run the command that generates the WTS file, passing -w with the model file and then last.wts as the output. This command converts the .pt file into a .wts file. It can take around five or six minutes, so we'll resume once it is done. Our WTS file is now generated, and here it is. Next we need to move inside the yolov5 directory, create a build directory inside it, and move into that build directory. Now we copy the last.wts file we converted in the previous step, so inside the build directory we now have last.wts.

Before we can run the cmake and make commands, we need one more step. Go back into the yolov5 directory, and inside the src directory you will find config.h. You need to edit config.h; there is a variable there, kNumClass. Because we are using our custom trained model, we have to set it to 3, since our model has three classes. If you have, say, four or five classes, you will have to update it accordingly. Once that is done, we minimize the editor and go back to the terminal.
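Summarized as commands, the steps up to this point look roughly like the listing below. The script name gen_wts.py, the yolov5 subdirectory, and the kNumClass constant in src/config.h are assumptions based on how common YOLOv5-to-TensorRT conversion repositories are laid out; only the file names last.pt and last.wts and the class count come from the video:

cd ~/Documents
git clone <repository-url>                  # repository shown in the video; URL placeholder here
cd <repository>
python3 gen_wts.py -w last.pt -o last.wts   # convert the trained .pt weights into a .wts file (script name assumed)
cd yolov5
mkdir build
cp ../last.wts build/                       # the .wts file goes inside the build directory
# then edit src/config.h and set the class count to match the custom model, e.g.
#   constexpr static int kNumClass = 3;     # constant name assumed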
Back in the terminal, inside the build directory, we can run cmake .., and once that is done we can finally run the make command. The make command is also now complete. After that, we need to run the command that builds our engine file, so let's maximize the terminal and paste it in. Let's rename the engine file to last.engine and the weights file to last.wts. Also note that because we are using YOLOv5 small, we pass "s" here; if you are using any other variant, such as large, you will have to change this accordingly. Now let's run the command. This can take around 10 to 15 minutes to complete, so we'll resume the video once it is done.

Our engine file is now successfully built. Just to recap, this is the engine file we generated from the custom YOLOv5 model we trained in our previous video, our custom YOLOv5 TensorRT engine file. Now that this is done, let's quickly test it with a Python script.

To test the model file, I have copied in the same .mp4 video file, and we also have app.py, so let's quickly open it. Inside this script, the model is referenced from this directory, along with the plugin and the engine file. Here we simply replace the video path with the .mp4 file we are going to run inferencing on. Another change we need to make is to the engine path, because our model file is named last.engine. Let's close this and try to run it.

One more change we have to make is to the YOLOv5 class list. If I open this particular file, you will notice this is the class list used for the pretrained model, but we no longer have a pretrained model; we have our own custom model. So I am going to comment this out, copy it, paste it below, and list the classes our model was trained on: roller is the first one, then car, and then phone. Those are the three classes in our model, which is why we changed this.

I think that's all we need to do, so let's open a terminal, run python3 app.py, and see how it goes. Perfect, it has loaded our model file, and you can see it is running very smoothly. This is amazing, because this is the custom YOLOv5 model we trained, and we are doing the inferencing with the custom YOLOv5 TensorRT engine on the Jetson Nano. It is detecting our objects perfectly, which means the whole process we did from start to finish worked. I guess that's all for this video. Thank you for watching; please like, share, and subscribe to the channel.
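The build and test steps condense to something like the commands below. The yolov5_det binary name, the trailing "s" flag for the small variant, and the variable holding the class names in app.py are assumptions; only the file names and the three class names (roller, car, phone) come from the video:

cd yolov5/build
cmake ..
make
./yolov5_det -s last.wts last.engine s      # serialize the .wts weights into a TensorRT engine; "s" selects the small model (binary name assumed)
# then, in app.py, point the engine path at build/last.engine, set the input video path,
# and replace the pretrained class list with the custom classes, e.g.
#   categories = ["roller", "car", "phone"]  # variable name assumed
python3 app.py                              # run inference with the custom TensorRT engine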
Info
Channel: Rocket Systems
Views: 3,801
Keywords: Running Custom YoloV5 TensorRT engine on Jetson Nano, Custom YoloV5 TensorRT engine on Jetson Nano, YoloV5 TensorRT engine on Jetson Nano, yolov5 custom dataset, tensorrt jetson nano, running inference with tensorrt, linux en jetson nano, custom object detection transfer learning, como configurar jetson nano, yolov5, qué es jetson nano, object detection jetson nano, yolov4 custom object detection, nvidia jetson nano developer kit, tensorrt yolov5, custom car detection
Id: 2NnifWGkEuI
Length: 8min 14sec (494 seconds)
Published: Fri Apr 21 2023