Running YoloV5 with TensorRT Engine on Jetson Nano | Rocket Systems

Captions
...you will notice that it has perfectly marked each person with an object ID, and it has also detected the bus, the persons, everything. Here as well you can see it has detected the persons, and it has even detected the tie, which means our engine file is working perfectly fine.

Hello and welcome to the Rocket Systems YouTube channel. In today's video we are going to run a YOLOv5 model by converting it into a TensorRT engine and then running it on the Jetson Nano. In some of our previous videos we have been doing inferencing and object detection using SSD-MobileNet models. SSD-MobileNet models work perfectly fine on embedded devices like the Jetson Nano or Jetson Xavier, but because they are very lightweight models, they are not very accurate. YOLOv5 models are a bit heavier, but they are very accurate at object detection. If you run a YOLOv5 model on the Jetson Nano without conversion or optimization, you will get a very low frame rate, and this is why it's always a good idea to convert or build your YOLOv5 model with TensorRT: once you have the TensorRT engine built from the YOLOv5 model, you will get much better performance on the Jetson Nano. So in this video we are going to take a pretrained YOLOv5 model, convert it into an engine, and then do inferencing on the Jetson Nano. Let's get started.

I'm connected to my Jetson Nano using the NoMachine software. If you want, you can install NoMachine on your Jetson Nano, or if you are more comfortable with VNC you can install VNC instead. If you don't know how to install NoMachine, you can watch some of my previous videos, where I have explained how you can set it up on your Jetson Nano and then install NoMachine on your Windows or Ubuntu laptop to connect remotely to your Jetson device.

Another thing I'm going to do is replace the default desktop environment that comes with the Ubuntu installation, because I don't really like it. I'm going to install xfce, and I would highly recommend you also install xfce on your Jetson Nano: xfce is very lightweight and feels very fast when you are connected to your Jetson Nano over remote software, whereas the default desktop environment becomes very laggy and delayed over a remote connection. If you don't know how to install xfce, you can watch my previous videos; I'll put the link in the description and you can follow the steps from there (a minimal install sketch is also shown below). So now I'm going to open the terminal, first update the system, then install xfce, and once it is installed we'll resume the video. Okay, so I have installed xfce. It's up to you whether you go with xfce or the default desktop environment, but I would definitely recommend xfce instead of the default one.

Okay, so when I was studying how to convert the YOLOv5 model into a TensorRT engine, this is the repository which helped me a lot, because it has the files through which I converted my YOLOv5 model into a TensorRT engine, and this other one is the official repository of the YOLOv5 model. What I've done is combine these two repositories into one repository, and inside it you will find all the files you need in order to convert your YOLOv5 model into a TensorRT engine.
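Before moving on, here is a minimal sketch of the xfce install step mentioned above. The video does not show the exact command, so the package names (xfce4, xfce4-goodies) and the plain apt workflow are assumptions based on standard Ubuntu 18.04 packages:

    # refresh the package index first
    sudo apt update
    # install the lightweight xfce desktop; xfce4-goodies is an optional set of extras
    sudo apt install xfce4 xfce4-goodies

After installing, you can pick the xfce session from the login screen or from your remote-desktop client's session settings.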
This repository also contains the Python scripts through which you can do inferencing and object detection on RTSP cameras, USB cameras, video files, or image files. So what I'm going to do is first clone this repository onto the Jetson Nano, and then I will explain how you can set up and install all the libraries we need in order to convert YOLOv5 models into a TensorRT engine.

Okay, let's go inside our Documents directory. I will open a terminal here and type git clone. Okay, so this is done; now let's move inside the repository. Before we can start the conversion from YOLOv5 to a TensorRT engine, we have to install a lot of libraries, so I have created a file, setup.txt; let's open it. Perfect. There are a few things we need to install: some normal apt packages, then some Python libraries, then PyCUDA, which is very important, and finally torch and torchvision. Let's install them one by one. I'll open a terminal and first update the system. Okay, now we have to install the apt packages, so let me quickly copy them, paste, and hit enter. This is done. Next we need python3-pip, which is the package manager for Python, because we will be installing Python packages. This is done; now let's also update the Python package manager itself. Okay, that's done too.

The basic installation is complete; now we need to install the Python packages, and here you need to make sure you install exactly these versions. I've gone through this many times, and when I installed other versions some packages were not compatible with others and I had a lot of issues, until I finally settled on these versions. So please make sure you install these exact versions (a hedged sketch of the commands follows below).

One issue is that numpy already comes pre-installed with the JetPack OS, but it is a different version, so let's first check which numpy version is installed. The installed version is 1.13.3 and we need 1.19, so make sure you uninstall the pre-installed version first and only then install the one we need; if you install the new version on top of the old one, you might run into issues. It says permission denied; let's try it with sudo, I'm not sure why it's showing permission denied. Okay, I think it has now uninstalled numpy, but let's make sure: I'll try to import it again, and because it says no module found, it is indeed uninstalled. Now we can run python3 -m pip install numpy==1.19.0. It's downloading version 1.19.0, and it has installed, so let's open the Python interpreter and check the version. Now it's correct.

Now let's install the other packages as well. We'll install pandas; okay, this is already installed. Next is pillow; okay, pillow is also installed. Next we install PyYAML; okay, also installed. Next is scipy; okay, scipy is also installed. Next we have to install psutil, and for this particular package we can install the latest version; we don't need to worry about pinning it.
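Here is a hedged sketch of the Python package setup described above. Only the numpy pin (1.19.0) is stated explicitly in the video; the exact pins for the other packages come from the repository's setup.txt, so they are left unpinned below as placeholders:

    # python3-pip is the one apt package named explicitly; the rest of the apt list comes from setup.txt
    sudo apt update && sudo apt install python3-pip
    python3 -m pip install --upgrade pip

    # JetPack ships numpy 1.13.3; remove it before installing the required 1.19.0 (-y skips the prompt)
    sudo python3 -m pip uninstall -y numpy
    python3 -m pip install numpy==1.19.0

    # remaining packages; substitute the exact versions listed in setup.txt
    python3 -m pip install pandas pillow pyyaml scipy psutil tqdm imutils

A quick check that the right numpy is now active:

    python3 -c "import numpy; print(numpy.__version__)"   # should print 1.19.0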
Okay, so psutil is also installed. Next let's install tqdm; okay, that's installed as well. The last package we need is imutils; okay, imutils is also installed. Most of the Python packages we need are now in place.

Next we need to install PyCUDA, which is also very important for our scenario. In order to install PyCUDA we need to export a few paths first, so let me quickly copy this, clear the terminal, paste it here, and then copy the other line as well. Now we can simply install PyCUDA with this command. It can take some time, and during the installation you might see some warning or error messages, but just ignore them; at the end it will show you that PyCUDA is installed. Make sure to copy and paste the whole export path: if you try to type it yourself you might make a mistake, the paths will not be exported properly, and the PyCUDA installation will probably fail. So copy and paste the export lines and only then install PyCUDA; you can in any case download this whole repository and copy all the commands from the setup file (a hedged sketch of these exports is shown below). Let's wait for this to install; it can take maybe 10 to 15 minutes, and then we'll resume the video. Okay, PyCUDA is now installed, and you will see that a few warnings and error messages appeared, but we can safely ignore them because at the end PyCUDA installed successfully. So this step is also done.

Next we need to install seaborn. You can install seaborn using the pip package manager, but that takes a lot of time, so the easy way is to install it with the apt package manager. Let's copy this command... and seaborn is now installed too.

The last installation we have to do is torch and torchvision. On Ubuntu or any other regular system you can easily install torch using pip, but on the Jetson you cannot do that, because the Jetson has a different architecture, so NVIDIA has published its own page explaining how to install torch and torchvision. I have included all those steps in this file as well, but if you want to follow their guide I'll put the link in the description; either way, the steps are the same. I'm going to install torch 1.10 and torchvision 0.11. I have tested these versions and they work perfectly fine on Jetson devices, but you can install other versions if you want. Let's first download the torch wheel file; for this I'll move inside the Downloads directory, because I like to keep all downloads there. Now we can install torch with pip3 install, so let's copy this and paste it here. Okay, torch is now installed; next is torchvision. First let's clone its branch; okay, it's cloned, now let's move inside the directory and simply run this command to install torchvision as well. Let's copy it and paste it here. Okay, so torch and torchvision are both installed, and this completes all of our requirements: all the libraries and Python packages we need in order to build the YOLOv5 model into a TensorRT engine.
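Here is a hedged sketch of the PyCUDA and torch/torchvision installs described above. The CUDA path assumes JetPack 4.x with CUDA 10.2, the torch wheel filename is illustrative, and the torchvision tag v0.11.1 is an assumption; take the exact export lines, wheel URL, and versions from setup.txt and NVIDIA's PyTorch-for-Jetson page:

    # expose the CUDA toolchain to pip so PyCUDA can build (path is an assumption for JetPack 4.x)
    export PATH=/usr/local/cuda-10.2/bin${PATH:+:${PATH}}
    export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
    python3 -m pip install pycuda        # expect warnings; takes roughly 10 to 15 minutes on the Nano

    # seaborn via apt is much faster than building it through pip on the Nano
    sudo apt install python3-seaborn

    # torch 1.10: install the aarch64 wheel downloaded from NVIDIA's page (filename below is illustrative)
    cd ~/Downloads
    pip3 install torch-1.10.0-cp36-cp36m-linux_aarch64.whl

    # torchvision 0.11 is built from source against that torch; see NVIDIA's guide for the full prerequisites
    git clone --branch v0.11.1 https://github.com/pytorch/vision torchvision
    cd torchvision
    sudo python3 setup.py install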
If you want, you can also install jetson-stats; it's a very good tool for monitoring your Jetson hardware performance, so I'll quickly install it as well. Perfect. So I'll install this library, then we'll reboot the system, and then we will start the process of converting YOLOv5 into a TensorRT engine. It looks like it needs sudo. Okay, once this is installed I'll reboot the system, and then we will begin the main process.

Okay, so our requirements are all complete, and I'll close this file now. Inside the cloned repository you will find another file, build_steps.txt. It contains the list of commands we need to run in order to convert our YOLOv5 model into the engine file. For this video I'm just going to use YOLOv5s, which is the small version of the YOLOv5 model, but you can also go with the large or extra-large versions if you want. Ideally, on the Jetson Nano I would recommend the small or the nano version, because these are the lightweight variants; the large and extra-large models are very heavy and will run very slowly even after conversion to a TensorRT engine. So we will proceed with the YOLOv5 small version, and I've provided both the YOLOv5 nano and YOLOv5 small weights in the repository, so you don't need to download the model separately.

Let's start converting these YOLOv5 models into an engine file. The basic flow is that we first convert the .pt file into a .wts file, then use this .wts file to build the engine file, and then, using the engine file, we can do inferencing over images and so on (a hedged sketch of these commands follows below). Let's open the terminal here, and I will copy this first command, which converts my .pt file, the YOLOv5s checkpoint, into the .wts file. I've also provided this .wts file in the repository, but for now I'll delete it and generate it again. This conversion can take around four or five minutes, so let's wait for it to finish. Okay, so this is done.

The next thing we need to do is move inside the yolov5 directory, which is this one; inside it we create a build directory and copy the generated .wts file into that build directory. So from the terminal let's move inside the yolov5 directory, make the build directory, move inside it, and copy the .wts file here. One more thing to check: we are using a pretrained model trained on 80 classes, but if you are using your own custom model, you need to update the number of classes to match it. For that, go inside src, open the config file, and find the variable kNumClass; whatever number of classes your model was trained for, maybe one, two, or three, update this variable accordingly. In our case, because we are using the pretrained model, we don't need to change anything, so I will close this. Back in the terminal we run cmake, and then the make command. So cmake is done; now we run make. Okay, this is also done. Now let's go back to our file.
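Here is a hedged sketch of the .pt to .wts conversion and the CMake build described above. The script name gen_wts.py, its flags, and the directory layout follow the tensorrtx convention this repository is based on; check build_steps.txt for the exact paths and arguments used in the video:

    # 1. convert the PyTorch checkpoint into a plain-weights .wts file (script name and flags per tensorrtx convention)
    python3 gen_wts.py -w yolov5s.pt -o yolov5s.wts

    # 2. build the TensorRT detector inside the yolov5 directory
    cd yolov5
    mkdir build && cd build
    cp ../../yolov5s.wts .       # path back to the generated .wts is an assumption
    cmake ..
    make -j$(nproc)

For a custom model, edit kNumClass in the config file under src/ before running cmake, so the engine is built for your own number of classes.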
Now we can simply build the engine, so I will use this command and build the engine file. Building the engine can take 10 to 15 minutes, so we'll resume once that is done. Okay, the engine file has now been built successfully; we can go inside the build folder, and here is the engine file. This is the TensorRT engine, and it will perform better than running YOLOv5 directly on your Jetson device.

To test whether this engine file is working properly, let's run this command; it will run inference over the image files, and then we'll see whether it's doing the detections or not. Let me quickly clear everything and run it. Perfect. What it has done is this: we have some images stored inside this particular directory, one with some persons and a bus, and another with some persons as well. We have run inferencing over these two images, and the output is saved inside the build directory. If I open this bus.jpg, you will notice that it has perfectly marked each person with an object ID, and it has also detected the bus, the persons, everything; and here as well you can see it has detected the persons, and it has even detected the tie, which means our engine file is working perfectly fine.

And this is how you can convert your YOLOv5 model into a TensorRT engine and then run it on your Jetson Nano device (a hedged sketch of the build and test commands is shown below). That's all for this video. In the next video we will write a Python script through which we will do inferencing over a video file, a USB camera, or an RTSP camera. Thank you for watching; please like, share, and subscribe to the channel.
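Here is a hedged sketch of the engine build and the test run described above. The binary name yolov5_det and its flags follow recent tensorrtx releases ('s' selects the small variant, -s serializes the engine, -d runs detection); older versions ship a ./yolov5 binary instead, so confirm the exact command in build_steps.txt:

    # serialize the .wts weights into a TensorRT engine (takes roughly 10 to 15 minutes on the Nano)
    sudo ./yolov5_det -s yolov5s.wts yolov5s.engine s

    # run detection on the sample images; annotated outputs are written into the build/ directory
    ./yolov5_det -d yolov5s.engine ../images

Opening the generated output images (for example bus.jpg, as shown in the video) should show the bounding boxes and class labels drawn by the engine.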
Info
Channel: Rocket Systems
Views: 10,431
Keywords: Running YoloV5 with TensorRT Engine on Jetson Nano, Running YoloV5 with TensorRT Engine, Running Yolov5 on Jetson Nano, TensorRT engine on Jetson Nano, jetson nano, jetson, running inference with tensorrt, improving a model with tensorrt, yolov5, nvidia jetson nano developer kit, nvidia jetson nano, jetson nano projects, yolov5 on cpu, jetsonnano, object detection using tensorflow, what is new in tensorrt, yolov4 training, tensorrt python, object detection tensorflow
Id: ErWC3nBuV6k
Length: 22min 40sec (1360 seconds)
Published: Tue Apr 11 2023