Object Recognition with Jetson Orin Nano using YOLOv8 and RealSense

Captions
In this tutorial we will do object recognition with the Jetson Orin Nano and a RealSense camera using YOLOv8. Whereas the old Jetson Nano works only with JetPack 4.6, the new Jetson Orin Nano works with JetPack 5.1, which means we can use newer versions of various libraries and tools. The Orin Nano is also much faster, so it can be applied to a broader variety of applications. This is the first video on this channel in which the Jetson Orin Nano makes an appearance, so let's look at some of its specifications and compare it to the previous model.

The first difference is the GPU: the Jetson Nano used the Maxwell architecture, while the Orin Nano uses Ampere. Thanks to smaller transistors, significantly more of them are packed into a smaller area than before, which increases computation speed dramatically; compared to the previous model, the Orin Nano can handle some workloads dozens of times faster. The number of CPU cores is also increased from four to six, delivering nearly seven times the performance of the Jetson Nano. The memory is doubled on the Orin Nano, which allows multiple applications to run at the same time. Because of the upgraded GPU and CPU performance, power consumption increases by about 1.5 times; this could be a problem for small robots with limited battery space or for flying platforms with strict weight limits. The input voltage of the Orin Nano lies between 7 and 20 volts, which means we can power it directly from a three-cell or four-cell lithium-polymer battery. The Orin Nano also comes with a pre-installed cooling fan; nevertheless, its height is the same as the older model's, which may help with mechanical design.

Now let's install the libraries and packages required to run YOLOv8. First, install python3-pip; pip is a package management system used to install and manage software packages written in Python. Next, update the python-dateutil library; the dateutil module provides powerful extensions to the standard datetime module available in Python. Then install the ultralytics library. ultralytics requires a lot of other libraries, but all of them are installed with this single command. The command also installs torch and torchvision, but for Jetson we need a torch build made for the ARM aarch64 architecture, so uninstall torch and torchvision afterwards.

Move to the "Installing PyTorch for Jetson Platform" page, where the PyTorch installation instructions can be found, and go to the prerequisites and installation section. First, copy and execute the first command. Then copy and execute the second line, which installs all the Ubuntu packages PyTorch requires. The next line creates an environment variable pointing to the PyTorch wheel. The page shows an example for JetPack 5.1.1, but we are using JetPack 5.1.2, so we need to find out which PyTorch build is suitable for us. To do this, copy this part of the link and open that page, click on the JetPack version you are using, and select PyTorch; this is the PyTorch version we need. Now we can create the environment variable, being careful not to misspell the link. Finally, we have to run the last line, but we do not need every command in it, so we execute only the necessary parts. We have already installed NumPy and SciPy, so copy this part and execute it. Next, create an LD_LIBRARY_PATH environment variable; the dynamic linker uses LD_LIBRARY_PATH to find shared libraries before loading them into the address space of the process. Upgrade the protobuf package; "proto" stands for protocol buffers, which are language-neutral, platform-neutral, extensible mechanisms for serializing structured data. Finally, install PyTorch itself.
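The captions do not show the terminal commands for the initial package setup, so here is a minimal sketch of the steps described above, assuming a stock JetPack 5.1.x Ubuntu image:

sudo apt update
sudo apt install -y python3-pip           # pip: package manager for Python software
pip3 install -U python-dateutil           # dateutil: extensions to the standard datetime module
pip3 install ultralytics                  # YOLOv8; pulls in its dependencies, including torch and torchvision
pip3 uninstall -y torch torchvision       # remove the generic wheels; Jetson needs aarch64 builds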
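Likewise, a rough sketch of the PyTorch-for-Jetson steps described above, following the layout of NVIDIA's "Installing PyTorch for Jetson Platform" page; the wheel URL and the LD_LIBRARY_PATH value are placeholders that must be copied from the page for your JetPack release (5.1.2 in the video):

# Ubuntu packages required by PyTorch (the full list is on the NVIDIA page)
sudo apt-get -y update
sudo apt-get -y install libopenblas-dev   # plus the other packages listed on the page

# Placeholder: point this at the wheel that matches your JetPack version
export TORCH_INSTALL=https://developer.download.nvidia.com/compute/redist/jp/<jetpack-version>/pytorch/<torch-wheel>.whl

# From the page's final install line, execute only the parts still needed
# (NumPy and SciPy are already present as ultralytics dependencies):
export LD_LIBRARY_PATH=<path-from-the-page>:$LD_LIBRARY_PATH   # lets the dynamic linker find shared libraries
python3 -m pip install --upgrade protobuf
python3 -m pip install --no-cache $TORCH_INSTALL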
Now we are going to install torchvision. The installation process is described on the same page; click on the triangle next to the installation sentence. First, install the required libraries, then clone torchvision version 0.16. Even though this version is not on the list, it exists on GitHub; note that you have to replace the version string with the actual version number you want to install. Move to the torchvision directory. In this line the BUILD_VERSION variable is set to 0.16, and executing this command starts the torchvision build. With that, torchvision is installed.

Now we are going to install librealsense. Go to this page, clone the repository, and move to the installLibrealsense directory. Before executing the buildLibrealsense shell script we need to make one modification to avoid an error during the build: open the buildLibrealsense shell script inside the installLibrealsense folder and, at line 135, add this argument; it helps the compiler find where the Python executable is located. Now execute the buildLibrealsense shell script. Note that the memory is increased on the Orin Nano, so we can set the jobs argument to two to accelerate the build. Even though the script reports that the library has been installed in the /usr/local/lib directory, it is actually installed in a different location; this problem is explained in this issue, but there seems to be no solution at the moment. So move to the home directory, open the .bashrc file, go to the bottom of the file, and in this line change "lib" to "OFF". Open a new terminal and run the source command so the changes to .bashrc take effect. Now we can import pyrealsense2 successfully.

Now let's look at the code. Please download the YOLOv8 RS zip file from Google Drive and extract it to your home directory, then open the YOLOv8 RS Python script. Here the image size, format, and frame rate are defined. pipeline.start means we start pipeline streaming with the configuration set in the line above. The align utility performs a per-pixel geometric transformation based on the depth data. In these lines we define the YOLO model. In this line the program waits until a new set of frames becomes available; the frameset includes time-synchronized frames of each enabled stream in the pipeline. Here we get the aligned frames from the RGB and depth cameras. In these lines the obtained frames are converted to NumPy arrays. In this line we convert the depth image to a color map so that colored bounding boxes can be drawn on it later. Here inference is done. In this part we convert the obtained bounding-box coordinates from tensor to NumPy array format. In this line we obtain the index of the detected class. Here we draw a rectangle on the depth color map, and in this part we put text above the rectangle. In this line we obtain the inference result image from YOLO. Finally, in these lines we create windows with the results.

To execute the code, open a terminal, move to the YOLOv8 RS directory, and run the YOLOv8 RS Python script. After a while, two windows with the inference results will appear.
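A sketch of the torchvision source build described above; the 0.16.1 tag and the dependency list are assumptions based on the standard Jetson build instructions, so match the minor version to your installed PyTorch:

# Build dependencies for torchvision (assumed list)
sudo apt-get install -y libjpeg-dev zlib1g-dev libpython3-dev libavcodec-dev libavformat-dev libswscale-dev

# Clone the 0.16 branch even though it is not on the compatibility list;
# replace the tag with the exact version you want to install
git clone --branch v0.16.1 https://github.com/pytorch/vision torchvision
cd torchvision
export BUILD_VERSION=0.16.1      # version string used by setup.py
python3 setup.py install --user  # starts the build; this takes a while on the Orin Nano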
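The librealsense build relies on a helper repository and script that are shown only on screen, so the following is a rough sketch with placeholders; the -DPYTHON_EXECUTABLE flag is my assumption for the argument added at line 135, since that is the usual way to point CMake at a Python interpreter:

git clone <the-repository-shown-in-the-video> installLibrealsense
cd installLibrealsense
# At line 135 of buildLibrealsense.sh, add an argument so the build can find Python;
# assumption: something like -DPYTHON_EXECUTABLE=$(which python3)
# Also raise the script's jobs setting to 2 (the Orin Nano has enough memory) to speed up the build.
./buildLibrealsense.sh
# The Python bindings are not installed where the script reports, so edit the
# PYTHONPATH line at the bottom of ~/.bashrc as described (change "lib" to "OFF"), then:
source ~/.bashrc
python3 -c "import pyrealsense2"   # quick check that the module now imports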
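The captions walk through the script line by line without showing it, so below is a minimal self-contained Python sketch of the same pipeline; the yolov8n.pt checkpoint, the 640x480/30 fps stream settings, and all variable names are assumptions, not the exact contents of the YOLOv8 RS script:

import numpy as np
import cv2
import pyrealsense2 as rs
from ultralytics import YOLO

# Image size, format and frame rate for both streams
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)

# Start pipeline streaming with the configuration set above
pipeline = rs.pipeline()
pipeline.start(config)

# The align utility performs a per-pixel geometric transformation based on the depth data
align = rs.align(rs.stream.color)

# Define the YOLO model (assumption: the nano checkpoint)
model = YOLO("yolov8n.pt")

try:
    while True:
        # Wait until a new, time-synchronized set of frames becomes available
        frames = pipeline.wait_for_frames()
        aligned = align.process(frames)
        color_frame = aligned.get_color_frame()
        depth_frame = aligned.get_depth_frame()
        if not color_frame or not depth_frame:
            continue

        # Convert the obtained frames to NumPy arrays
        color_image = np.asanyarray(color_frame.get_data())
        depth_image = np.asanyarray(depth_frame.get_data())

        # Convert the depth image to a color map so colored boxes can be drawn on it
        depth_colormap = cv2.applyColorMap(
            cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)

        # Inference on the color image
        results = model(color_image)

        # Convert bounding boxes and class indices from tensors to NumPy arrays, then draw them
        boxes = results[0].boxes.xyxy.cpu().numpy().astype(int)
        classes = results[0].boxes.cls.cpu().numpy().astype(int)
        for (x1, y1, x2, y2), cls_id in zip(boxes, classes):
            cv2.rectangle(depth_colormap, (x1, y1), (x2, y2), (255, 255, 255), 2)
            cv2.putText(depth_colormap, model.names[cls_id], (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 2)

        # Annotated inference image from YOLO, plus the depth window with our boxes
        cv2.imshow("YOLOv8 result", results[0].plot())
        cv2.imshow("Depth with boxes", depth_colormap)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    pipeline.stop()
    cv2.destroyAllWindows()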
Info
Channel: robot mania
Views: 7,795
Keywords: robotics, python, ROS, deep learning
Id: xqroBkpf3lY
Length: 11min 47sec (707 seconds)
Published: Sun Oct 22 2023