Real Time Pose Estimation on Jetson Nano

Video Statistics and Information

Captions
Here in this demo you can see real-time pose estimation on the NVIDIA Jetson Nano: pre-trained models for human pose estimation, capable of running in real time on the Jetson Nano.

Hello everyone, welcome to the make2explore channel. First of all, thank you to all our subscribers, viewers and supporters. In this video let's see a new computer vision, machine learning and artificial intelligence based project: real-time body pose estimation on the NVIDIA Jetson Nano development board. The Jetson Nano is a small but powerful single-board computer developer kit that lets you run multiple neural networks in parallel for applications like image classification, object detection, segmentation and speech processing.

So without any delay, let's get started and see what body pose estimation actually means. Human pose estimation, or articulated body pose estimation, is a computer-vision technology that detects and analyzes human posture. It is an important problem in the field of computer vision: imagine being able to track a person's every small movement and do a biomechanical analysis in real time. The technology has huge implications; applications include video surveillance, assisted living, advanced driver-assistance systems (ADAS), sports analysis, and more.

The main component of human pose estimation is the modeling of the human body. The three most used types of human body model are the skeleton-based model, the contour-based model and the volume-based model. The skeleton-based model consists of a set of joints (keypoints) like ankles, knees, shoulders, elbows and wrists, plus limb orientations, comprising the skeletal structure of a human body; this model is used in both 2D and 3D pose estimation techniques because of its flexibility. The contour-based model consists of the contour and rough width of the body torso and limbs, where body parts are represented by the boundaries and rectangles of a person's silhouette. The volume-based model consists of 3D human body shapes and poses represented with geometric meshes and shapes, normally captured with 3D scans.

Now let us see what hardware we are going to use in this project. We have used the NVIDIA Jetson Nano development board, a very popular SoM (system-on-module), or you could say single-board computer: a powerful developer kit for getting started with AI and machine learning. This board is capable of deploying computer vision and deep learning applications such as image recognition, classification, object detection, localization and segmentation. It is connected to a Logitech C270 webcam, which gives a live streaming video feed to the Jetson Nano. The Jetson Nano will run Ubuntu with the JetPack SDK, and we will also use a Docker container. For networking we have fitted the AC8265 network interface controller module, which features dual-mode Wi-Fi and Bluetooth, so we can connect the Nano to our router wirelessly and access the internet. We will connect our laptop to the same Wi-Fi network: to control the Jetson Nano over SSH, it must be on the same local network as the laptop, via Ethernet or Wi-Fi. We will then access JupyterLab to get started with Python programming on it.

So much for the hardware setup and configuration. Now let's see all these steps in detail and prepare the Jetson Nano to estimate body poses, starting with the initial setup in display or headless mode. The Jetson Nano developer kit uses a microSD card as its boot device and main storage, so it's important to have a card that's fast and large enough for your projects. To flash the SDK image we need a high-speed UHS-I or UHS-II card; a 64 GB card is recommended, but 32 GB will also do for the time being. We have to flash this SD card with the operating-system image that will run on the Jetson Nano. NVIDIA JetPack is a comprehensive SDK (software development kit) for Jetson, for both developing and deploying AI and computer vision applications. To download it, go to NVIDIA's website, where instructions are given for preparing the microSD card on Windows, macOS and Linux. Before that, read the note given there: to prepare your microSD card you'll need a computer with an internet connection and the ability to read and write SD cards; there are different types of SD card adapters, which you can connect to your computer depending on the interfaces available.

Following NVIDIA's setup guide: step one, download the Jetson Nano Developer Kit SD card image from the given link and note where it was saved on your computer. You can download the latest OS-with-JetPack image for the different Jetson Nano variants (2 GB or 4 GB); click the download button and save the file. It is a big file, several gigabytes. Next, format the microSD card using SD Memory Card Formatter from the SD Association. After that, use your flashing software to write the Jetson Nano Developer Kit SD card image to the microSD card.

Once you have flashed the OS image to the SD card and inserted it into the card slot, it's time to turn on the Jetson Nano and do the initial boot setup, i.e. the first-time startup configuration: language, clock, username and password settings, etc. There are two hardware setups for performing this initial configuration: one with a display monitor, and one without, which is called headless mode. In earlier versions of JetPack a display monitor was mandatory, but since JetPack 4.2.1 the headless method is included, so even without a display you can do the initial setup through a serial terminal console by connecting the Jetson Nano to your laptop with a USB cable. Let's see both methods' connection diagrams in detail.

OK, let's see how to get started with the Jetson Nano using the display method, and how to connect it to Wi-Fi. In the hardware setup for the display method, the Jetson Nano is connected to a monitor with an HDMI cable, and, like any general-purpose computer, we connect a keyboard and mouse. We power the dev kit with a 5 V / 4 A DC power supply into the J25 power jack. The Jetson Nano can be powered either through the DC barrel jack or over micro-USB: among the interfaces on the board, number 3 in the picture is a micro-USB port that can power the Jetson with a 5 V / 2 A supply or act as a serial device input, and number 8 is the DC barrel jack for 5 V input. Next we connect our Logitech C270 webcam, connect the recommended power supply, and turn on the Jetson Nano. Follow the graphical setup shown at startup and perform all the first-time configuration: language, time, Wi-Fi, username and password settings, etc. Since we have fitted the AC8265 Wi-Fi network interface card, we can connect the Jetson to our router wirelessly and access the internet. Once you have set up the Jetson with Wi-Fi, execute the ifconfig command in a terminal to get the Nano's IP address. This is a one-time setup: once you have the IP, there is no need to connect the monitor, keyboard and mouse next time; instead you can log in to your Jetson Nano wirelessly from your laptop using its IP address.

So that was the standard method of setting up the Jetson Nano using a display monitor. But suppose you don't have a display monitor; how can you set up the Jetson Nano for the first time? No worries, there is another method, called headless mode, where you do the initial setup of the Jetson Nano without a display.
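Concretely, the one-time network setup described above boils down to a handful of terminal commands. The snippet below sketches them; the SSID, username and the ifconfig output are placeholder examples, and the nmcli lines are only needed if you skipped Wi-Fi setup during first boot.

```shell
# On the Jetson (first boot, GUI terminal or serial console):
#   nmcli device wifi list                                  # show nearby networks
#   sudo nmcli device wifi connect <SSID> password <pass>   # join your router
#   ifconfig wlan0                                          # note the inet address
#
# From the laptop (same network), log in with:
#   ssh <username>@<jetson-ip>
#
# Extracting the address from ifconfig output can be scripted; sample output:
sample='wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.104  netmask 255.255.255.0  broadcast 192.168.0.255'
jetson_ip=$(printf '%s\n' "$sample" | awk '/inet /{print $2; exit}')
echo "ssh user@${jetson_ip}"
```

Your router will usually hand the Nano the same address on every boot, but if it changes, the same ifconfig/awk pattern recovers it.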
In earlier versions of JetPack this feature was not available, but with the release of JetPack 4.2.1 it became possible. Let's see it in the next slide.

In headless mode you assemble the hardware as shown in the figure. In this setup the Jetson Nano is powered from the J25 power jack, since we need the micro-USB port to access the console for the initial setup. First fit the J48 jumper; this switches the J28 micro-USB connector into serial device mode. This is the headless method, so there is no monitor; instead we connect the Jetson to our laptop over USB serial to access the console. We power the Jetson Nano with the 5 V / 4 A DC supply into the J25 barrel jack. Once everything is connected, turn on the Jetson Nano; it will be detected as a COM port in your laptop's Device Manager. Open a serial session on that port and log in. Then you have to list the existing Wi-Fi networks; on YouTube you will find plenty of tutorials showing how to connect to a Wi-Fi network through the terminal, i.e. the CLI (command-line interface). Once you have connected to your home Wi-Fi network, the same network your laptop is on, find the IP address your router assigned to the Jetson Nano: just run the ifconfig command in the terminal. Once you have the IP, you will not need the USB serial cable again; on the next boot the Jetson Nano will automatically connect to your home network, and you can access it from your laptop wirelessly.

So that was the headless mode of configuration. We have spent some extra time explaining this method because it is very important for this project: we are going to use a Docker container, and we will use the JupyterLab server to interact with the Jetson Nano wirelessly from our laptop or PC. So let's move forward and learn more about Docker containers.

In this session we will look at the NVIDIA machine learning Docker container, but first, what is a container? A container is an executable unit of software in which an application and its runtime dependencies are packaged together into one entity. Since everything the application needs is packaged with the application itself, containers provide a degree of isolation from the host and make it easy to deploy and install the application without having to worry about the host environment and application dependencies. It is a kind of virtualization technology that makes it easy to develop and deploy apps inside neatly packaged, containerized environments. Comparing virtual machines and containers, containers are more lightweight.

Now, what is Docker? Docker is an open-source platform for creating, deploying and running containers. Docker is included in JetPack, so running containers on Jetson is easy and does not require any installation; you just pull the particular Docker image you want, such as the NVIDIA machine learning container, DeepStream, etc. A Docker container is a mechanism for bundling a Linux application with all of its libraries, data files and environment variables, so that the execution environment is always the same on whatever Linux system it runs, and between instances on the same host. Unlike a virtual machine, which has its own isolated kernel, containers use the host system's kernel, so all kernel calls from the container are handled by the host kernel.

A Docker container is composed of layers, which are combined to create the container. You can think of layers as intermediate images that each add some capability to the overall container. If you change a layer through a Dockerfile, Docker rebuilds that layer and all subsequent layers, but not the layers unaffected by the change. This reduces the time to create containers and also keeps them modular. Docker is also very good about keeping only one copy of the layers on a system; this saves space and greatly reduces the possibility of version skew, so that layers that should be the same are not duplicated. A Docker container is the running instance of a Docker image. You can find more information, such as why containers, what is inside them, and how to use them, on NVIDIA's documentation page; we will share the link in the description.

Next, what is NGC? NGC stands for NVIDIA GPU Cloud. NVIDIA NGC is a hub for GPU-optimized deep learning, machine learning and HPC (high-performance computing) software. NGC hosts containers for the top AI and data-science software, all tuned, tested and optimized by NVIDIA. Containers on NGC provide powerful, easy-to-deploy software proven to deliver fast results, allowing users to build solutions from a tested framework. On the linked page you will find information on getting started with NGC and on the catalog of containers optimized for different GPUs; please go through it for more details. All the links will be shared in the description.

Moving forward, let's see how to pull and run a Docker container from the NGC catalog. We have used the NVIDIA DLI "Getting Started with AI on Jetson Nano" container. On the catalog, search for your GPU's name, like Jetson Nano or Jetson Xavier. You will see there is a pose demo for the Jetson family as well, but that demo runs only on the AGX Xavier or Xavier NX, not on the Jetson Nano. So we can pull the base machine learning container, or l4t-ml, or the Deep Learning Institute course container, which is what we used at the time of recording this video, and then prepare it for pose estimation on the Jetson Nano developer kit. For example, the DLI "Getting Started with AI on Jetson Nano" container lists as prerequisites a microSD card, a USB camera such as the Logitech C270 webcam or a CSI camera, an internet connection for the Jetson Nano to download the Docker image, and the other items we have seen earlier.

Next, how to use the container. First, set the data directory: the data collected during the project is stored in a mounted directory on the host device, so data and trained models aren't lost when the container shuts down. The commands given assume the mounted directory is nvdli-data, so make sure you create it first. At the top of the page is the pull command, which you execute in a terminal to pull the Docker container from NVIDIA's GPU Cloud; you need internet data for this, because the image is a 1.39 GB file as shown there. When you execute the command the image is downloaded to your Nano, which takes some time depending on your internet speed. To run the container, use the container tag that corresponds to the version of JetPack / L4T installed on your Jetson, like 4.4 or 4.5; the docker run command will automatically pull the container if it is not on your system already. Use one command if you have a USB camera connected, and the other for a CSI camera such as a Raspberry Pi camera. An important note: if you have both CSI and USB cameras plugged in, or multiple USB cameras, adjust the device number accordingly; you can check with an ls on the video devices. Please do not use the CSI camera through the v4l2 interface. The meanings of the different flags are given there; use them according to your system. When the container is launched, the JupyterLab server starts automatically and a message similar to the documentation's example is printed; you can connect to it by pointing your browser at http://localhost:8888, substituting the IP address of your Jetson device if you are connecting from a remote host, which is exactly the case with the Jetson in headless mode. Note that the default password used to log in to JupyterLab is dlinano.

We also tried another container, NVIDIA l4t-ml, which contains all these OpenCV packages with Python 3 support. It is listed as supported on the Jetson Nano, but we were unable to launch the cameras in that container: it threw an error whenever a camera device was invoked. We will try to resolve that error and use that container in the next project, which will be hand pose estimation and classification on the Jetson Nano.

Now let's see the demo. First, let's prepare the SD card for the Jetson Nano. In Windows File Explorer you can see we have connected a 32 GB microSD card drive, which we have to format first. Do not use the default Windows formatter; use the SD Association's SD Memory Card Formatter. Please note: select the correct drive, because if you format the wrong drive its data will be permanently lost. Once the drive is formatted, we write the JetPack image, i.e. the operating-system image, to the SD card. For that we use the Etcher software: open Etcher, select the image file (or compressed image file) you want to write, then again select the correct drive, making sure you have selected the microSD card, and press the Flash button. The flashing process starts and will take some time; wait until it finishes successfully, then safely remove the SD card.

Now let's insert the microSD card into the Jetson Nano. Here is the slot; there is a latch-lock type mechanism, so push the card in gently and it will latch. Let's turn on the dev kit. We will look at the display method first: we have connected a display monitor, keyboard and mouse. In the display method, during the first boot you follow the instructions given in the GUI, configure everything with default settings, and select your Wi-Fi with the proper credentials. Once you get the Ubuntu welcome screen, go to the top-right corner and make sure that you have connected to
the Wi-Fi correctly. If you are connected to your Wi-Fi, go to the terminal and run the ifconfig command; you will get your Nano's IP address under wlan, since we are connected wirelessly over Wi-Fi. This is the IP assigned to the Nano by our router: our Nano's IP address is 192.168.0.104, yours may be different. Now you can log in to the Jetson Nano over SSH using this IP address from your laptop or PC, as long as the laptop is connected to the same Wi-Fi network. So that was the initial setup of the Jetson Nano using the display method.

Now let's see the headless method. We have connected the USB cable between the Nano and our laptop according to the diagram given earlier. Go to Device Manager on your Windows PC and you will notice the Jetson Nano listed as a COM device. Use serial communication software such as PuTTY, select the proper COM port, and log in to the Nano. After you connect with PuTTY you will see the same steps that appear during the graphical setup, but prompted from the console instead of a GUI: you will be asked to select language, keyboard type, region, username, password, time zone, and then a Wi-Fi network to connect to. You can skip the Wi-Fi setup if you want and later use nmcli, the command-line tool for controlling NetworkManager, at your convenience. Sometimes the device restarts after setup is completed; don't worry, just log in again using the same serial method. Once you have successfully logged in and connected to Wi-Fi, run the ifconfig command and you will get your Nano's IP address under wlan. Again, ours is 192.168.0.104; yours may be different. Now you can SSH into the Jetson Nano with this IP address from your laptop or PC on the same Wi-Fi network.

Let's start a new PuTTY session using the IP address we just copied: select SSH as the connection type instead of serial, paste the Nano's IP address into the host name field, and click Open. The first time, it will show a warning about the SSH key; just click Yes, then log in with the username and password you entered during setup. If everything was set up well, you will get another console prompt, and you can close the earlier serial session, and the serial interface too: when the Nano boots next time it will connect to Wi-Fi automatically, and we won't need the serial connection anymore. At this point it is recommended to update and upgrade the Jetson OS before going further.

Now let's see how to download and install the Docker container from NVIDIA GPU Cloud. Go to the link (all these important links will be shared on our GitHub as well as in the description of this video) and open the NGC catalog of containers. In the filter, search for the name of your GPU, like Jetson Nano or Xavier; all containers related to the query will be listed. We used the DLI course container; you can also see other containers like the l4t machine learning container, base l4t, etc. First let's look at the DLI "Getting Started with AI on Jetson Nano" container, originally built for the deep learning course of the same name. The pull command shows the link to download the container; you can copy it by clicking there. You should put the correct JetPack version for the system you are currently running: at the time of recording this experiment we used JetPack 4.5, US English version. Then go to the terminal again, paste the pull command, and press Enter. This command requires superuser credentials, therefore we need sudo. A "manifest unknown" error occurs when you enter a JetPack version different from the one currently running on your Nano dev kit, so just make sure that you entered the correct JetPack version.
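Collected together, the pull and run commands look roughly like the sketch below. The image name and tag pattern (dli-nano-ai, v2.0.1, and the r32.x L4T suffix) are examples matching the NGC catalog around the time of the video; check the catalog for current tags, and substitute your own data directory and camera device.

```shell
# Build the pull/run commands for the DLI container (tag is an example;
# match the r32.x suffix to your installed JetPack/L4T version).
L4T="r32.5.0"
IMAGE="nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-${L4T}"

PULL="sudo docker pull ${IMAGE}"

# USB webcam variant: mount the persistent data dir, pass /dev/video0 through.
RUN="sudo docker run --runtime nvidia -it --rm --network host \
  --volume ~/nvdli-data:/nvdli-nano/data \
  --device /dev/video0 ${IMAGE}"

echo "$PULL"
echo "$RUN"
```

Using the wrong L4T suffix here is exactly what produces the "manifest unknown" error mentioned above, since no image with that tag exists on the registry.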
The container will start downloading; it takes some time, since these images are usually gigabytes in size, and download time depends on your internet speed. OK, the download is complete: it took approximately 10 minutes on our system for the 1.39 GB file.

Let's move forward and see how to use the container. Step 1: set the data directory. The data collected during the course is stored in a mounted directory on the host device, so data and trained models aren't lost when the container shuts down. Let's create the data directory and name it nvdli-data. Step 2: run the container. Commands are given for both types of camera, USB and CSI; choose according to your hardware. We have used the Logitech C270 camera, so we run the USB camera command. Press Enter to run it, and you will see a message saying to allow 10 seconds for JupyterLab to start; it then shows the link and password for the JupyterLab server. Copy the address and note down the password, then go to your browser, paste the link and open it. You will be asked to enter the password to access JupyterLab; enter it and log in. That's it: you can see the JupyterLab launcher with all its options, like terminals, Python consoles, etc.

So far we have seen how to set up the Jetson Nano to run headless, pull the Docker container from NVIDIA's GPU Cloud, run it, and access Jupyter notebooks inside it. Our remaining task is to program the Jetson Nano to estimate body pose, using Python with various OpenCV and machine learning libraries. Let's see that in the next session. Go to the trt_pose GitHub page, which gives a detailed step-by-step procedure to detect human pose, including all the required prerequisites and getting-started documentation.

What does the trt_pose project cover? It is aimed at enabling real-time pose estimation on NVIDIA Jetson (you may find it useful for other NVIDIA platforms as well). Currently the project includes: pre-trained models for human pose estimation, capable of running in real time on the Jetson Nano, which make it easy to detect keypoints like the left eye, left elbow, right ankle, etc.; and training scripts to train on any keypoint task data in MS COCO format, which means you can experiment with training trt_pose for keypoint detection tasks other than human pose.

To get started with this project we have to go through all these steps. Step 1, install the dependencies: install PyTorch and torchvision (on NVIDIA Jetson it is recommended to follow the guide given at the link), do all those steps, then install the torch2trt module by following its steps, and next install the other miscellaneous packages listed. Step 2, install trt_pose: clone and install the trt_pose module. Step 3, run the example notebook, which will be listed in the tasks folder; in the Jupyter notebook you can see all these folders. The documentation provides two human pose estimation models pre-trained on the MS COCO dataset, which you have to download. Throughput in FPS is shown for each platform, Jetson Nano and Jetson Xavier; Xavier is more powerful than the Nano, so it has higher throughput. To run the live Jupyter notebook demo on real-time camera input, download the model weights using the links given and place them in the tasks/human_pose directory. In JupyterLab, go to files, then tasks, then human_pose; here are the weights we downloaded and placed earlier.

OK, now let's get the required dependencies, prerequisites and the mandatory Python / OpenCV packages for this JupyterLab environment. All these commands have to be run in a terminal inside JupyterLab: go to JupyterLab, open the terminal, and on that console run all of the installation steps.
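As a sketch, the dependency and install steps from the trt_pose README (as of the video's timeframe) can be collected into one reference script. This is meant to be run on the Jetson, inside the container's terminal, not on your PC; the script file name is our own choice, and the PyTorch/torchvision install is omitted since it follows a separate Jetson-specific guide.

```shell
# Collect the trt_pose dependency/install commands into a script for reference.
cat > install_trt_pose.sh <<'EOF'
#!/bin/sh
set -e
# 1) torch2trt (PyTorch -> TensorRT converter), built with plugins
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
(cd torch2trt && sudo python3 setup.py install --plugins)
# 2) miscellaneous packages
sudo pip3 install tqdm cython pycocotools
sudo apt-get install -y python3-matplotlib
# 3) trt_pose itself
git clone https://github.com/NVIDIA-AI-IOT/trt_pose
(cd trt_pose && sudo python3 setup.py install)
EOF
chmod +x install_trt_pose.sh
```

Expect the builds to take a while on the Nano; pycocotools in particular compiles C extensions during installation.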
In short, you have to clone the source code repository and install the required OpenCV / Python packages, like numpy, PyTorch, torchvision, Cython, scikit-learn, matplotlib, etc., from this terminal. We will not cover these steps one by one, since it is a time-consuming process; some packages, like scikit-learn, take very long to install, so you will have to do these steps on your own using Python package managers like pip. If you face any issues during this project, feel free to ping us on Telegram, or you can also send us an email at info@make2explore.com; we would be happy to help.

Now let's see our main code. Go to tasks, then the human_pose directory, and open the live demo notebook. This is our main body pose estimation code; it is self-explanatory and well commented, with details given for each step. To run any cell, press Shift+Enter, or click the run button to run the selected cell. Before running this notebook it is mandatory to download and install all the dependencies and prerequisites given in the documentation, otherwise you may face errors. To avoid camera errors, connect the camera before launching the container. This deep learning course container also includes notebooks to test your cameras: on the home screen, in the file explorer, you will see the hello_camera folder with two notebooks, one for a USB camera and one for a CSI camera. Use them to check that you are getting a live camera feed before going on to the pose estimation experiment.

Now let's see the final demo of the project. In this demo you can see real-time pose estimation on the NVIDIA Jetson Nano: pre-trained models for human pose estimation running in real time, which makes it easy to detect features like the left eye, left elbow, right ankle, right elbow, etc.

So in this way we have completed this project. If you face any difficulty in replicating this DIY project, feel free to ping us on Telegram or send us an email at info@make2explore.com; we would be happy to help. Thank you!
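As a footnote for readers following along in the live demo notebook: the first thing each camera frame goes through is a preprocess step before it is fed to the model. Below is a minimal sketch of that step in plain NumPy; the actual notebook does this with torch tensors on the GPU, and the mean/std constants are the standard ImageNet normalization that the pre-trained models assume.

```python
import numpy as np

# ImageNet normalization constants (assumed by the pre-trained pose models)
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(bgr_frame):
    """Turn an HxWx3 uint8 BGR camera frame (e.g. 224x224 from OpenCV)
    into a 1x3xHxW float32 batch, normalized per channel."""
    rgb = bgr_frame[:, :, ::-1].astype(np.float32) / 255.0  # BGR -> RGB, scale to [0,1]
    normalized = (rgb - MEAN) / STD                          # channel-wise normalize
    return np.ascontiguousarray(normalized.transpose(2, 0, 1)[None, ...])

# example: an all-black 224x224 frame
batch = preprocess(np.zeros((224, 224, 3), dtype=np.uint8))
print(batch.shape)  # (1, 3, 224, 224)
```

The notebook then runs this batch through the model to get part-confidence and affinity maps, parses them into keypoints, and draws the skeleton on the frame.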
Info
Channel: make2explore Systems
Views: 3,818
Keywords: Artificial Intelligence, AI, machine learning, NVIDIA Jetson Nano, Neural network, Python, Jetson Nano SBC, openCV, artificial neural network, Tensorflow, NVIDIA, Object Detection Projects, Engineering Projects, AI Projects, Machine Learning Projects, Jetson Nano Projects, openCV Projects, python projects, TensorFlow projects, Artificial Intelligence Projects, Embedded Systems Projects, Final Year Engineering Projects, pose Estimation, Pose Detection, IoT, Hand Pose Estimation
Id: y38Mze43w-A
Length: 38min 59sec (2339 seconds)
Published: Thu Oct 14 2021