How to Make a Simple Surveillance System Using YOLOv9 with Triton Inference Server

Captions
In this tutorial we will make a simple surveillance system that sends us an email when it detects a human. As the inference method we will use Triton Inference Server. Triton Inference Server can be used in various applications, so it is useful to learn how to use it.

Let's see how our surveillance system works. First, an image is obtained using a web camera. The image is then sent from the Raspberry Pi to Triton Inference Server via Wi-Fi; in this tutorial we are using an Ubuntu desktop as the server machine. After inference is done, the server sends the inference results back to the Raspberry Pi. If a human is detected in the image, the Raspberry Pi sends an email containing the image. Even though we have used an ordinary Raspberry Pi as the client, much smaller devices such as the Raspberry Pi Pico can also be used.

So what is Triton Inference Server? Triton Inference Server is open-source software that standardizes AI model deployment and execution across every workload. Here are several benefits and advantages of Triton Inference Server. The first is that it supports all training and inference frameworks: models from any major framework, such as TensorFlow, PyTorch, ONNX, TensorRT, XGBoost, and OpenVINO, can be used. The second is that Triton Inference Server performs well not only with NVIDIA GPUs but also with x86 and Arm CPUs and AWS Inferentia; it maximizes throughput and utilization with dynamic batching, concurrent execution, optimal configuration, and streaming audio and video. The third benefit is that Triton Inference Server is open source and is designed for DevOps and MLOps: it can be integrated into DevOps and MLOps solutions such as Kubernetes for scaling and Prometheus for monitoring.

Let's see how a Triton server works. This picture shows the Triton Inference Server high-level architecture. In a Triton server, all models are stored in a repository; the model repository is a file-system-based repository of the models that Triton will make available for inferencing. Inference requests arrive at the server via either HTTP or gRPC (gRPC is a modern, open-source, high-performance remote procedure call framework that can run in any environment) and are routed to the appropriate per-model scheduler. The scheduler for each model optionally performs batching of inference requests and then passes the requests to the backend corresponding to the model type. Triton supports a backend C API that allows Triton to be extended with new functionality, such as custom pre- and post-processing operations or even a new deep learning framework. Triton also provides Prometheus metrics indicating GPU and request statistics; the metrics are provided in plain-text format and are available by accessing the metrics endpoint.

Now let's talk a little bit about the mail sending procedure. For sending mail, SMTP is used. SMTP stands for Simple Mail Transfer Protocol, an Internet-standard communication protocol for mail transmission. Mail servers and other message transfer agents use SMTP to send and receive mail messages. User-level email clients typically use SMTP only for sending messages to a mail server for relaying, and typically submit outgoing emails to the mail server on port 587 or port 465; in this tutorial we will use port 587. In our Python program we are using the smtplib library, Python's built-in module for sending emails to any Internet machine with an SMTP or ESMTP listener daemon. For security purposes, the SMTP connection should be encrypted so that the message and login credentials are not easily accessed by others. Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are two protocols that can be used to encrypt an SMTP connection; in this tutorial we will use TLS. TLS is a more recent version of SSL and fixes some security vulnerabilities in the earlier SSL protocols.

Now let's see the programs we use in this project. To create an inference server we will use this YOLOv8 Triton repository, and for YOLO itself we will use this YOLOv9 repository. Git clone both of these repositories.
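As a preview of the client side, the request flow described above (preprocess an image, send it to Triton over HTTP, read back the output tensor) can be sketched as follows. This is only a sketch under assumptions: it assumes the tritonclient package ("pip install tritonclient[http]") is installed, and the model name, tensor names, server address, and 640x640 input size ("yolov9_onnx", "images", "output0", "192.168.0.10:8000") are illustrative, not necessarily what the repository uses; check the actual config.pbtxt for the real ones.

```python
import numpy as np

def preprocess(image_hwc_uint8, size=640):
    # Resize by index sampling (keeps this sketch free of a cv2 dependency),
    # scale to [0, 1], and reorder HWC uint8 -> NCHW float32 with batch dim 1.
    h, w, _ = image_hwc_uint8.shape
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    resized = image_hwc_uint8[ys][:, xs]
    x = resized.astype(np.float32) / 255.0
    return np.transpose(x, (2, 0, 1))[None, ...]

def infer(image, server_url="192.168.0.10:8000", model="yolov9_onnx"):
    # Build an HTTP inference request and return the raw output tensor.
    import tritonclient.http as httpclient
    client = httpclient.InferenceServerClient(url=server_url)
    batch = preprocess(image)
    inp = httpclient.InferInput("images", list(batch.shape), "FP32")
    inp.set_data_from_numpy(batch)
    out = httpclient.InferRequestedOutput("output0")
    result = client.infer(model_name=model, inputs=[inp], outputs=[out])
    return result.as_numpy("output0")
```

The gRPC client (tritonclient.grpc) follows the same shape; only the client class and port differ.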
After cloning the repositories, go to the YOLOv9 page and download the weight file; in this tutorial we will use the YOLOv9-E weights. After the weights are downloaded, move them to the YOLOv9 directory. To convert the .pt file to an ONNX file, execute the export.py script. Here we are specifying the weights, the dynamic-axes option, the format of the file we are going to export, the GPU ID, and the opset version. Dynamic axes means that dimensions such as the batch size or sequence length can be changed dynamically at runtime. The opset parameter is important for ONNX conversion because an ONNX file carries internal versions; for the library that parses ONNX files to succeed in parsing, those internal versions must not be greater than the maximum values it supports.

After the file conversion, rename the ONNX file to model.onnx. Now we have to move the model.onnx file to the model repository that will be used by Triton Inference Server. To avoid confusion, we will change all names that include YOLOv8 to YOLOv9. Move to the YOLOv9 ONNX folder, create a folder with the name "1", and paste the model.onnx file into it. The config.pbtxt file provides required and optional information about the model: a minimal model configuration must specify the platform, the max_batch_size property, and the input and output tensors of the model. Each model input and output must specify a name, data type, and shape, and the name specified for an input or output tensor must match the name expected by the model. Move to the YOLOv9 ensemble folder; here we should also create a folder with the name "1" and leave it empty.

As described on the GitHub page, we should install the Triton client package. The Triton client is a Python client library and a utility for communicating with Triton Inference Server. Now we are going to build the Docker container; we just have to execute these two commands, and the build process will take about 30 minutes. Now let's run Triton Inference Server. If you get this error, it means that the NVIDIA Container Toolkit has not been installed. The installation method is described on this page and involves three steps: first, configure the production repository; then update the package list from the repository; finally, install the NVIDIA Container Toolkit package. After installation, restart Docker. We have successfully started Triton Inference Server.

Now we can execute inference using Triton. Move the YOLOv9 Triton folder to your client, which in our case is the Raspberry Pi. We have to modify the main.py script: change the model name to the YOLOv9 Triton model, specify the image we will do inference on, and specify the IP address of the machine on which we are running the inference server. Create the assets folder and move in whatever image file you want to conduct inference on. Now execute the main.py script; the inference results will be generated in the YOLOv9 Triton folder.

Now let's see the code for the surveillance task. Download the settings.json and surveillance.py scripts from the Google Drive and open the surveillance.py script. In the send_gmail function we are sending an email from the Raspberry Pi. In these lines we open the JSON file and read the Gmail address from which mail will be sent; we also read the application password that has been set on the Google page. The application password setting procedure is described on this page and consists of eight steps; note that two-step verification should be enabled to set an application password. In these lines we are creating a MIME multipart object; this module is used when we want to construct a message with varying content types. In this part we read an image generated using OpenCV and create a MIME image object to attach the image to the email. Here we create a connection to Gmail's SMTP server, then encrypt it using the starttls method and send the email. In the main function, if a person is detected, we set the email address, subject, and body and send the message. In this tutorial a free temporary email is used for testing; it is easy to set up.
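The email step just described (multipart message, image attachment, STARTTLS on port 587) can be sketched with Python's standard library like this. It is a minimal sketch, not the actual surveillance.py code: the function names build_alert and send_alert are my own, and you would load the real address and app password from settings.json.

```python
import smtplib
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_alert(sender, recipient, jpeg_bytes, subject="Person detected"):
    # Multipart message: a plain-text body plus the detection image attached.
    msg = MIMEMultipart()
    msg["From"], msg["To"], msg["Subject"] = sender, recipient, subject
    msg.attach(MIMEText("A person was detected by the camera.", "plain"))
    msg.attach(MIMEImage(jpeg_bytes, _subtype="jpeg", name="detection.jpg"))
    return msg

def send_alert(msg, password):
    # Port 587: start with a plain connection, then upgrade it with STARTTLS
    # so the credentials and message travel encrypted.
    with smtplib.SMTP("smtp.gmail.com", 587) as server:
        server.starttls()
        server.login(msg["From"], password)
        server.send_message(msg)
```

Using port 465 instead would mean an implicit-TLS connection via smtplib.SMTP_SSL, with no starttls call.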
Now we can use this email address for a test. Connect a USB camera to the Raspberry Pi and execute the surveillance.py script. We can see that we have received emails; open an email and the attached image. We have successfully received an image of the detected person.
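Putting the pieces together, the overall surveillance loop (grab a frame, run detection, email an alert when a person is found) can be sketched as below. This is a hedged outline, not the tutorial's actual script: grab_frame, detect, and send_alert are placeholder callbacks for the camera, the Triton inference call, and the email step, and the cooldown is my own addition to keep a lingering person from flooding the inbox. Class id 0 is "person" in the COCO label set that YOLOv9 is trained on.

```python
import time

PERSON_CLASS_ID = 0  # "person" in the COCO label set

def surveillance_loop(grab_frame, detect, send_alert,
                      conf_threshold=0.5, cooldown_s=60.0, max_frames=None):
    """Run detection on every frame; send at most one alert per cooldown."""
    last_alert = None
    alerts = frames = 0
    while max_frames is None or frames < max_frames:
        frame = grab_frame()          # e.g. a cv2.VideoCapture read
        hits = detect(frame)          # expected as [(class_id, score), ...]
        person = any(c == PERSON_CLASS_ID and s >= conf_threshold
                     for c, s in hits)
        now = time.monotonic()
        # Rate-limit so a person staying in view does not flood the inbox.
        if person and (last_alert is None or now - last_alert >= cooldown_s):
            send_alert(frame)
            last_alert = now
            alerts += 1
        frames += 1
    return alerts
```

Run with max_frames=None for continuous monitoring; the bounded mode is handy for testing.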
Info
Channel: robot mania
Views: 570
Keywords: robotics, python, opencv, nvidia
Id: U2WRg2Rlg1o
Length: 14min 18sec (858 seconds)
Published: Sun Apr 07 2024