Official YOLO v7 Pose Estimation | Windows & Linux

Video Statistics and Information

Captions
Hello everyone, and welcome back to my channel. In today's video we will run official YOLOv7 pose estimation. The tutorial will cover setting up the environment, and we will also make some modifications to the code to show FPS and remove the bounding boxes around detected persons. The updated code with all the modifications will be available for our Patreon supporters; the link is in the description. Let's get started.

First of all, we need to install the prerequisites and set up the YOLOv7 pose estimation environment. Let's start by installing Visual C++ Build Tools for Windows; if you do not install it, you will get this error during the installation. On Linux you do not need it. Head to the official Microsoft website, download the setup, and run the downloaded file. It takes a while before you can select components to install. Just select "Desktop development with C++" and click Install. During the installation the selected components are downloaded and installed. Once done, you can restart the computer if you like; I will proceed without restarting.

Next, head to the official YOLOv7 repository for pose estimation. It is a different branch than the YOLOv7 object detection one. Just download the repository as a zip file and extract it. I like to move the extracted files to the main folder; here we have all the files and folders.

Now open Anaconda Prompt and go to the extracted repository. Let's create a separate virtual environment with conda create -n yolov7_pose python=3.9, where yolov7_pose is the environment name. Hit Enter, type y, hit Enter once again, and wait for the process to finish. Once done, activate the environment with conda activate yolov7_pose; the base environment changes to yolov7_pose.

From the extracted repository, open the requirements.txt file and delete torch and torchvision if you would like to install the GPU build of PyTorch later on. Otherwise leave it as it is, but in that case you will be running inference on the CPU. I have a GPU, so I am going to delete them and save the file. Now run pip install -r requirements.txt and wait for all the installations to finish.

Next, go to the official PyTorch website, click on Previous PyTorch Versions, and scroll down to version 1.11. Copy the pip command for CUDA 11.3 and paste it into Anaconda Prompt. Hit Enter and it will download and install PyTorch with GPU support. We can verify it by running python, then import torch, then torch.cuda.is_available(). If it prints True, PyTorch is able to detect CUDA. We are all set.

Now we head to the official YOLOv7 repository, click on Releases, and expand Assets. Find the weights file for pose estimation, yolov7-w6-pose.pt; just download it and save it in our repository folder. While the file is downloading, let's copy one image and one video into the repository so we can run pose estimation on them later. Here I have ac2.mp4 and h0.jpg.

Once the file is downloaded, run python detect.py --weights yolov7-w6-pose.pt --kpt-label --hide-labels --hide-conf --source h0.jpg. Hit Enter, and the result is stored at this location. Let's see: here we have the results. I do not like this bounding box around the person, so we will take care of that in a moment. If the lines are too thin, you can add another parameter to the command, --line-thickness; let's do 8, and now if we run it we can see the lines are a little thicker.

Now let's run pose estimation on videos. We'll use exactly the same command; the only difference is that the source will now be ac2.mp4. Let's run it, and the result is stored at this location. Here we can see the result. It's game footage, so the model is struggling a little, but on real videos it gives better performance, as we can see in this other video.

Okay, now let's add FPS to the video output. Open detect.py and scroll down to line 66; just before this for loop, add a variable called start_time and initialize it to zero. Then scroll down to line 135, right before this view_img statement, and calculate the FPS, but only if the dataset mode is not image, which means we apply this to videos only. Define current_time as time.time(), then fps = 1 / (current_time - start_time), and then set start_time equal to current_time. Now that we have the FPS, we can draw it on the frame being processed, so call cv2.putText on im0 with "FPS: " plus str(int(fps)), then the x and y offsets where the FPS will be shown. The font is cv2.FONT_HERSHEY_PLAIN with size 2, the color is green, that is (0, 255, 0), and finally the thickness is also 2. That's it: if you now run inference on videos, it will show the FPS.
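The FPS overlay described above boils down to a handful of lines in detect.py. Below is a minimal sketch of that edit: dataset and im0 are the existing variable names inside detect.py's inference loop, start_time is the new variable added before the loop, and the (20, 40) text position and the "FPS: " prefix are illustrative choices rather than values given in the video.

    import time
    import cv2

    start_time = 0  # added once, just before the frame loop (around line 66 in the video)

    # ... inside the per-frame loop, right before the view_img block (around line 135) ...
    if dataset.mode != 'image':                    # overlay FPS for videos and streams only
        current_time = time.time()
        fps = 1 / (current_time - start_time)      # instantaneous frames per second
        start_time = current_time
        cv2.putText(im0, "FPS: " + str(int(fps)), (20, 40),
                    cv2.FONT_HERSHEY_PLAIN, 2, (0, 255, 0), 2)

Because start_time begins at zero, the very first frame reports a meaningless FPS value; every frame after that shows the true per-frame rate.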
Let's use a line thickness of 4 this time. There are many other flags to choose from; you can see the descriptions of all the flags at the end of detect.py. Let's add --nosave and --view-img: now we can watch the inference in real time and it will not be saved to the hard drive. And here are the results, with the FPS shown at the top.

But what about these bounding boxes around the persons? Let's add another flag at the end of the argument list: just copy this one, paste it here, and change it to --nobbox. If the flag is set, no bounding box will be drawn around the person. Now scroll up to where the plot_one_box function is called, line 121 in my code, and add a parameter nobbox=opt.nobbox. This function is imported from utils/plots.py, so open that file and scroll down to the plot_one_box function. This is the line that draws the bounding box around the person. Let's add an argument, nobbox=True, to the function definition, and only if this argument is not set do we draw the rectangular bounding box. That's it; save the file. (A sketch of these edits follows at the end of the captions.)

Now we can add the --nobbox flag at the end of the command. Let me change the video to sc2.mp4, and there we have the result. If we open it, we can see that it looks much cleaner without any bounding box, and the FPS is also shown at the top.

I will be making an advanced YOLOv7 tutorial in the future, covering things like extracting and saving a detected object, detecting only a particular class, blurring a particular class, or tracking a particular class. With that, I think I am done. If you have learned something of value today, leave a like and subscribe to the channel, and consider supporting the channel on Patreon. I will see you next time.
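For reference, the bounding-box toggle walked through in the captions comes down to three small edits. This is a hedged sketch rather than a drop-in patch: the flag spelling --nobbox follows the narration, and the plot_one_box shown here is trimmed to the relevant lines (the real function in utils/plots.py takes additional label and keypoint arguments that stay unchanged).

    # 1) detect.py: register the new flag next to the existing argparse flags
    parser.add_argument('--nobbox', action='store_true',
                        help='do not draw bounding boxes around detected persons')

    # 2) detect.py: where plot_one_box is called (line 121 in the video), keep the
    #    existing arguments and append the new keyword:
    #        plot_one_box(..., nobbox=opt.nobbox)

    # 3) utils/plots.py: accept the argument and guard the rectangle call
    #    (cv2 is already imported at the top of plots.py)
    def plot_one_box(x, im, color=None, label=None, line_thickness=3, nobbox=True):
        color = color or (0, 255, 0)  # placeholder fallback colour for this sketch
        tl = line_thickness or round(0.002 * (im.shape[0] + im.shape[1]) / 2) + 1  # line width
        c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3]))                    # box corners
        if not nobbox:  # draw the rectangle only when the flag is not set
            cv2.rectangle(im, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA)
        # label and keypoint drawing continue below, unchanged

Note that with the default of nobbox=True from the video, any caller that does not pass the argument will skip the rectangle; detect.py always passes opt.nobbox explicitly, so the pose script behaves exactly as described.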
Info
Channel: TheCodingBug
Views: 18,972
Keywords: yolo v7, yolov7, yolo, YOLOv7, Official YOLOv7, object detection classifier, yolov5, yolov7 tutorial, install yolov7, train yolov7, yolo v7 tutorial, yolo v7 object detection, yolo v7 windows, yolo v7 linux, yolov7 windows, yolov7 linux, yolov7 python, yolo v7 python, yolo7, yolo v7 official, official yolov7, official yolo v7, pose estimation, yolo v7 pose estimation, yolov7 pose estimation, official yolo v7 pose estimation, keypoints detection, yolo v7 keypoints detection
Id: z1UN7TbcRgM
Length: 8min 38sec (518 seconds)
Published: Fri Aug 26 2022