YOLOv8 in python environment for object detection | VSCode | OpenCV implementation of YOLO

Video Statistics and Information

Captions
Hello everyone, welcome to this video. It's all about the all-new Ultralytics YOLOv8 model. This model is absolutely awesome: it's not just accurate, not just precise, it's really well optimized. And the new Python environment integration that has been added to the YOLO models is really great. It lets anyone who wants the Python environment integration get it done easily, and that saves a lot of stress, because with the previous models, integrating into a Python environment required a whole lot of steps. It was possible, but you had to go through a long process. With this new release you have the easy integration we see down here: you can have it working straight away in a Python environment after a few test runs.

In this video we're going to look at how to set it up in a Python environment and then get your code running with OpenCV, the OpenCV way, so you have control over how your outputs come out. You choose how to present your results instead of going the traditional route of creating the runs folder and putting the detect files in there. Sometimes you want to see what is happening per detection in a video you have saved or in your camera feed, and that's what we're looking at in this video.

This is the official GitHub page, and as always they have a Colab notebook that shows how to run it. For a quick run you just choose the runtime, test, and you can have this running straight away without anything else being done: run the cells and get the output, the same things you see here if you keep running each cell. Like I said, our focus in this video is the Python environment and how to get it fully set up, so without wasting much time let's hop on to that. And we thank Ultralytics and the whole team for making this possible; it's a really nice new update.

Okay, so first we open our terminal. It can be any OS you want: Windows, Mac, or Linux. I'm working on a Mac right now, and I have Miniconda, so I use conda for my virtual environment and other things, but you can go straight ahead with your base installation if you already have Python, and then run the pip install ultralytics line. Back on the website it shows the installation, which is very simple: pip install ultralytics. Straight away they also have some CLI commands, and then the Python environment commands that you can run as well; we are concerned with the Python environment and setting it up. Whatever OS you're on, you just need to figure out how to get your virtual environment going, if that's what you're going to use, or use the base environment. I have a virtual environment that I activate with conda, and then I do pip install ultralytics inside it. If I hit run, I've already installed it, so everything here is done. It installs all the dependencies it's going to need, from cv2 and numpy to the required PyTorch version; all of that gets installed in the environment.
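For reference, the terminal steps just described look roughly like this. This is a minimal sketch: the environment name and Python version are my own choices, not from the video.

```bash
# Optional: create and activate an isolated environment
# (conda shown here; the base interpreter or venv works just as well)
conda create -n yolov8 python=3.9
conda activate yolov8

# Install YOLOv8 together with its dependencies (OpenCV, NumPy, PyTorch, ...)
pip install ultralytics
```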
With that done, our Ultralytics environment is fully set up for our code to run. I'm using VS Code for this test, but you can use any IDE of your choice: PyCharm, Spyder in Anaconda, VS Code, or even the plain Python editor. You just write the code as we have it here and it should work exactly the same for you. I'll leave the link to the GitHub repo for this particular project in the comments section and in the video description as well, so anyone can access the code. We're not going to type it all out; this is just an overview explanation of how it's done.

The first thing is the system check. Once you have ultralytics installed, we just do import ultralytics and then run the checks. If we run this it gives you information about your computer system: I have a 10-core CPU with 16 GB of RAM (0.0 GB of GPU memory), and it shows the disk space and all that. For now I'm going to run this on the CPU, but in a later video we'll look at how to utilize the M1 chip for this code base. Once you have this running you know everything is fully set up; let us know in the comments section if you have any issues or anything like that, and we'll see how to help with that as well.

Now for a basic piece of code. This is straight from the website, but I added a few things so you can see what is happening and understand how we're going to integrate this with our OpenCV code. Here we import YOLO from the ultralytics module that we installed, and I import numpy. We never did pip install numpy, but installing ultralytics pulled in all the dependencies the model needs, so we're good to go. If we hover over YOLO we can see it expects a weights path as a string, and a version string, "v8". It can do without the "v8", but I suspect that's there for future integrations where you select the version you actually want to run, which would be awesome: with a future upgrade like a YOLOv9 or v10 you'd come to this same module and just select v10 or v9, whichever one you want to use.

Down here we create a new model from the YOLO class, then we run detection with the model. We could do it without assigning the result, but I wanted to see what is happening in the output, so I assign it to a variable and we can play around with it; we'll be using this more in our OpenCV file. We pass our source, which is the image we want to run the test on, then the confidence, which you can set this way here, and then save, which is True or False depending on whether you want the output saved. We're leaving it at False for now, just to see what happens.

The image we're referencing is under inference, then images, image zero, and image zero is this picture here; this is what we want to run the test on. Running this code, we print the detection output for the first image. The output is a tensor, so we want to see how the tensor comes out, and then convert it to a numpy array so we can use it with OpenCV and move it around as we like. Running this, we see all the detections it found and the array it creates.
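A minimal sketch of the system check and the single-image test just described, assuming a yolov8n.pt weights file and the video's inference/images/image0.jpg path; the confidence value is my own. Note that the result attribute names have shifted between ultralytics releases (the video's early build reads an outputs attribute); this sketch uses the layout of more recent versions:

```python
import ultralytics
from ultralytics import YOLO

# Environment report: Python/torch versions, CPU or GPU, RAM, disk space
ultralytics.checks()

# Pretrained YOLOv8-nano model; the weights download automatically
# if the file is not found locally.
model = YOLO("yolov8n.pt")

# Single-image test; save=False keeps the runs/detect folder from appearing.
# conf=0.45 is an assumed threshold, not the video's exact value.
detection = model.predict(source="inference/images/image0.jpg",
                          conf=0.45, save=False)

# Each row of the boxes tensor is: x1, y1, x2, y2, confidence, class_id
print(detection[0].boxes.data)                # raw torch tensor
print(detection[0].boxes.data.cpu().numpy())  # numpy array, easier with OpenCV
```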
This is the tensor that came out, and we can't work with the tensor all that easily. There are different ways to deal with it, but what I found most convenient was to convert it to a numpy array; then I can see it and I know how to work with it. So here, in that single frame, for the first object detected, the first four values are the bounding-box parameters, the next one is the confidence value, and the last one is the class ID. The class IDs come from the COCO dataset, and class 0, I know for sure, is a person; so it's detecting a person, and this is the bounding box to draw around that person. But we're not seeing the output image, and that is because we set save to False. So let's set save to True and see what happens. Now it brings in that runs folder that is normally included, so we have the runs folder, then detect, and this is the output: it draws the bounding box around the person, just perfect. So it's very easy to set up the detection part in your Python environment.

Now let's say we want to pass a video to it. If you put a video in here, it's going to do the same running process: it goes through it frame after frame, and if save is set to True it saves the final output after everything terminates, while constantly showing you an update of what is being processed down here in your output. So let's set our save value to True.

Next we want to integrate it with OpenCV, so we have our own output in our live detection. What we do is import numpy, import cv2, import YOLO from ultralytics, and then import random, because we're going to use random a lot to create some colors for drawing our bounding boxes. We want to draw them ourselves instead of relying on the output that naturally comes out, and just see on each frame what is happening; that's what we're doing.

What I'm including here is a text file of the COCO dataset classes; if we go into the project directory we have the coco test .txt file, with all the classes in there, one per line, like this. What we're doing down here is opening the file in read mode, creating a list of all the items after reading it, and then closing the file. With that whole list we can append the class name to the output. Then, based on the class list we have from here, we want to generate random BGR colors, which OpenCV will use to draw the boxes around the detections we get. We create an empty list, then for each element in the list, that is, the total length, we create a random value between 0 and 255 and assign it to R, do the same for green and blue, and append them to the list of colors.
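A sketch of that class-list and color setup; the utils/coco.txt path is an assumption on my part, so point it at wherever your class file actually lives:

```python
import random

# Read the COCO class names, one per line, into a list.
with open("utils/coco.txt", "r") as f:          # assumed path to the class file
    class_list = f.read().split("\n")

# One random BGR color per class, reused every time that class is detected.
detection_colors = []
for _ in class_list:
    r = random.randint(0, 255)
    g = random.randint(0, 255)
    b = random.randint(0, 255)
    detection_colors.append((b, g, r))           # OpenCV expects BGR order
```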
Then we load the weights we're going to use. The weights file sits down here, and we choose the v8n one; note that if the weights file is not found, that is, if you don't add any directory path to it, it automatically downloads from the web into your directory for you. Here is also where we add variables for resizing the frames; smaller frames help with RAM, and if you reduce the size of the frame before you pass it through the model, you get a much faster process.

Then we do the normal cv2 VideoCapture. You can see I have two of them down here: the first one uses the camera, and the second one uses a video from a directory. We included two videos down here; this one is footage from a scene in Africa, because I wanted a very busy background to see how the new YOLO performs with that.

After importing the video, we check whether it opened properly, and if not, we exit. In our while loop we read the file into a frame; if no frame exists, we break out of the loop. If it does, here's one thing: the model wants to take a single image at a time, and I didn't want to do a lot of processing. There were a lot of ways to work with this, but for each frame that comes in, we write it out to our images folder; that's why we have this images folder here, and the frame file in it is always replaced by the new frame each time. We write it, then pick it up again by the same name and pass it to our model to get our parameters. But this time we don't save the model's output, because we don't want that runs folder to appear; we want to take control of whatever output is coming and decide ourselves what should happen to it each time.

Next, just as we did on the other side, we convert the output tensor to a numpy array. Then we check that the length of the detection parameters is not zero; if it's not zero, there is a detection, so we loop through the detection parameters and take the parameters of each one. If we look at this array down here, it's a list whose first entry is the first object detected, so we loop through it and get all the parameters of each. After getting the parameters, we draw a rectangle on the frame from the first bounding-box point to the second point, passing the coordinates as int; even though they display as integers here, they come out as floats when you call them, so you need to cast them to int or you get errors. For the color, we select from our color list by position: parameter number five is the class ID, so it picks the color at that index, and any time that particular class is identified it gets that same color. Then we put a text label of the class name on it as well; that's why we created that class list, and the class name is also taken from position five, the class index. We append a string of the confidence value rounded to three decimal places, which sits at position four, add a percent sign, and place the text at the x of the first point and at y minus ten, based on the first and second points, so it moves up a little above the top line of the box. The label color we kept constant, but you could also change it to match the bounding-box color or show it however you want.
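Putting the loop together, here is a sketch along the lines just described, including the display and the Q-to-quit handling covered next, and reusing class_list and detection_colors from the sketch above. Two caveats: the video writes each frame to an image file and reloads it before prediction, whereas this sketch passes the frame to the model directly in memory, which recent ultralytics builds accept; and the video path, frame size, and confidence threshold are my assumptions:

```python
import cv2
from ultralytics import YOLO

model = YOLO("weights/yolov8n.pt")    # auto-downloads if the file is absent
frame_wid, frame_hyt = 640, 480       # assumed size; smaller frames run faster

cap = cv2.VideoCapture("inference/videos/afriq0.MP4")  # or 0 for the webcam
if not cap.isOpened():
    print("Cannot open video source")
    exit()

while True:
    ret, frame = cap.read()
    if not ret:                       # no frame left: end of the stream
        break
    frame = cv2.resize(frame, (frame_wid, frame_hyt))

    # Detect on this single frame; save=False keeps the runs/ folder away.
    detect_params = model.predict(source=[frame], conf=0.45, save=False)

    # Tensor -> numpy; each row is x1, y1, x2, y2, confidence, class_id.
    boxes = detect_params[0].boxes.data.cpu().numpy()

    if len(boxes) != 0:
        for x1, y1, x2, y2, conf, clsID in boxes:
            clsID = int(clsID)        # values arrive as floats; cast before use
            cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)),
                          detection_colors[clsID], 3)
            # Class name plus rounded confidence, just above the box's top line.
            label = class_list[clsID] + " " + str(round(float(conf), 3)) + "%"
            cv2.putText(frame, label, (int(x1), int(y1) - 10),
                        cv2.FONT_HERSHEY_COMPLEX, 1, (255, 255, 255), 2)

    cv2.imshow("ObjectDetection", frame)
    if cv2.waitKey(1) == ord("q"):    # hit Q to terminate the run
        break

cap.release()
cv2.destroyAllWindows()
```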
Yeah, so basically that is it. Then we do cv2 imshow, so on each frame it constantly shows what is happening, and if you hit Q it's going to terminate our run. Running this, you can see how it works. You choose what happens on each frame, and you can go hard with different OpenCV processes alongside this very YOLOv8 model. This is so wonderful; I'm really, really excited about this new feature, and I hope this video helps you a lot in knowing how to integrate it with your environment. This code is going to be available on the GitHub page, so you can have access to the code, copy it, duplicate it, use it, and also help build it up with more awesome projects.

The next video is going to cover how to work with the training part: how to train your model inside the Python environment, just like we're doing for the detection here, and then we're going to look at custom datasets as well. It's performing pretty well. There is a second video too: if we check the list of videos we included here, we have afriq0 and afriq1, so you can try it on the other clip, and you can also try it with your webcam by changing the source to zero. Down here is the second video, and the detection is really, really good, with higher accuracy, and it stays consistent for quite some time even with the noise and everything else. We'll leave the videos in there; try them out, and you can also make pull requests with wonderful projects that you work on. Okay, thank you, and we're hoping to see you in the next video, where we look at the training part.
Info
Channel: Koby_n_Code
Views: 33,252
Id: hg4oVgNq7Do
Length: 22min 55sec (1375 seconds)
Published: Thu Jan 12 2023