YOLOv8 Course - Real Time Object Detection Web Application using YOLOv8 and Flask - Webcam/IP Camera

Captions
Welcome to this YOLOv8 crash course. In this course we will start with the installation of Python and PyCharm, then run YOLOv8 on videos and on a live webcam feed. We will also see how to train YOLOv8 on a custom dataset. The major part of this course is integrating YOLOv8 with Flask: we will go through the details of integrating YOLOv8 with Flask and test it on videos and on a live webcam feed. We will also see how to create a complete front-end web app, and we will build a complete web app for a personal protective equipment (PPE) detection project. So let's get started.

In the first step I will go to python.org and download Python: type python.org and click the first link. Under Downloads you can see that Python 3.11.3 is the latest release, but I always recommend not downloading the latest release, since a brand-new version tends to have errors that later bug-fix releases address. Click "All releases" instead; it is always better to download one release below the latest. Since 3.11 is the latest release here, I will download from the 3.10 line: I select Python 3.10.9, go down the page, choose the Windows installer (64-bit), which is the recommended one, and run it. In the setup I click "Customize installation", tick documentation, IDLE, the Python test suite, everything, including the py launcher, then click Next and Install. I have already installed it, so I will not install it again.

In the next step I will download PyCharm Community Edition. To get it, search for "PyCharm free download" and click the first link. I will download the Community Edition; there is also a Professional version, which offers a 30-day free trial, but we don't need it, and the Community Edition is enough to fulfill our needs for this tutorial. Clicking the download button fetches the .exe file, and the download starts automatically. Since I have already installed PyCharm Community Edition, I will cancel this download. So that is how you download Python and PyCharm Community Edition.

Next, go to any directory and create a folder. I have gone to my local disk D and created a folder named "YOLOv8 Crash Course"; all my project files, images, videos, everything, will be placed in this folder. In the same way, go to any of your directories and create an empty folder with any name you like; I have created a folder named "YOLOv8 Crash Course".
Now I have opened PyCharm. Click File, then New Project, and navigate to the directory where we created the folder: it is inside D, where I can see the "YOLOv8 Crash Course" folder I just created for this tutorial, so I select it. Next I create a new virtual environment for the project; python.exe is the base interpreter. Click Create, and when PyCharm asks whether the project should be opened in a new window or in this one, choose this window. PyCharm now creates the new virtual environment for the "YOLOv8 Crash Course" directory; this can take a few minutes as it updates the interpreter paths.

With the project created, we have the main.py file. To manage packages, click File, then Settings, then under "Project: YOLOv8 Crash Course" select "Python Interpreter". You can see that three packages are already installed, and that the interpreter is the Python 3.10 we downloaded. To install a new package, click the plus button and type its name; for example, to install numpy, type "numpy", select it, and click "Install Package". In this way you can search for and install any package you need; for instance, to use the cv2 module you install the opencv-python package, so type "opencv-python", select it, and install it. But this process takes a lot of time, because you have to type every single package name into the search bar and install them one by one. After installing numpy you can see it appears in the list, so we now have four packages installed.

There is another way to install packages, so let me show you. Go back, click File, and create a file named requirements.txt. In this file you write all the packages you want to install: to install ultralytics I write "ultralytics==8.0.26" (pinning that version); to install numpy (although I have already installed it) you just write "numpy"; and if you want matplotlib you write its name as well. PyCharm now shows a warning that the package requirements ultralytics and matplotlib are not satisfied.
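For reference, the requirements.txt built up here looks like this; the ultralytics version is the one pinned in the video, and the other two are left unpinned:

    ultralytics==8.0.26
    numpy
    matplotlib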
Click "Install requirements" and it will install the two missing packages. Scrolling down, you can see it installing the ultralytics package, and after that it will install matplotlib (numpy is already installed). In the meantime, let's run main.py and see what output we get: click on main.py, click "Run main", open the terminal, and it prints "Hi PyCharm", so that's fine. While the ultralytics package finishes installing, let me explain: in this tutorial I will be using YOLOv8 for object detection, and to use YOLOv8 we just need to install the ultralytics package, which is what I am installing here.

YOLOv8, the latest release of YOLO, outperforms all the other YOLO versions in terms of accuracy and speed, and the highest mean average precision among YOLO models to date has been reported for YOLOv8. Let me show you the YOLOv8 GitHub repository: search for it and click the first link. Ultralytics released YOLOv8, and before that YOLOv5 and YOLOv3, but YOLOv8 is the only version of YOLO that ships as its own package: you just run "pip install ultralytics" and you can run YOLOv8. As the repository shows, "pip install ultralytics" installs YOLOv8 on your system, and then you just write "yolo predict" to run it. Previous versions of YOLO don't have their own package, but YOLOv8 does, so you can install it with pip; and if you want to make changes to the detection or training scripts, you can clone the repository as well. In our case we install the package by writing "ultralytics" and its version in the requirements.txt file, as you can see if I go back: in requirements.txt I am installing YOLOv8 by writing ultralytics and its version. That is how we install YOLOv8.

Now the packages have been installed successfully, ultralytics and matplotlib, as the note at the bottom confirms. And in the bottom right corner you can see Python 3.10, the version we downloaded from python.org.
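To summarize the package-based workflow from the repository page, a minimal command-line sketch (the source path here is just a placeholder):

    pip install ultralytics
    yolo predict model=yolov8n.pt source=path/to/image.jpg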
Now we will run YOLOv8. First I click New and create a directory named "running-yolov8", and inside it a file named yolo-test.py. Since we have installed the ultralytics package, I write "from ultralytics import YOLO", which imports YOLOv8 from the ultralytics package. In the next step I write "import cv2"; let's keep that. Then write model = YOLO(...). In YOLOv8 we have different pretrained weights available, so let me show you: scrolling down the repository page, you can see five different weight files. YOLOv8n (nano) is the smallest in size and the fastest, but the least accurate, while YOLOv8x is the most accurate but the slowest compared to the other YOLOv8 variants. In short, YOLOv8n is the least accurate but the fastest, and YOLOv8x is the most accurate but the least fast. Going back, I write "yolov8n.pt", the name of the weights file. The weights are downloaded automatically, so we don't need to download them manually and place them in this folder; you only need to write the name of the pretrained weights you want to use.

After this, write results = model(...) and pass the path of the input image. Let me add some images first: I have already downloaded some images from Google, so give me a minute while I add them to the project. If I open the images folder, you can see I have placed four images there; let's test on these images and see what results we get. Back in the code, I need the image path: I write "../", which means go one level up, out of the running-yolov8 folder, then "images/3.png", the name of the image I want to run detection on, and then "show=True".

Now click on yolo-test.py and run the script; this takes a couple of seconds. The output window appeared and immediately closed, because we haven't added any delay; that is why we can't see the output, it just appeared and went away. So let's add a delay: import cv2 and add cv2.waitKey(0), then run the script again; now we should see the output in front of us. Previously we couldn't see it because there was no delay, but now the result stays on screen: this is the input image I passed, and here are the detection results from YOLOv8. The model detects the motorcycles (there are several of them), and in the background of the image there are cars, which the model detects as well.
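Putting it together, yolo-test.py at this point looks roughly like this (the image path assumes the folder layout used in the video):

    from ultralytics import YOLO
    import cv2

    # Load the pretrained nano weights; the .pt file is downloaded
    # automatically the first time it is used.
    model = YOLO("yolov8n.pt")

    # Run detection on an image one level up and display the result.
    results = model("../images/3.png", show=True)

    # Without this delay the result window closes immediately.
    cv2.waitKey(0)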
The detection results are quite good, so let's test on another image and see what we get: I change the file name to 2.png and run yolo-test again; oh, I forgot to stop the previous run, so let me stop that script and rerun it. Let's see the results in this case. You can see we have detected the cars; there is one wrong detection (this one is labeled as a bus, but it is actually also a car), while the rest of the cars are detected correctly, and these are quite impressive results. Currently we are using YOLOv8 nano; as I already told you, YOLOv8x is the largest of the YOLOv8 models and more accurate, so if you use the YOLOv8x weights the detection results will be even better: YOLOv8n is the fastest but less accurate, while YOLOv8x is slower but more accurate. So this detection is also fine.

We could test on some other images as well, but I have already tested on 2.png and 3.png, so now we will run YOLOv8 on the webcam and see what results we get. I click New and create a directory named "running-yolov8-webcam", and inside it a new file named yolov8-webcam.py. First of all (closing the previous files), I write "from ultralytics import YOLO" to import YOLOv8, then "import cv2", then "import math"; actually, let me remove math for now, I will explain it as we go further.

Then write cap = cv2.VideoCapture(0). Since I want to run YOLOv8 on my webcam I pass 0, but if you have multiple webcams connected to your system, check which one you are using; it can be 0 or 1. Then I get the frame width with cap.get(3) and the frame height with cap.get(4). Because I want to save the output video with the detections, I write out = cv2.VideoWriter(...) with "output.avi" as the file name (after running detection on the webcam feed, the output video will be saved as output.avi), then cv2.VideoWriter_fourcc with 'M', 'J', 'P', 'G' as the codec, then the frame rate, and then the frame width and frame height. So the output video writer is defined using cv2.VideoWriter. Then press Enter and write model = YOLO(...): if I look at the YOLO-Weights folder, the yolov8n.pt weights are placed there, so I write a relative path saying
go out of this running-yolov8-webcam folder and into the YOLO-Weights folder: "../YOLO-Weights/yolov8n.pt". So I am passing the path of the pretrained weights here. Then press Enter and write "while True:" and "success, img = cap.read()", so we read the feed frame by frame.

Let's first test whether our webcam is working fine or there is an issue: write cv2.imshow("Image", img) to show the frame, then cv2.waitKey(1); or, a bit more elaborately, "cv2.waitKey(1) & 0xFF == ord('1')", so the loop stops when I press 1. That's fine, so let me run it and see whether I can access the webcam and whether the output video gets saved. Click "Run yolov8-webcam"; it might take a few seconds for the window to appear, so let's wait. There it is: the webcam feed is up and you can see my image on the screen, so the webcam is working fine. Let's check whether the output video has been saved into the directory: going back to the folder, the output file has been created. Let me stop the process and open the file to see whether there is any issue: the output video plays, so it has been saved correctly. When we run the webcam, the video is saved; that's perfectly fine.

Back to the code. Now, using the YOLOv8 model that I loaded and stored in the variable "model", I will run detections on the live webcam feed, so let me add that code and then explain it. Here I have written the complete code. You can see that we run the detections with the YOLOv8 model frame by frame, and we pass stream=True: stream=True uses a generator, which is more efficient than not writing stream=True. So we do detection frame by frame, and the results are saved in the variable "results". Once we have the results, we can loop over the individual bounding boxes and see how the detections perform: for each result we loop through each of its bounding boxes. For every bounding box we get the x1, y1 coordinate and the x2, y2 coordinate.
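As a minimal sketch, the webcam script so far looks like this (the weights path follows the video; the frame rate of 10 passed to the writer is an assumption, and pressing 1 stops the loop):

    from ultralytics import YOLO
    import cv2

    cap = cv2.VideoCapture(0)          # 0 = default webcam
    frame_width = int(cap.get(3))      # property 3 = frame width
    frame_height = int(cap.get(4))     # property 4 = frame height

    # Save the output video as output.avi with the MJPG codec.
    out = cv2.VideoWriter("output.avi",
                          cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'),
                          10, (frame_width, frame_height))

    model = YOLO("../YOLO-Weights/yolov8n.pt")

    while True:
        success, img = cap.read()
        if not success:
            break
        # stream=True returns a generator, which is more efficient.
        results = model(img, stream=True)
        for r in results:
            for box in r.boxes:
                # Coordinates come back as tensors; cast them to ints
                # (this conversion is discussed just below).
                x1, y1, x2, y2 = (int(v) for v in box.xyxy[0])
                print(x1, y1, x2, y2)
        out.write(img)
        cv2.imshow("Image", img)
        if cv2.waitKey(1) & 0xFF == ord('1'):   # press 1 to stop
            break

    cap.release()
    out.release()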
Let me show you what these coordinates mean; I'll open Paint (just give me a minute). Say we have a bounding box: each box has four coordinate values, x1, y1 and x2, y2. x1, y1 is the top-left corner coordinate of the box, and x2, y2 is the bottom-right corner coordinate. Going back to the code, you can see that for each bounding box we read out x1, y1 and x2, y2; so far so good.

Next, you can see that we convert these coordinate values to integers. Why do we need to? Because the output we get here is in the form of tensors, and for further processing we need to convert it into integers. Let me show you the kind of output we get before and after the conversion: I run the script (the webcam takes a few seconds to start), and you can see the output printed in the form of tensors. To process the detections further and create bounding boxes around the detected objects, we need to convert this output from tensors into integers, so I comment the current print out and write x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2), converting each coordinate from a tensor into an integer. Run it again and watch the output: now the values are printed as plain integers. Let me pause it here: previously every coordinate was wrapped in a tensor, and now we have bare integer values; this is the x1 coordinate value, this is y1, this is x2, and this is y2.

Now we need to draw a rectangle, that is, a bounding box, around each detected object. We have the coordinate values for every detected object, so we just need to draw a rectangle around each one, and for that we use cv2.rectangle. Let me open the documentation and show you its parameters.
In cv2.rectangle we pass the image (the current frame), then the starting point, then the ending point, then the color of the rectangle, and then the thickness. In the documentation you can see exactly that: this is the starting point, this is the ending point, and here we define the color and the thickness of the bounding box. Back in the code: this is the current frame, this is our starting point, this is our ending point, this is the color of the bounding box (a shade of pink), and this is the thickness.

Let me comment out the rest for now: I want to display only the bounding boxes first, and then we will discuss the remaining parts; I am not interested in displaying anything else at the moment. I'll check a few things in case there is an issue, comment this part out as well, and run it to see whether we are able to draw bounding boxes around the detected objects; that is our only goal right now. And there it is: a bounding box is drawn around me, since I am a person, which is exactly what we were expecting: a bounding box drawn around each detected object.
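The drawing call itself is a single line; the pinkish BGR color and the thickness of 3 follow what is shown on screen in the video:

    # cv2.rectangle(image, top-left, bottom-right, color (B,G,R), thickness)
    cv2.rectangle(img, (x1, y1), (x2, y2), (255, 0, 255), 3)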
Now we also want the label and the confidence score: the label meaning that, since I am a person, the text "person" should be shown above the bounding box, and the confidence score showing how confident our YOLOv8 model is that I am a person. As I already told you, we are using the pretrained YOLOv8 model, which is trained on the COCO dataset, and the COCO dataset consists of 80 different classes. I have written the names of all those classes here: these are all the classes we have in the COCO dataset, around 80 of them.

To get the confidence score I use box.conf. The confidence value also arrives in the form of a tensor, so I convert it using math.ceil; but first let me show you the raw value: write print(box.conf[0]), comment out the drawing for now (I only want to show you the confidence value), and click "Run yolov8-webcam". After a few seconds the values appear; let me stop it: you can see that the confidence value is wrapped in a tensor, so we need to convert it, which is what I am doing here, turning the confidence score into a plain rounded number. Then, with box.cls[0], we get the class ID: cls gives us the class ID, and the class names are written above, so if the class ID is 0 it is a person, if it is 1 it is a bicycle, and if it is 2 it is a car. I take the class ID, look up the class name, and put the class name and confidence value together into the label. Then, using the cv2.getTextSize function, I measure the size of this label so I can draw a filled rectangle sized to fit it, and I draw that rectangle above the bounding box so I can put the text on it.

I am also saving the detections to the output file output.avi; the detections will be saved in that file while I show them frame by frame live, and if I press 1 the process stops and everything is released. So let's run this, see what results we get, and check whether the output detection video is saved. Run yolov8-webcam; this might take a few seconds. Now you can see it detects me as a person, with the confidence score shown; behind me it also detects the sofa, with its label. A bounding box is drawn around me with the label "person", and the sofa in the background is boxed and labeled too. The detections look very good; these are the results we expect. Let me stop the video and check whether the output video is saved: going to the running-yolov8-webcam folder, here is the output video, and if I play it, the detections are there. So we are able to run detections on the live webcam and our output video is saved as well; that's exactly what I was expecting, and it's pretty impressive.
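Collected into one helper, the per-box annotation logic described above looks roughly like this; the class list is shortened here (the video writes out all 80 COCO names), and the exact font settings are assumptions:

    import math
    import cv2

    # First few COCO class names; the full list has 80 entries.
    classNames = ["person", "bicycle", "car", "motorbike", "aeroplane",
                  "bus", "train", "truck", "boat", "traffic light"]

    def draw_box(img, box):
        """Draw one detection: bounding box, class label, confidence."""
        x1, y1, x2, y2 = (int(v) for v in box.xyxy[0])
        cv2.rectangle(img, (x1, y1), (x2, y2), (255, 0, 255), 3)

        # Confidence arrives as a tensor; round to two decimals.
        conf = math.ceil(float(box.conf[0]) * 100) / 100
        cls = int(box.cls[0])                 # class ID into the name list
        name = classNames[cls] if cls < len(classNames) else str(cls)
        label = f"{name} {conf}"

        # Measure the rendered label so the filled background rectangle
        # fits it exactly, then put the text on top of it.
        t_size = cv2.getTextSize(label, 0, fontScale=1, thickness=2)[0]
        c2 = x1 + t_size[0], y1 - t_size[1] - 3
        cv2.rectangle(img, (x1, y1), c2, (255, 0, 255), -1, cv2.LINE_AA)
        cv2.putText(img, label, (x1, y1 - 2), 0, 1,
                    (255, 255, 255), 2, cv2.LINE_AA)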
In the next part we will run YOLOv8 on videos and see what results we get. First I click New and create a new directory named "running-yolov8-videos", and in it a file named yolov8-video.py. I copy the code from the yolov8-webcam.py file and paste it into yolov8-video.py. Before explaining the change I will make, let me show you a sample video: I have created a folder named "videos", and if I open it you can see a video, bikes.mp4. I will perform detection on this video and see whether my model is able to detect the persons and the bicycles in it. What I expect is that it should, because I am using the pretrained YOLOv8 weights, and the pretrained model is trained on the COCO dataset, which can detect 80 classes, among them the bicycle and the person class.

Now, in cv2.VideoCapture, instead of 0 I pass the video path. I write "../", meaning go out of this running-yolov8-videos folder, then into the videos folder, and there I pick bikes.mp4, so the path is "../videos/bikes.mp4". That should work fine: I am using bikes.mp4 and pointing the capture at that directory. Let's run it and see what output we get; I run yolov8-video, and if there is any error we will definitely fix it. And there we go: we are able to detect the bicycles as well as the persons, with the confidence scores alongside. The detection results are quite impressive: the persons are detected, the bicycles are detected and assigned the label "bicycle", the persons get the label "person", and the confidence scores are there as well. We even detect the traffic light and draw a bounding box around it, and the results look very good; in short, this is exactly what I was expecting. Let me stop the video and check whether the output detection video was saved into the folder: in running-yolov8-videos there is the output.avi file containing the output video with the detections. You can see the bounding box around the bicycle and around the person, with the confidence scores and the labels assigned, so the results are pretty impressive.

Until now we have run detection on images, on videos, and on the live webcam feed, so we have covered half of the course. From the start we have learned how to install Python and PyCharm Community Edition, how to run YOLOv8 on images, how to run it on videos, and how to run it on the webcam. In the next part we will integrate YOLOv8 with Flask, so see you in the next part.

Welcome to the part on integrating YOLOv8 with Flask. This is all code I have already written and explained previously, with one change: I have created a function, video_detection, and moved all of that code inside it. In the end, the output of this function is the image (the current frame) with bounding boxes drawn around the detected objects, together with the labels and the confidence scores.
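A sketch of that refactoring, assuming the draw_box helper from the earlier sketch and the same weights path:

    from ultralytics import YOLO
    import cv2

    def video_detection(path_x):
        """Yield each frame of the source at path_x with YOLOv8
        detections (boxes, labels, confidence scores) drawn on it."""
        cap = cv2.VideoCapture(path_x)
        model = YOLO("../YOLO-Weights/yolov8n.pt")
        while True:
            success, img = cap.read()
            if not success:
                break
            for r in model(img, stream=True):
                for box in r.boxes:
                    draw_box(img, box)   # helper from the earlier sketch
            yield img
        cap.release()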
I have also created a file named flaskapp.py, so let's start from the beginning. As we are integrating YOLOv8 with Flask, the first step is to make sure you have Flask installed on your system; if you have not, install it with "pip install flask", since we are creating a web app using Flask and integrating YOLOv8 with it. I have already installed Flask, so it reports that the requirement is already satisfied, but if you haven't installed it yet, it will take some time.

Then we import from flask: Flask and render_template. We need render_template to render our HTML files, but we are not using it here yet, so you can remove it for now, and likewise session, request, jsonify and Response where unused. You can remove session because we are not storing the video file yet (we will do that later), and jsonify we will use later, when we want to convert outputs such as the frame rate or the total number of detections into JSON format to display on our HTML page. Going on, I import cv2, since the cv2 library is required to run the YOLOv8 model, and from the yolov8-video file I import the video_detection function: as you can see, that file contains the video_detection function, and I am importing it here. Then I initialize Flask, and I define a secret key, which I have set to a name; you can change it to anything.

Next we have the generate_frames function. generate_frames takes the path of an input video file and gives the output with bounding boxes around the detected objects: in the parameter path_x we pass the input video file path, and it gives us the output with bounding boxes, labels and confidence scores around each detected object. Inside it I call the video_detection function, passing it the path of the input video file; video_detection gives us, in its output, the bounding boxes around the detected objects with the labels and the confidence scores. Since we are doing detection frame by frame, we encode each detection: basically, any Flask application requires the outgoing image to be converted into bytes, so we encode the current frame of the video.
Here you can see that we convert the encoded image, the current frame, into bytes: using .tobytes() I convert the encoded frame into bytes. Then we loop over the individual frames and display them as a video. Basically, when we run object detection on a video, the complete video is divided into multiple frames and we do the detection on each frame one by one; after detecting on each frame, we want to display our output as a video again. When we want each individual frame to be replaced by the subsequent frame, like in a video, a specific content type (MIME type) is used, and that is what we use here, because we want each individual frame with detections to be replaced by the subsequent one. You can also see the yield keyword: using yield in the video detection output, generate_frames produces the individual frames with bounding boxes around the detected objects, along with the labels and the confidence scores.

So, to recap the generate_frames function: we pass it the path of our input video file; it calls the video_detection function, which gives us the output with bounding boxes around the detected objects; we encode each frame, because any Flask application requires the outgoing image to be converted into bytes, so we convert the encoded image into bytes; we loop over the individual frames, do the detection on each one by one, and then replace each individual frame with the subsequent frame using the content type (MIME type). That is all there is to it.
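A minimal sketch of the function as described, with the module name for the video_detection import assumed:

    from flask import Flask, Response
    import cv2
    from yolo_video import video_detection   # module name assumed

    app = Flask(__name__)

    def generate_frames(path_x=''):
        # video_detection yields annotated frames one by one.
        for frame in video_detection(path_x):
            # Flask streams raw bytes, so JPEG-encode each frame and
            # convert the encoded image into bytes.
            ref, buffer = cv2.imencode('.jpg', frame)
            # The multipart boundary lets the browser replace each frame
            # with the next one, so the stream plays like a video.
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' +
                   buffer.tobytes() + b'\r\n')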
Then I call this using the app.route decorator: I create a URL, /video. In the /video route I return a Response in which I call the generate_frames function, and in path_x I pass the path of my input video file. You can see that I pass the path here: I go out of the Flask tutorial folder by writing "../", then into the videos folder, where I have bikes.mp4. As I already told you, when we want the individual frames to be replaced by the subsequent frames, we use the MIME type (content type); so here I use the multipart MIME type to replace each individual frame with the subsequent one, and I pass the path of my input video file.

Now let's run the flaskapp.py file and see whether we have integrated Flask with YOLOv8 and what our results look like when we open the /video URL. Go to the terminal; we want to set the project folder as the current directory, so copy its absolute path, write "cd" followed by that path, then write "python flaskapp.py" and press Enter. Flask starts up; click on the printed URL. That is not quite our page yet: we have not created a root page, only /video, so add "/video" to the address and press Enter. Now you can see that we are running the detections on the input video I passed: we have the bounding boxes around the detected objects, the person has a box with the label "person" and the confidence score, the bicycle has a bounding box with the label "bicycle" and the confidence score, and the traffic light has a bounding box, a label and a confidence score as well. The results are impressive: we have integrated YOLOv8 with Flask, and when we call the /video URL we run detections on the input video passed in path_x. Let me show you that input video: if I go to my YOLOv8 Crash Course folder and into videos, this is the video I am passing as input, and detection is done on it frame by frame; we detect the bicycles, the persons and the traffic light. You can pass any video you want here and run the detections on it.
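Continuing the sketch, the route and startup code look roughly like this:

    @app.route('/video')
    def video():
        # multipart/x-mixed-replace tells the browser to keep replacing
        # the image with each newly streamed frame.
        return Response(generate_frames(path_x='../videos/bikes.mp4'),
                        mimetype='multipart/x-mixed-replace; boundary=frame')

    if __name__ == "__main__":
        app.run(debug=True)

With the app running via "python flaskapp.py", the stream is served at the Flask development server's default address with /video appended (by default http://127.0.0.1:5000/video).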
So far we have run detections on a video; in this part we will see how to run them on a live webcam feed, extending the Flask API so it also serves detections from the webcam. To run on the live webcam feed, we only need to change the video path in path_x to 0: if I comment the file path out and put 0 in path_x instead, we will be able to run on the live webcam feed. In the previous part we already integrated YOLOv8 with Flask, so replacing the video path with 0 is enough to switch to the webcam; run "python flaskapp.py" and it will work, although it might take a few seconds.

Actually, before running it, let me undo that change: instead of modifying the /video URL, let's create a new URL to run on the live webcam feed. I copy the route, name it /webcam, and replace the path with 0; so using the app.route decorator I have created a /webcam URL, and in path_x I have removed the video path and put 0, so this route runs on the live webcam feed. Run it now; when I call the /webcam URL I should get detections on the live webcam feed. Ah, what is this error? We need to rename the function: we cannot have two route functions with the same name. Rename it and run flaskapp.py again, and let's see whether we can run detections on the live webcam feed: copy the URL, open it in the browser, add "/webcam", and let's wait; I am waiting along with you. There it is: we are running detections on the live webcam feed. You can see the bounding box around me with the label "person" and the confidence score; it is also detecting the sofa; and let's see whether it can detect this tablet: it detects the tablet as a laptop, although it is a tablet, so that is close enough. It is working quite well on the live webcam feed, drawing bounding boxes around me and the other objects.

So we are able to run on the live webcam feed as well, and the results are quite impressive. If you want to run on the video again, just refresh the /video URL we already added and check whether the detections on the video still work.
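The webcam route from this part is the same response with the source swapped to 0; note the distinct function name, which is exactly the error hit above:

    @app.route('/webcam')
    def webcam():
        # path_x=0 makes video_detection open cv2.VideoCapture(0),
        # i.e. the live webcam, instead of a video file.
        return Response(generate_frames(path_x=0),
                        mimetype='multipart/x-mixed-replace; boundary=frame')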
Now, if I call the /video URL, you can see we run detections on the video: we have drawn the bounding box around the person with the label and the confidence score, and around the bicycle with the label "bicycle" and its confidence score; that's quite impressive. We have also detected the traffic light, with its bounding box and label; we are detecting the car with the label "car" and a confidence score; we detect the person, with a confidence score; and we detect the motorbike with the label "motorbike" and a confidence score as well. So we have seen how to integrate YOLOv8 with Flask and run the detections on a video and on a live webcam feed. In the next part we will see how we can create an HTML page, polish this further into a complete HTML web page, and do the styling using CSS. See you in the next part.

Until now we have seen how to integrate YOLOv8 with Flask; now we will create a complete web app. For the front-end design I will use HTML and CSS, and for the back end I will be using Flask. So let's get started: I have already written the complete code, and I will explain it step by step, but before we go to the code, let me show you what our web app will look like. If I go to templates, our web app consists of three different pages: the home page and two other pages. If I open the index project template, this is our root page, or home page; let me open it. This is what our web app's home page looks like: this is the header, here I have added some sample results from my different projects, here we have the content area, and here we have the footer; if you click on the footer link, it redirects to my YouTube channel. On the home page we have "Video" and "Live Webcam" tabs: if I click on "Video" it redirects me to the video page, where I can upload a video and run YOLOv8 detections on it, and if I click on "Live Webcam" it redirects me to the live webcam page, where my webcam turns on automatically and the detections run on the live webcam feed.

Let me show you the other two pages as well. When the user clicks on the "Video" tab on the home page, they are redirected to this page, where they upload a video, click Submit, and see the live detections on that input video on this screen. And if the user clicks on "Live Webcam" on the home page, they are redirected to the live webcam feed page (the webcam UI HTML page), where the webcam turns on automatically and you can see the live detections.
So that is how it all works: we have created three HTML pages. One is the home page; if the user wants to test the YOLOv8 model (or any custom model) on a video, they click on the video tab, get redirected to the video HTML page, upload a video, click Submit, and watch the detections on that input video on the screen; and if the user wants to run detections on the live webcam feed, they click on the live webcam tab and get redirected to the live webcam feed page, where (if you are on a laptop) the built-in webcam turns on and you can run detections on the live feed. So these are our three HTML pages: the home page, the live webcam feed page, and the video page where the user uploads a video and runs the detections on it. And here is the flaskapp.py file; let's focus on flaskapp.py first and discuss the HTML pages later.

At the top you can see that I import Flask; remember, if you have not installed Flask previously, you need to install it by writing "pip install flask". I have already installed Flask, so the requirement is already satisfied, but if you haven't installed it, it will take some time. Then you can see that we import FlaskForm: FlaskForm is basically required to receive input from the user, in our case when uploading a video file to our object detection model. If you go over to the video page, the user uploads a video there and then clicks Submit, and to allow the user to upload a video we use FlaskForm: it is required to receive input from the user in the form of a video file.

Then we import SubmitField, StringField and IntegerRangeField; I will discuss IntegerRangeField later, because we will be using it further on. And you can see "from wtforms.validators import InputRequired": we use validators to check that the user uploads a video file and that it is in the correct format. For example, when the user uploads a file here, we want to make sure it is an .mp4 or .avi file; it should be a video file, not an image, a PDF document, or an Excel
sheet. In short, the user should upload a video file in .mp4 or .avi format, and to make sure of that, we require the validators. To run the YOLOv8 model we import cv2, and from the yolov8-video file we import the video_detection function, which is defined there. Then we initialize Flask, which is a requirement of any Flask app, and define a secret key; you can name the secret key anything you like. Next I configure the static files folder: when the user uploads an input video file and clicks Submit so that detection can run on it, that input file is saved in the static files folder. So whenever the user uploads any video file, it will be stored in the static files folder.

As I told you, we use FlaskForm to receive the video input from the user; we don't need anything else to get the input video file. We have created a class, UploadFileForm, and we store the uploaded video file in the FileField named "file": when the user uploads an input video file, its path ends up in this "file" variable. You can also see that we attach the validators, to make sure that the user actually uploads an input video file, in the correct format, and does not skip the upload when prompted to do so. Then we have the submit button, so the user can submit the video file and we can run the detections on that input video file.

Next we again have the generate_frames function, the same as before: we call the video_detection function from the yolov8-video file, and video_detection gives us the output with bounding boxes around the detected objects, with labels and confidence scores. Then we encode the current frame: since Flask requires the outgoing image to be converted into bytes, we convert the image (the current frame) into bytes using .tobytes(); it is a requirement of Flask that the image or frame be converted into bytes, so the individual frames can be replaced by the subsequent frames and it looks like a video. And then we have generate_frames_web, the function we call when we
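A sketch of the form class as described, using Flask-WTF and WTForms:

    from flask_wtf import FlaskForm
    from wtforms import FileField, SubmitField
    from wtforms.validators import InputRequired

    class UploadFileForm(FlaskForm):
        # The uploaded video lands in this field; InputRequired makes
        # sure the user actually selected a file before submitting.
        file = FileField("File", validators=[InputRequired()])
        submit = SubmitField("Run")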
Then we have generate_frames_web, which is the same function for the webcam: we call it whenever we want to access the webcam and run the detections on the live feed. Next, using the @app.route decorator, we create the URLs. The first route is the home page: whenever '/' or '/home' is called, the user is redirected to the home page of our web app. Inside it we call session.clear(), and here is what that does: if you have already run detections on some input video, its path is sitting in the session storage, and clearing the session removes it, so that when you upload a new video or rerun the web app the detections happen on the new video and not on the previous one. Then, with @app.route again, we create the '/webcam' URL: whenever the user goes to it, the live webcam feed turns on and they can run detections on it. We call session.clear() there as well, so that if I have already run detections on the webcam feed and refresh the page, I get a fresh webcam session. Next I create the front page route, which serves the page where the client runs detections on an input video, the page I showed you earlier. In it we create an instance of the UploadFileForm class defined above. As I told you, we use FlaskForm to get the input video file from the user, so when the form is submitted the uploaded file is saved, its path ends up in the file variable, and we store that path in the session storage. We clear the session first for the same reason as before: the previous input video's path must not stick around, so detections run on the newly uploaded video rather than an old one. The whole point of the session storage here is that we save the video file path in it, then later read that path back and run the detections on that video.
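Sketched out, the routes look roughly like this; the route and template names follow what the video shows, the UploadFileForm comes from the earlier sketch, and generate_frames_web is the webcam twin of generate_frames:

    import os
    from flask import Flask, Response, render_template, session
    from werkzeug.utils import secure_filename

    app = Flask(__name__)
    app.config["SECRET_KEY"] = "any-secret-key"   # any string works
    app.config["UPLOAD_FOLDER"] = "static/files"

    @app.route("/", methods=["GET", "POST"])
    @app.route("/home", methods=["GET", "POST"])
    def home():
        session.clear()   # forget the previous input video path
        return render_template("indexproject.html")

    @app.route("/webcam", methods=["GET", "POST"])
    def webcam():
        session.clear()
        return render_template("ui.html")

    @app.route("/FrontPage", methods=["GET", "POST"])
    def front():
        form = UploadFileForm()   # defined in the earlier sketch
        if form.validate_on_submit():
            file = form.file.data
            # Save the upload into the static files folder and remember
            # its path in the session for the streaming route below
            path = os.path.join(app.config["UPLOAD_FOLDER"],
                                secure_filename(file.filename))
            file.save(path)
            session["video_path"] = path
        return render_template("videoprojectnew.html", form=form)

    @app.route("/video")
    def video():
        # Stream detections on the uploaded video; the HTML page embeds
        # this route inside an <img> tag
        return Response(generate_frames(path_x=session.get("video_path", "")),
                        mimetype="multipart/x-mixed-replace; boundary=frame")

    @app.route("/webapp")
    def webapp():
        # Same idea for the live webcam feed (OpenCV camera source 0)
        return Response(generate_frames_web(path_x=0),
                        mimetype="multipart/x-mixed-replace; boundary=frame")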
The '/webapp' URL is what streams the detections on the live webcam feed, and that is everything: this is our complete flaskapp.py file for this part of the tutorial. I hope the code is clear, so let me go over the HTML files as well. indexproject.html is our home page HTML file. At the top we have <!DOCTYPE html>, which means we are using HTML, the language is set to English, and the page title is set to "Object Detection"; if I open the home page you can see that title in the browser tab. Then comes the styling. The body rule sets the font family, margin, and padding of the page. The header is styled next: its background color is black, and if I show you the page you can see the black header; the text color is set to white, the header height is 120px, text-align pushes the text towards the right side (you can see our text sits on the right), and we've set some padding around it as well. Then we create a second header, and its background color is blue. Right now you see a background image over it, but behind that image the header color is the blue we set here. Its margin-top is set to -30px, and the background image is 1.png: if I go into the images folder you can see the 1.png file, and because I passed its path in the background-image property it appears as the page background, with background-size and the other properties set alongside it. After that I style the nav list. By default the list items would stack vertically, the first element, then the second below it, then the third; setting display: inline-block changes that.
With display: inline-block, the Home, Video, and Live Webcam links appear side by side on a single line, in block form, instead of one below another like items one, two, three. Next, border-radius: 10px is what gives the buttons their rounded-rectangle shape: if I remove it the corners go square, and if I set it again we get the rounded rectangles back. Padding and margin control the spacing between the blocks, and there is a hover rule, so whenever you hover over any of the buttons its background color turns red, which I've set right here. The remaining rules set the header text color to white, remove the text decoration, and set the width to 100%. All of this styling is done with CSS inside the <style> tag; it isn't HTML, it's pure CSS, so you can just read through it. Then come the three nav links themselves: Home, Video, and Live Webcam. Whenever the user clicks the Video button, the '/FrontPage' URL is called, and if you look at flaskapp.py you can see that this route renders videoprojectnew.html, which is the page where the user uploads a video, clicks Submit, and the detections happen on that video. And when the '/webcam' URL is called, the user is taken to ui.html, which is the page where we run detections on the live webcam feed. There was also a stray "sample results" line here that we don't need right now, so it can be removed. Going further down the home page, though, there is a real Sample Results section, where I display three sample result images: 1.png, 2.png, and 3.png.
You can see them on the page: the first, the second, and the third sample result image shown in the Sample Results section. Below that I've written an About-style block of text, and I pass the 1.png image again so it is displayed next to that text. Then there is a contact section, which I build here in HTML, and finally the footer: when the user clicks the link in it, they are redirected to my YouTube channel. That's everything on the home page, our indexprojectnew.html. In the same way I created videoprojectnew.html, the page where we run detections on a video, using the same CSS, and ui.html, the page where we run detections on the live webcam feed, again with all the styling defined as CSS inside the <style> tag. Now let's run flaskapp.py and test detections on a video and on the live webcam feed. Right-click the project folder and copy its path, open the terminal, cd into that path (cd sets the current directory), and run python flaskapp.py. Click the URL it prints and close the previously opened tabs; we are redirected to our home page. Clicking the Home link gave me an error, so let me check: the link pointed at indexproject.html directly, which is wrong; it needs to point at the route that renders the home page. So I set the href to '/home', save, stop the server with Ctrl+C (it took a moment to stop), and run flaskapp.py again; this can take a few seconds to come up. Click the URL, close the previous tabs, and let's check whether it works now or still errors; if there is an error we can just fix it, no worries. Clicking Home now redirects us to the home page at '/home', so it's working fine.
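For reference, flaskapp.py is started the usual Flask way; a minimal sketch of the entry point at the bottom of the file (whether the course file enables debug mode is an assumption):

    if __name__ == "__main__":
        app.run(debug=True)   # serves on http://127.0.0.1:5000 by default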
Now click Video. As I told you, we use FlaskForm to get the input from the user, so click Choose File, select the sample .mp4 video, and click Submit. And that's wonderful: we are able to run detections on the video. You can see the results: a bounding box around the person and the bicycle, with the person label and its confidence score, and over here the model is detecting the traffic light, again with the traffic light label and a confidence score. The detections on the video look very good; the YOLOv8 model detects the bicycle, assigns the label, draws the bounding box, and reports the confidence score. Now let's test on the live webcam feed: click Live Webcam and let's see whether we can run detections there. And yes, you can see we are able to run detections on the live webcam feed as well. It's detecting the cell phone, although this is actually a tablet, not a cell phone, so that one is a wrong detection, but overall the results are quite good. So we have built a complete Flask web app and run detections on a video as well as on the live webcam feed, with quite impressive results: we have the home page, the video page where we can run detections on any input video, and the live webcam feed page. That's all for this part; in the next part we will see how to train a YOLOv8 model on a custom dataset, run detections with that custom model, and build a complete web app around it. See you in the next part; bye, guys. In this video tutorial we will see how to use YOLOv8 for personal protective equipment (PPE) detection. The dataset I'll be using in this project is publicly available on Roboflow and consists of 3,235 images. Let me show you the dataset: we have seven different classes, currently labeled only with numeric ids, and we will rename them. For example, the class for the helmet will be renamed to Protective Helmet; the class where the person is wearing a face shield, and the model detects that shield, will be renamed to Shield; the class where the person is wearing a jacket becomes Jacket; the class where the model detects a mask becomes Dust Mask; and the class for boots becomes Protective Boots. In the same way we will rename all the remaining classes, so that when
the model detects a helmet, jacket, or boots, we see the class name in the detections rather than a numeric id like 0, 1, or 9: we want names, not numbers. Before we get to the implementation, let's look at the dataset health check. We have a total of 3,235 images, and on average each image contains around four to five bounding boxes, so in each image we are detecting four to five objects; that gives us about 14,341 annotations across the 3,235 images. The average image size is 416x416, but we will resize to 640x640, because the YOLOv8 model is trained on 640x640 images and it's always better to keep that size. The class balance chart shows that the dataset is not perfectly balanced: the last three classes in the chart are underrepresented. For the split, we have 2,300 images in the training set, 647 in the validation set, and 324 in the test set, roughly a 70/20/10 split: 70 percent for training, 20 percent for validation, and 10 percent for testing. To import the dataset into our Colab notebook, click Download and select the YOLOv5 PyTorch export format; Ultralytics, the authors of YOLOv8, also created YOLOv5, so the same format works for YOLOv8. Click "show download code", copy the snippet, and paste it into the Colab notebook to export the dataset there. Before running anything, make sure the runtime is set to GPU. Now the setup cells: we import os and create a HOME helper variable with it, so we can navigate to the different dataset files easily; glob is used to return file paths, for example to collect all the input image paths; and from IPython.display we import Image and display, the two pieces we use to show any input or predicted output image inside the Colab notebook, along with clear_output, which clears a cell's output in the notebook (we can skip running that for now, since we don't have any output yet). In the first step we check whether we have access to a GPU: run the cell and it shows whether we are on a GPU and how much GPU memory is in use. We also define our HOME (root) directory here, which is the project root you see in the file browser.
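A sketch of those setup cells, Colab-style (the commented line is a notebook shell magic):

    import os
    HOME = os.getcwd()   # helper variable for navigating the project files
    print(HOME)

    import glob          # e.g. glob.glob(f"{HOME}/images/*.jpg") for image paths
    from IPython.display import Image, display, clear_output

    # Confirm GPU access (Runtime -> Change runtime type -> GPU):
    # !nvidia-smi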
Now we install Ultralytics with pip. YOLOv8 can be installed in two ways: from source or via pip. YOLOv8 is the first YOLO iteration that ships its own official package; YOLOv7 and the earlier versions don't have one. So by installing the ultralytics package we install YOLO: pip install ultralytics gives you the latest YOLOv8 release. The other way to install YOLOv8 is to go to the Ultralytics YOLOv8 GitHub repo and clone it; the clone command is written there. You clone the repo in the cases where you need to make changes to the code: for example, if I wanted to add speed estimation to the prediction script, I would clone the repo and modify predict.py, or perhaps train.py or the validation script. In the current project we are not making any change to the prediction, training, or validation scripts, so we don't clone the repo; if I later need to add something to training, validation, or prediction, then I will clone it. So I follow the easy way, pip install ultralytics, which installs the latest YOLO version, YOLOv8; it may take a few seconds. Next we import ultralytics to check that our YOLOv8 install is working fine; if it weren't, we would dig into the issue, but it's working fine here. Now we import the PPE detection dataset from Roboflow. First we create a folder for it: one choice is to click New Folder in the file browser, but we create it with mkdir and name it "datasets" (you can change the name if you like). You can see we now have an empty datasets folder; we will download the dataset into it. We check the present working directory, then set the current directory to this datasets folder so the dataset downloads straight into it, paste the Roboflow snippet, and run it. Now the personal protective equipment dataset downloads from Roboflow into the Colab notebook. With around 3,235 images it takes some time, so please bear with me until it finishes: it goes 6 percent, 42, 55, 90, 99, and done. If the files don't show up in the file browser afterwards, just reload it and everything appears, so no need to worry in any case.
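Roughly, those cells look like this; the commented lines are notebook magics, and the Roboflow api_key, workspace, and project names are placeholders, since your real snippet comes from the dataset's "show download code" button:

    # !pip install ultralytics roboflow
    # !mkdir {HOME}/datasets
    # %cd {HOME}/datasets

    import ultralytics
    ultralytics.checks()   # confirms the install and reports the GPU

    from roboflow import Roboflow
    rf = Roboflow(api_key="YOUR_API_KEY")
    project = rf.workspace("your-workspace").project("your-ppe-project")
    dataset = project.version(1).download("yolov5")   # YOLOv5 PyTorch format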
Inside the datasets folder we now have the PPE detection dataset, which consists of train, test, and valid folders, plus the data.yaml file listing the seven different classes; count them, one through seven. First, one thing: as I told you, we will rename the classes to the names of the objects, and I have already done that, so let me upload the updated data.yaml file; just give me a minute. Here it is: in the updated data.yaml you can see a class name against each class instead of a number: Protective Helmet, Shield, Jacket, Dust Mask, Eyewear, Glove, and Protective Boots. The dataset is downloaded, and we just check its location. Now, since we only want to train, validate, and run inference, and we don't need to modify any code (no speed estimation, tracking, or other extra scripts), the YOLO command-line interface is the easiest way to do it, so we use the CLI for the training, validation, and testing of our model. If you are doing detection you select task=detect; for classification, task=classify; and for segmentation, task=segment. We are doing detection here, so we set task=detect and mode=train, because first we are training the model, and we use the YOLOv8 medium model. Before starting the training we need a few changes: rename the dataset folder, press Enter, then open the data.yaml file, copy the actual path of the train images folder and paste it into the train entry, do the same for the valid folder, and save with Ctrl+S. Now we train the model for 90 epochs, with the image size set to 640 and the data.yaml path passed in, and we start the training. First the pretrained model downloads, then training begins. It will take around two to three hours, so we'll stop the recording and be back when the training completes; see you then. Guys, the training of the model has now completed: we trained for 90 epochs with image size 640, it took around three hours, and we got the best weights file (best.pt) as well as the last, 90th-epoch weights (last.pt).
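For reference, the data.yaml edits and the training cell we ran look roughly like this (the dataset paths are placeholders for wherever your download landed):

    # data.yaml, after renaming the classes and fixing the paths
    train: /content/datasets/your-ppe-dataset/train/images
    val: /content/datasets/your-ppe-dataset/valid/images
    nc: 7
    names: ['Protective Helmet', 'Shield', 'Jacket', 'Dust Mask',
            'Eyewear', 'Glove', 'Protective Boots']

    # training cell in the notebook
    !yolo task=detect mode=train model=yolov8m.pt data={dataset.location}/data.yaml epochs=90 imgsz=640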
So we have our seven classes, Protective Helmet, Shield, Jacket, Dust Mask, Eyewear, Glove, and Protective Boots, and for each class we get the mean average precision at IoU 50 and the mean average precision as IoU varies from 50 to 95 percent. The mAP@50 values are very good across the classes: around 97.8 percent, 66.1 percent, 92.8 percent for the jacket class, and 96.8 percent for the dust mask, and the mAP@50-95 results are also very good. Our results are stored in runs/detect/train2: there is the weights folder, and there is the F1 curve. To check which other files and results that folder contains, run the listing cell. We have the confusion matrix, which shows how our model handles the different classes; the F1 curve, precision curve, and precision-recall curve, plus the recall curve; results.csv, which records the model's performance on each epoch; results.png, which plots the training and validation losses; and the model's predictions on the validation batches. Let's look at the confusion matrix first. Taking the jacket as an example: 92 percent of the time our model correctly detects that a person is wearing a jacket; about 1 percent of the time we get a bounding box but the jacket is classified as the wrong class; and 7 percent of the time the model detects nothing at all even though the person is wearing a jacket, so it misses the jacket entirely. That's the kind of information the confusion matrix gives us. Likewise, 96 percent of the time the model correctly detects that a person is wearing a helmet, while 4 percent of the time the helmet goes undetected. Then we have the graphs of the training and validation losses in results.png: the loss values are continuously decreasing, so if we trained for 200 to 250 epochs we could get even better results, and you can see the mean average precision curves, at IoU 50 and 50-95, are still increasing while recall also keeps improving; our results are quite good. Next are the model predictions on the validation batch; these images were not used for training, so it's always good to have a look. Here the model detects that a person is wearing a dust mask; here it detects gloves on both hands and protective boots as well. Where a person isn't wearing a glove the model correctly detects nothing, and where they are, it detects the glove, so the model is working quite well. Now let's validate our custom model: we take the best weights we got, best.pt, and, just as we used the command-line interface for training, we use it for validation. Previously we wrote mode=train; here we write mode=val, and since we are performing detection we set task=detect and pass the data.yaml file path, which contains the paths of our training, testing, and validation images.
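The validation cell, sketched (the train2 run-folder name follows this recording; on your run it may simply be train):

    !yolo task=detect mode=val model={HOME}/runs/detect/train2/weights/best.pt data={dataset.location}/data.yaml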
Running the validation on our custom model, we again get very good results in terms of mAP@50 and mAP@50-95; all the numbers look good. Next we run inference with the custom model. Inference means prediction: running the model on an image to get the label, whether that is a classification, a bounding box, or a segmentation. Here we test the model on the test set images. Inside the dataset we have the test/images folder, so I copy that path and paste it in as the source, and I add the path of my best weights file as the model, which you can see here. And now we are doing prediction: previously I wrote mode=train for training and mode=val for validation, while here it's mode=predict, and since we are doing object detection, task=detect. Run the cell; it has images to process, so it takes a few seconds, and then we'll see what results we get, so please wait until it runs completely. The model has now run on the test set: there are 324 images, and the results are saved in the runs/detect/predict folder, so copy that path and paste it in. Since there are 324 images in total, we won't display the results on all of them; instead we check only the results on the first five images, which I set here. To display output in the Colab notebook I've imported Image and display from IPython.display, so run the cell and let's see what we get; it takes a few seconds. And here you can see that our model correctly detects the protective boots, the jacket, the gloves, the dust mask, and the protective helmet; the results are wonderful. In this one the model detects the jacket and protective helmet; here it detects the protective helmets along with the gloves and everything; and here it also picks up the eyewear, the dust mask, and the protective helmet. The results of the model on the test images are quite impressive.
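Sketched out, the prediction cell and the five-image preview loop look something like this (run-folder names may differ on your run):

    !yolo task=detect mode=predict model={HOME}/runs/detect/train2/weights/best.pt source={dataset.location}/test/images

    import glob
    from IPython.display import Image, display
    # Preview only the first five annotated test images
    for image_path in glob.glob(f"{HOME}/runs/detect/predict/*.jpg")[:5]:
        display(Image(filename=image_path, width=600))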
Now, guys, let's test the model on some demo videos and see how it performs. First I download a demo video directly from my Google Drive; run the cell, the video is named demo.mp4, and it downloads in a few seconds. Then we test the model on it: I pass my best model weights, set the confidence threshold to 0.25, keep task=detect since we are performing object detection, and run prediction with the demo video's path as the source. Run the cell; it processes the video frame by frame, so it takes a while, and you can see the model detecting the protective helmets and jackets at around 27.2 milliseconds per frame as the progress ticks past 70, 80, 90 frames. On a GPU this is quite a bit faster than on a CPU; on a CPU both training and prediction take considerably longer, so it's always better to run the model on a GPU, and Google Colab offers a free one, which is what I'm using. Our results are saved in runs/detect/predict2 as the output demo video. The file is fairly large, so I don't think it will display inside the notebook; it's better to download it and see what results we actually get. The download takes some time, so I'll pause and be back when it finishes. Having downloaded the video, let me play the output: the model detects the protective jackets and helmets; for both people the helmet and the jacket are detected, and although the gloves are missed, the jackets and protective helmets come through clearly. Next let's test the model on another demo video; the file is actually named demo3.mp4. Run the cell: it takes a little while to execute, but you can already see the model detecting two protective helmets and one jacket, frame after frame. The output is saved in runs/detect/predict3 as demo3.mp4, which I download again; as the download completes I'll be back with the output. Playing it back: the model detects the protective helmets and the jacket here as well; the people are not wearing gloves, so no gloves are detected. Now let's try the third demo video. I'm not displaying these results inline because the files are large and wouldn't show well in the notebook (I've already tested this, so I won't waste time trying), so I download the video from my Drive straight into the Colab notebook and run the model on it. Again the model reports two protective helmets and one jacket as it goes; the video has 337 frames in total, and you can watch the progress: 177 frames processed, then 206, 220, 256, 273, 299, and done. Let me download this output demo video as well and show you the results.
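Each of these demo-video cells follows the same pattern; a sketch, with the weights path and video name as placeholders:

    !yolo task=detect mode=predict model={HOME}/runs/detect/train2/weights/best.pt conf=0.25 source={HOME}/demo.mp4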
The output is saved in runs/detect/predict4; let me download it, and I'll be back with the results. Guys, here are the results from that output demo video: the model is able to detect the protective helmets and the jackets. This person is not wearing a jacket, so the model detects no jacket on him, while both people wearing protective helmets are detected, and this person's jacket is detected too. Now let's test the model on the live webcam. I'm not wearing personal protective equipment, but I want to see whether the model can detect a mask if I wear one; on videos and images our results were very good, so let's test on the live webcam in the next part of the video. Thanks for watching. We have seen how to train, or fine-tune, the YOLOv8 model on the personal protective equipment dataset, and after fine-tuning we tested the trained model on multiple videos; the model showed very satisfactory performance and was able to detect the protective helmets, jackets, eyewear, and gloves. So now, continuing our discussion of how to integrate a YOLOv8 model with Flask, I will use the best weights file: I've downloaded the best weights from the PPE training, so here I have the best.pt file, and I rename it to ppe.pt, for personal protective equipment, and place it in the YOLO-Weights folder. Back in the code, most of it is what we have already discussed, so let me walk through the changes. First, when we previously created our Flask web app we used the pretrained YOLOv8 model, which was trained on the COCO dataset with its 80 different classes; now we have fine-tuned YOLOv8 on the custom PPE dataset with its seven classes, so I have listed those seven class names, in order, which you can see, and I pass the ppe.pt weights file, the renamed best weights, instead. Everything else is the same except one further change: previously every bounding box had a pink color, and the label rectangle was pink as well, but now the color depends on the class. If the class name is Dust Mask, one color is assigned to the bounding box and the label rectangle; if the class is Glove, another color is assigned; if the class is Protective Helmet, another; and any other class, for example Shield or Jacket, gets the default color I've set. I have also put a limit on the confidence score: the bounding box, and the rectangle above it where we put the label text, are only drawn around a detected object when the confidence score is above 0.5.
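A sketch of those changes together (the class names and their order follow the updated data.yaml; the exact BGR color values here are illustrative, not necessarily the course's):

    import cv2
    from ultralytics import YOLO

    model = YOLO("YOLO-Weights/ppe.pt")   # the renamed best.pt
    classNames = ["Protective Helmet", "Shield", "Jacket", "Dust Mask",
                  "Eyewear", "Glove", "Protective Boots"]

    def class_color(class_name):
        # One BGR color per class instead of pink for everything
        if class_name == "Dust Mask":
            return (0, 149, 255)
        elif class_name == "Glove":
            return (222, 82, 175)
        elif class_name == "Protective Helmet":
            return (0, 204, 255)
        else:                             # Shield, Jacket, Eyewear, Boots
            return (85, 45, 255)

    def draw_detection(img, box, conf, class_name):
        # Only draw when the confidence score is above 0.5
        if conf <= 0.5:
            return
        x1, y1, x2, y2 = box
        color = class_color(class_name)
        cv2.rectangle(img, (x1, y1), (x2, y2), color, 3)
        label = f"{class_name} {conf:.2f}"
        # Filled label rectangle above the bounding box, then the text
        t_size = cv2.getTextSize(label, 0, 1, 2)[0]
        cv2.rectangle(img, (x1, y1 - t_size[1] - 3), (x1 + t_size[0], y1), color, -1)
        cv2.putText(img, label, (x1, y1 - 2), 0, 1, (255, 255, 255), 2)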
The HTML files are the same three as before, and flaskapp.py is also the same; no change was made there. So let's run flaskapp.py and test the YOLOv8 model trained on the PPE dataset on videos as well as on the live webcam feed. I run python flaskapp.py; it may take a few seconds to start, so let's wait. Now I click the link, which redirects me to the home page, and close the other tab since we don't need two. Let's first test on some videos. I click Choose File, go into the YOLOv8 crash course folder, and in the videos section pick the first demo video; after uploading, click Submit. Now we'll see whether our model can detect the jacket, protective helmet, gloves, or shield; the detections are about to start. And there we go: the model detects the protective helmet, and our web app detects the jacket as well, which you can see here. The people are wearing gloves but those aren't detected; still, we do detect the jacket as well as the protective helmet. Let's test another demo video: upload it and click Submit; this takes a few seconds. Here you can see that this person is not wearing a jacket, so no jacket is detected on him, while this person is wearing one and we get a bounding box around his jacket. Both of these people are wearing protective helmets, and we detect both. They aren't wearing any gloves or shields, so none are detected; they are only wearing protective helmets and a jacket, and we detect those successfully, which is impressive. Now let's test the model on the live webcam feed: I go back, click Live Webcam, and my camera turns on. You can see the model successfully detects that I am wearing a dust mask; the detections are fine. Since I'm not wearing a jacket or a helmet, nothing else is detected; I'm only wearing a dust mask, and it is detected successfully. So we have tested our model on video as well as on the live webcam feed, and the results are very satisfying. That's all from this course; I hope you have learned
something from it. See you all in the next one; till then, bye bye.
Info
Channel: Muhammad Moin
Views: 14,376
Keywords: yolov8, objectdetection, flask, yolov7 flask dashboard, real time object detection, webcam, web application development, yolo, computer vision, deep learning, python, webapp
Id: xzN_aG917-8
Length: 126min 44sec (7604 seconds)
Published: Fri Apr 07 2023