Deep Drowsiness Detection using YOLO, PyTorch and Python

Captions
So a lot of you have been asking me to do something around YOLO and YOLO object detection, so what I ended up doing is testing this out for drowsiness detection. This is what it looks like. [Music] What's happening guys, my name is Nicholas Renotte, and in this video we're going to be using YOLO for object detection, and specifically we're going to be using it to try to detect when we're drowsy and when we're wide awake. Now the particular model that we're using, specifically the YOLO model that we're using, is a little controversial. It's not the official YOLO version, but that being said, it is a ridiculously useful production implementation of the YOLO algorithm. So we're going to be going through YOLO today. Let's take a deeper look at what we'll be going through. Alrighty, so in this video we're going to be going through four key things. Specifically, we're going to be using the Ultralytics YOLO. Now I may have got into a little bit of a fight on Reddit about whether or not this is the official YOLO implementation, but that being said, it is a really, really good production implementation which is built using PyTorch, so we're going to be leveraging that today. What we're then going to do is leverage the base model to perform some detections, and I believe it's trained on the COCO dataset, so you're going to have roughly 80 different classes that you're able to detect right off the bat. And then what we're going to do is fine-tune our drowsiness model, so we're going to be able to detect whether or not we're drowsy or whether or not we're wide awake. So then what we're going to do is actually test this out in real time, so we'll be able to leverage our webcam to perform real-time detections to determine whether or not we're drowsy. So you could take this away and potentially implement it in a car or vehicle to make sure that you're not dozing off, potentially trigger an alarm or something off the back of this to make sure that you're a lot safer
on the road. Let's take a look at how this is all going to fit together. So first up, what we're going to be doing is installing the Ultralytics YOLO package. This is an implementation of YOLO, so it's not the official YOLOv4 implementation, it's called YOLOv5, but again, let's not comment on that Reddit fire that I had. So what we're going to be doing is leveraging the Ultralytics YOLO implementation to be able to fine-tune a drowsiness model. We're going to use LabelImg to label our images, so if you've seen my large TensorFlow object detection tutorial, it's going to be pretty similar as to how we actually go about labeling our images. And then last but not least, we're going to be able to make some detections as to whether or not we're awake or drowsy. Ready to do it? Let's get to it. Alrighty guys, so in this video we're going to be going through drowsiness detection. Now in order to do that, we're going to be going through six key steps. First up, what we're going to do is install and import our dependencies, and we're really going to be using PyTorch quite a fair bit for this. Then what we're going to do is load up that model that we're going to be leveraging, so again, we're going to be using what's known as YOLOv5; it's like an unofficial, I guess, extension of YOLOv4, so we'll be taking a look at that. Then we're going to be making some detections using the baseline model which has been trained on COCO. If you haven't heard of COCO before, basically it's this open-source dataset which I think has around about 80 different classes. The nice thing about this implementation is that you can try it out and you'll see some detections without even fine-tuning. But then what we're going to do is collect some images using our standard image collection code, train a model to detect whether or not we're drowsy, and then we'll actually go and make detections in real time using our webcam. So again, we're going to bring it all together and basically
have a fine-tuned drowsiness model by the end of this. Now the first thing that we need to go ahead and install is PyTorch. So if we go on ahead to pytorch.org, what we can do is just go to the install section right over here, hit install, and then choose the version that we're going to install. Now we can choose stable 1.9; I'm going to choose the long-term stable version, which is 1.8.1, because that's worked for me pretty straightforwardly, but again, feel free to use whichever one you want. Then what we're going to do is choose what OS we want to use, so I'm going to use Windows, and then we're going to choose what package, so pip. Let me zoom in on this so you can see it a little bit better: as the package we're going to use pip, and then we're going to use Python, and then I've got CUDA 11.1 installed on my machine, so I'm going to choose that. If you don't have a GPU, that's fine, it's just going to take you a little bit longer to actually go ahead and train this, but it's all good. So what we're going to do is choose the different options that we want, and then we can copy this down and jump back into our notebook and kick off with step one. So what we're going to do is first up install PyTorch. I'm just going to include an exclamation mark, paste that in, then get rid of the three here and run that. I believe I've got PyTorch pre-installed in this YOLO kernel already, so we should be good to go. Yep. And as per usual, all this code is going to be available inside of a Jupyter notebook via GitHub, so I'll include the raw stuff that we write in this tutorial as well as the refined code, so you can have a cleaned-up version. But for now, let's keep going. So we've got PyTorch now installed; the next thing that we need to do is actually go on ahead and grab this Ultralytics YOLOv5 model. So if I type in YOLOv5, I think there's only one out there, uh, okay, maybe not, should be YOLOv5, alright, here we
go. So this is the Ultralytics YOLOv5 implementation. Now again, as I was saying in the intro, there is a little bit of controversy around the naming of this particular model, particularly because it's not like an official branch of the YOLOv4 implementation, but I found this and it actually worked really, really well, particularly for production implementations and particularly for practical implementations. So I wasn't intending on actually making this tutorial around drowsiness detection, but I was like, hey, let's give it a crack and see how it goes, and the end results were actually really, really surprising. So again, a little bit of clickbait: stick around to the end and you'll see the full results, but again, it's really, really good. Okay, so what we're going to do in order to get this is we're going to clone this repository. You can see that it's got a git clone command there, so we'll clone this, and you can see here that this is basically what we're going to go ahead and do: we're going to clone it, we're going to cd into it, and then we're going to install the requirements for the YOLOv5 implementation. So if we copy this, I'll show you how to do it, and jump back into our notebook and add a new cell. I'm going to type in an exclamation mark and then git clone that. It might take a little while; I've already got it cloned, so I don't need to go on ahead and clone this, but assuming that I didn't have it cloned, it's going to go on ahead and clone it down. So let me show you what it looks like: if you get this error here, that basically is just a warning saying you've already got it cloned, or you've already got it there. So in my folder you can see I've already got a folder called yolov5, and this is basically what you're going to get cloned down. If we go and take a look at that, everything that you see over here has just been cloned over here. So basically we're now going to be able to leverage a YOLOv5 implementation, but before we do that we need to do one more
thing: we need to install the dependencies inside of this requirements file. So what it's going to go ahead and do is just make sure that we've got all of these different dependencies, or libraries, or modules, whatever you want to call them, installed. So let's go ahead and run this. Now I believe we've already got torch and torchvision installed; it's going to go ahead and do the rest for us. So we can go on ahead and grab this command here, so pip install -r requirements.txt, and we're just going to include another command: first up we need to cd into that folder, so cd yolov5, and then what we want to do is pip install -r requirements.txt. So effectively what we're doing is, from our command prompt, we're cd'ing into that yolov5 folder and then installing those requirements that you saw in that requirements.txt file. Let's run that. Looks like we've already got them all installed, no errors there, and that's pretty much the core install now good and done. So let's quickly recap: what we've done is we've installed PyTorch through this line here — remember, we just passed through our settings that we wanted, went and copied the command in, then ran it. We then went and cloned that repo, and then we went and installed the stuff inside of requirements.txt. From here on out it's all reasonably straightforward in terms of what we need to do, so there's not too much cloning and installing; apart from LabelImg — we've got to do that as well — but that's all good, we'll walk through it. So now that we've installed a bunch of stuff, what we now need to do is import it into our notebook, so let's go ahead and do that. Okay, that is our initial set of dependencies installed, or imported, into our notebook. We're going to import a bunch of stuff later on as well, but for now those are our four key dependencies that we need to go on ahead and load our model and make some initial detections.
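The install sequence described so far boils down to three notebook cells, sketched here as a list for reference. This is a sketch, not the exact commands from the video: the torch version pin is illustrative, and you should copy the exact install line generated by the selector on pytorch.org for your OS, package manager, and CUDA combination.

```python
# The three install steps as notebook cells (in Jupyter, a leading "!" runs a shell command).
# The torch pin below is illustrative only -- use the exact command pytorch.org generates.
install_cells = [
    "!pip install torch==1.8.1 torchvision==0.9.1",      # step 1: pytorch (the LTS pick from the video)
    "!git clone https://github.com/ultralytics/yolov5",  # step 2: clone the yolov5 repo
    "!cd yolov5 && pip install -r requirements.txt",     # step 3: the repo's own dependencies
]
for cell in install_cells:
    print(cell)
```

Running each of these as its own notebook cell reproduces the install walkthrough above.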
So what I've gone and written there is import torch; then I've gone and written from matplotlib import pyplot as plt. This first line, import torch, is going to import PyTorch; the second line is going to import matplotlib so we can actually do a little bit of rendering. Then I've gone and written import numpy as np, and this is just going to help us with a little bit of data transformation. If you haven't worked with numpy before, basically it's a really, really powerful array transformation library for Python, so again, super powerful, you can do a bunch of stuff with that. And then we've gone and written import cv2, so this is going to import OpenCV into our notebook and allow us to get started. Okay, so that's pretty much step one now done: we've gone and installed PyTorch, we've gone and cloned our repository from GitHub, we've gone and installed the dependencies from the requirements.txt file, and then we've also gone and imported a bunch of dependencies into our notebook. Now the next thing that we need to do is actually go ahead and load our model, so let's go ahead and do that. Okay, so I've gone and written one line there; now let me explain what this is actually doing. This is actually loading the pre-trained Ultralytics model from Torch Hub. So if I go to Torch Hub — this is sort of like TensorFlow Hub, it's the PyTorch equivalent — and if we actually go and search for, uh, how do we go and search on here, let's try Ultralytics, here we go, YOLOv5. So what it's basically doing is it's downloading the pre-trained model from PyTorch Hub, which is sort of a place where models can be hosted, and we're actually going to be using the baseline model. Now there's a couple of different models that are actually available through this: there's a small model, a medium model — and you can see them there, small model, medium model, large model, and an extra large, I believe it's an extra large model — and it sort of shows you the comparisons between
each of these different models. So we're going to be using the small model to begin with, but again, if you wanted to use a bigger model or a more hardcore model, you definitely could. Now what we're going to go ahead and do is actually load that up. What I've written is model equals torch dot hub dot load, and then I've passed through the package that we want to use, so ultralytics forward slash yolov5, and then I've passed through the actual model that I want to use. In this particular case, because there's those different versions of YOLOv5 — remember there is the small, medium, large and extra large version — we're going to be using the small one to keep things lightweight and quick. And then what we've gone and done, by downloading all of that, is store it inside a variable called model. So the full line is model equals torch dot hub dot load, and then the first parameter is ultralytics forward slash yolov5, and the second parameter, or second argument, is yolov5s. So if I run that now, it's going to go on ahead and load it up, and you can see it's picked up my GPU over there — still waiting on those 3090 prices to go down, guys. Alright, cool, so that's now gone and downloaded and we've now gone and loaded it into our notebook; you can see that we didn't get any errors there. So if we go and take a look at our model now, you can see all the different layers that are available inside of this model, and you can see it's massive, right? We've got a bunch of convolutions, plus I believe this was a sigmoid linear unit; there's a whole bunch of stuff inside of this, and there's a bunch of prediction heads coming out of this as well. But all good, we don't need to delve too much into the architecture; again, I'll include a bunch of links about this below so you can take a look at it. What we're now going to go ahead and do is make some initial detections — say, for example, we wanted to detect some stuff inside of an image.
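The imports and the model-loading cell described above can be sketched like this. It's a minimal sketch under the assumptions from the transcript (the "ultralytics/yolov5" hub repo and the "yolov5s" small variant); the torch import is kept inside the function because torch.hub.load downloads the pretrained weights over the network the first time it runs.

```python
# Imports used throughout the tutorial (cv2 comes in later, for the webcam loop).
from matplotlib import pyplot as plt  # rendering images inline
import numpy as np                    # array wrangling (squeezing the render output)

def load_yolov5_small():
    """Load the pretrained small YOLOv5 model from Torch Hub.

    A sketch of the notebook cell from the video: torch is imported lazily
    here because the hub call fetches weights over the network on first use.
    """
    import torch
    return torch.hub.load("ultralytics/yolov5", "yolov5s")

if __name__ == "__main__":
    model = load_yolov5_small()
    print(model)  # prints the full layer structure: convolutions, SiLU activations, detection heads
```

Evaluating the returned model object in a notebook cell prints the layer listing discussed above.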
We can actually do that. So first up what we're going to do is grab an image, and I'm going to grab the image that they use in the official documentation, so let's type that out. We're going to create a new variable called image and then pass through a link. Okay, and this should be dot com. So what I've gone and written there is image equals, and then I've passed through a string, and again, this is just for our initial baseline detection, which is going to be off the COCO dataset. So if you actually type in COCO classes — let me show you this — there's a bunch of different classes that you've got available within COCO, right? You've got person — let's zoom in on this — person, bicycle, car, motorcycle, airplane. I actually started testing this out for my object tracking tutorial, which is coming up in, I don't know, a couple of days, and it actually picked up the cars really, really well in the traffic video, so it's actually pretty cool. We might actually switch this out later on, but for now you can see that there's a bunch of different classes that are available inside of this baseline model, so we're going to be leveraging those, and again, I'll include a link to this as well in the description below. But for now, let's go ahead and make some detections. So what we can do is pass that image to our model: I can type in results equals model and then image, and then we can print out our results, results.print. Okay, so you can see that we've got some results. Let's actually take a look at the two lines of code that we wrote and then take a look at how this worked. We've written results equals model, and to that we've passed through the image that we want to make detections on, and that image is just a string, so it's just this over here. And then I've written results.print, and this actually prints out the results from that object detection model, but it doesn't render, so I'll show you how to do that in a second. So what it's actually gone and printed is it's
image 1 of 1 — so we've only passed through one image — and these are the dimensions of the image, so it's 720, but let's zoom in on that, 720 by 1280. It's detected two persons, so two people in the person class, and two ties, so these people must be wearing ties. It's also given us the speed, so in this case one thousand three hundred and seventeen milliseconds, and then it's also given us the time for inference, the seconds per image shape — so again, it's giving you a bunch of information over there. Now at the moment we're not actually rendering anything, so we might actually want to do some rendering, so let's go ahead and do that. And there you go, we've now gone and drawn our different classes. You can see there that the image is a picture of Zinedine Zidane, and you can see that it's gone and detected the class of person for him, so it's all the way through his body; it's also detected his tie. And again, I don't know, this guy is probably some soccer dude, I'm probably gonna catch some fire in the comments for that, but it's detected the class of person and it's also detected a bit of his tie as well. So let's actually break this down a little bit: what do we actually get out of the results from this model? If I type in results, you can see that right now this is just the base class, but if we actually type dot and then tab, there are a whole heap of different components out of this. You can see that we're actually able to get the xyxy coordinates, and this gives us all of the different coordinates for our particular model — so you can see it's got xyxy, I believe that's the case, it might actually have the confidence values as well, but this is giving us the coordinates from our particular model. What else are we able to get? So if we type in dot show, what's that going to do? Oh, that's actually opening up the entire image in a completely separate window. I haven't actually played with this one, but that sort of gives you an
idea as to what's possible. You can also type in results.render, and this is actually going to return the image with the detections drawn on it — so again, it's really, really quick, it's able to actually draw that image. So what we've actually gone and done in order to render it like this is: I've first up made sure that we can render inline using matplotlib, so I've used some inline magics — I've written percentage sign matplotlib inline — and then we've gone and used the matplotlib imshow function to actually go and render. So if you take a look at the shape of this — that doesn't have a shape, what do we have here — let's put it in a numpy array, and if we type shape, you can see that it's encapsulated in another set of arrays. Now, in order to render using matplotlib, we need to basically squeeze this and extract just these components — we want the 720 by 1280 by 3. So if we change this numpy dot array function to numpy dot squeeze, you can see that we're going to get that, and then to actually render it we can just type in plt.imshow, which is a built-in matplotlib function, and get rid of dot shape, and this is going to allow us to render our image. To actually show it the first time, I've just written plt dot show as well, so that allows us to go on ahead and render our image. Pretty cool, right? So again, this is really, really lightweight and sort of shows you what's possible in terms of making detections. Now let's find an image of car traffic, so, car traffic — I personally think this is super cool, because you can actually extract a whole bunch of classes from cars. Let's get one where you've got some defined cars, let's try, I don't know, this one, and if we open our image in a new tab — we just want to, uh, copy image address — okay, that should be good, let's see if that works. So if we grab that link now and sub it out over here — it's a giant link — see if it works. Okay, so it's actually gone and detected those classes.
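The detect-and-render cells described above can be sketched as follows. This is a sketch, not the video's exact notebook code: the example URL is the Zidane image from the Ultralytics documentation mentioned earlier, and results.render() is the yolov5 results method that returns the annotated image wrapped in an extra batch dimension, which is why the squeeze is needed before imshow.

```python
import numpy as np

def to_displayable(rendered):
    """results.render() hands back the annotated image inside an extra batch
    dimension (roughly shape (1, H, W, 3)); squeezing drops it so matplotlib's
    imshow can display a plain (H, W, 3) image."""
    return np.squeeze(np.array(rendered))

def detect_and_show(model, image):
    # model(image) runs inference on a URL, file path, or raw array
    from matplotlib import pyplot as plt
    results = model(image)
    results.print()                              # summary: classes found, timings
    plt.imshow(to_displayable(results.render())) # draw boxes, then render inline
    plt.show()

if __name__ == "__main__":
    # Usage sketch -- assumes a model loaded from torch hub as shown earlier.
    import torch
    model = torch.hub.load("ultralytics/yolov5", "yolov5s")
    detect_and_show(model, "https://ultralytics.com/images/zidane.jpg")
```

Swapping the URL for any other image link (the traffic photo, say) reruns the same pipeline, which is exactly the workflow narrated above.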
And again, all I've needed to do in order to go and make detections on a completely different image is pass through the new link. What I did is I went and found an image and just updated this image variable over here, and by running this next line — you can see results equals model and then image — we've gone and passed that image to our model again and saved the results inside of a results variable, and then by writing results.print we're able to get all of this metadata here. So you can see it's detected 38 cars — mouse is lagging — detected 38 cars, and it's also detected four different trucks, and you can see it's given us our speed, so it's actually relatively quick. Now if we go and render this image — how cool is that — it's actually rendering all of the different classes that have gone and been detected. It's detecting all the cars, it's detected the trucks — pretty cool, right — and with reasonable confidence scores all the way out to there. I think this is personally one of the coolest models that I've worked with, and again, we can run this function down here — this one's just sort of a copy of that, but it explains how it works — but that shows you how you can build baseline detections using this particular model. Cool. Now we could also do this with OpenCV as well, and do it in real time, so let's go on ahead and do that. I should probably say that that is step three now done: we've gone and made some initial detections, so we've gone and captured our images, made some detections using an image, and we've gone and rendered them as well. But now what we're going to do is go into step four, where we're going to do it in real time. For this we're going to use a pretty common OpenCV loop where we're actually able to access the image from our webcam, and we'll do it with our baseline model to begin with, so let's do this. Okay, so what I've gone and written there — if you've ever watched any of my standard computer vision tutorials, this is going to look super familiar to you — it's just accessing our webcam,
and it's going to allow us to make detections in real time. So what I've written is cap equals cv2 dot video capture, and this is just accessing our webcam, because my webcam is video capture device zero — but again, if you don't get a real-time feed, just play around with that, you might need to change it from zero to one to two; on my Mac it's video capture device two, on my Windows machine it's video capture device one. And then what we're basically saying is we're going to loop through the feed from our webcam. What we're first checking is whether or not our capture device is still open — so while cap is opened — and then we're reading our capture, so cap.read, and we're extracting, or unpacking, the variables from that capture device, so ret comma frame equals cap.read; what you're actually going to get back is a return value plus the frame, or the image, from your webcam. Then what I've written is cv2.imshow, passed through the name of the frame — when you actually go and render, you'll get a little pop-up, and the top bar is going to be named whatever you pass here, so if I wanted it to be called YOLOv5 or Ultralytics or drowsiness detection, all you need to do is change this variable here and it will change what's rendered. The next argument that we've passed through is frame. Then everything below that is all to do with exiting out of this gracefully. What we're effectively checking, between each couple of frames, is whether or not we're pressing the q key on our keyboard. In order to do that, we're checking if cv2.waitKey 10 and 0xFF equals equals ord q — that's what we've written in order to do that check — and if that check passes, then we're going to break out of this while loop, and then we're going to do some cleanup: we're going to release our capture device, so release our webcam, and we're also going to destroy all the windows — so you'll see that we'll get a little pop-up, we're going to
destroy that. So let's go on ahead and run this — this isn't actually making detections as of yet, this is just accessing the real-time feed — let's run this and make sure it works, we should get a pop-up. Alright, so we've got a pop-up, you can see my head, so no issues there. That's looking all well and good for now, but what we actually want to do is make some detections using the YOLO model, right? So let's quit out of this and update this code — there's just a couple of lines that we need to update in order to make our detections. Okay, so there's the first change that we've gone and made: first up, what we're actually going to do is make some detections. In order to do that, we're going to use our YOLO model and we're going to pass through the frame that we get from our webcam. Remember, when we run cap.read we're going to get a return value plus our frame back; in order to make detections, what we need to do is pass that frame to our model, and we're going to get our results — no different to what we got up here. Now the last thing that we need to do is actually make an update to our cv2.imshow function to go on ahead and show those results. So what we're going to do is run results dot render, and we're going to wrap that inside of our numpy function which is going to squeeze it, so np.squeeze — let's make sure we've got enough brackets — yep, okay, cool. So that is the only change: before, what we had here was frame — let me just cut that out — and instead of that, what we're going to do is use the results.render method, because remember, the results.render method only returns the image array. If I type in results dot render, what we're getting back out of this is the array representation of the image; in order to actually render it, we need to pass it through to a rendering function, in this particular case cv2.imshow, and remember we need
to squeeze out the results, so we need to extract it out of that big array. And if we go and run this now, we should get some results, so let's try it. Okay, so if you get a little pop-up and it closes, perfectly fine, just go ahead and run it again. Cool, so you can see that that's working pretty well — it's detecting this as a cell phone, and if I actually throw up a cell phone, you can see it's detecting that as a cell phone. On the headphones, does it work, what's it saying? It's calling that a remote — ah, it doesn't want to do that. I'm trying to find something else — uh, what about an iPad? It's saying that the iPad is a chair, that's cool. What about if we took the green screen down? You can see it's detecting a bunch of stuff in the background: it's detecting the couch, it's detecting that I've got all the chairs on my dining table, it's even detecting the potted plant over here, which is pretty cool as well. So you can see that really, really quickly you're able to get up and running and testing this out. Now at the moment we're only doing it in real time and we're not doing it using a fine-tuned model, so we might want to improve this a little and actually fine-tune and build something specific. But before we do that, I want to show you one more thing. Right now we're doing it using our webcam, but what if you wanted to do this using, say, a video? Let me show you how to do that. In order to quit out of this frame, all you need to do is hit q on your keyboard and that's going to close it down. Now say, for example, we wanted to do it using a video — I've got a video of some traffic data that I was looking at recently, let me show you what that looks like: it's called traffic, and that is ridiculously tiny, there you go, so you can see it's just a video of a bunch of cars. Now say we wanted to perform our object detection on that video, how would we go about doing that? The file name is called traffic.mp4, so all we need to do in order to make detections off a video instead of
off our webcam is to change this value here. So if I type in traffic dot mp4, you're now going to get detections from that video — I've got the pop-up, it's closed, let's try running that again. So you can see it's now opening up with our video, and you can see all those detections in real time, and look at how quick it is. Honestly, for an open-source implementation this is so, so good — even though it might not be the official YOLOv4 model, look how good this is: it's detecting all of the cars in the frame and it's ridiculously quick as well. Pretty cool, right? So I mean, you can see that there are a ton of possibilities with this. Right now it's just trained on the base COCO model, but what happens if we wanted to go and train this on something else? Say, for example, we wanted to go and fine-tune our model — you can definitely do that as well, but for now just look at the speed of this thing, I mean, it's pretty cool, right? We'll make this a bit bigger — doesn't want to go bigger — not too shabby, right? Alright, enough taking a look at that, let's close that down. So what we can do is sub this back out and leave this as zero. That is step four now done: we've now gone and performed our real-time detection. If we take a look at what we've done so far — we've sort of sped through this — we've gone and installed and imported our dependencies, we went and loaded our model from Torch Hub, we then went and made some detections — remember, all we needed to make detections was an image, I'll probably update this heading to say with images — so what we then went and did is we grabbed a link, passed that to our model, and were able to make detections, and then we made real-time detections both with a video and our webcam. Now what we want to go ahead and do is actually train a custom model — rather than it just being trained on the COCO classes, say we wanted to go and train this using some custom labels — so what we're going to go ahead and do is now train our drowsiness detector.
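The real-time loop described in step four can be sketched roughly like this. It's a sketch under the same assumptions as before — a yolov5 model loaded from torch hub, and a webcam on capture device zero; passing a file name such as traffic.mp4 as the source runs the same loop over a video instead, exactly as narrated above.

```python
import numpy as np

def should_quit(key_code):
    """The graceful-exit check from the loop: mask the waitKey result to one
    byte and compare it against the q key."""
    return key_code & 0xFF == ord("q")

def run_realtime(model, source=0):
    # source=0 is the default webcam; a path like "traffic.mp4" plays a video file instead
    import cv2
    cap = cv2.VideoCapture(source)
    while cap.isOpened():
        ret, frame = cap.read()             # ret is the success flag, frame the image
        if not ret:
            break
        results = model(frame)              # run yolo detection on the current frame
        # render() returns the annotated image array; squeeze drops the batch dimension
        cv2.imshow("YOLO", np.squeeze(results.render()))
        if should_quit(cv2.waitKey(10)):
            break
    cap.release()                           # cleanup: free the webcam...
    cv2.destroyAllWindows()                 # ...and close the pop-up window
```

The window title string passed to cv2.imshow is arbitrary, as noted above — change "YOLO" to rename the pop-up.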
So first things first, what we need to do is collect some images and then label them. I'm going to copy this loop over here, and we're going to use pretty much this same loop to go ahead and collect our drowsy and non-drowsy images. In order to get started with that, we need to import a couple of additional dependencies, so let's go ahead and do this. Okay, so I've gone and written three different lines of code there. First up, what we've gone and written is import uuid, and we're going to use uuid to create a unique identifier; this is going to be used to actually name the images — we're going to collect images of myself in real time to go and perform this drowsiness detection. Then we're going to use import os to actually leverage, or work with, our file paths — it just makes it a lot cleaner to do that. And then we're going to use time to take a little bit of a break between each one of the images that we actually go about collecting. Now what we can actually do is update this code here to not just capture the real-time feed but also save the images for our different classes, so let's go on ahead and make these updates — actually, you know what, it's probably going to be cleaner if we write this from scratch. I'd rather do it from scratch, let's do it from scratch. Okay, so let's go ahead and do this, let's add in a couple of cells below so we can do that. Okay, let's do it. So before we go and write up our image collection loop, we need to define three key variables. Now if you've watched the full TensorFlow object detection course that I made, this is going to be really, really similar. First up, what we're going to do is create an images path, and this is where we're going to save our images, or effectively our image data. In order to do that, I've written images underscore path equals os dot path dot join, and then I've specified what we want our top folder to be and what we
want our subsequent folder to be so basically all of our images are going to be saved inside of a folder called data forward slash images so it's effectively a subfolder of our data folder it's going to look like this so data forward slash images like that voila all right and then we're going to collect two different types of images so we're going to have two classes that we're going to work with we're going to have awake so basically me looking alive and on it and not what i'm looking or feeling like right now and then me looking drowsy so like a little bit down and disheveled and maybe closing my eyes and then the third parameter or the third variable that i've gone and defined and actually let's take a look at labels so in order to do that i've written labels equals and then inside of square brackets i've just passed through two strings so the first one is going to be awake the second one is going to be drowsy then the next variable that i've specified is the number of images that i want to collect so i've written number underscore images or imgs equals 20.
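as a minimal sketch, the three variables described above could look like this (assuming the notebook runs from the project's top-level folder):

```python
import os

# where the collected images will be saved (a subfolder of the data folder)
IMAGES_PATH = os.path.join('data', 'images')

# the two classes we want to detect
labels = ['awake', 'drowsy']

# how many images to collect per class
number_imgs = 20
```

os.path.join is used instead of hard-coding slashes so the path works on both windows and linux.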
now what we can actually go ahead and do is actually we haven't run these cells yet so what we can go ahead and do is actually now leverage these pieces of information so what we're going to do is we're going to loop through our labels and so we're going to collect images for awake first and then we're going to collect images for drowsy and for each one of these labels we're going to collect 20 images for each then eventually what we're going to go ahead and do is label them using a library or a package called label image so i'll show you how to get that set up as well but for now let's go on ahead and start writing this loop okay so those are our first four lines of code written so the first line that i've written is cap equals cv2 dot videocapture and then i've passed through video capture device zero so again no different to what we're doing up here then the next line that i've written is the kickoff of a loop so for label in labels so we're just going to be looping through our labels over here so if we do that for label in labels so if we went and printed out a label right so effectively all we're doing there is we're looping through each of our labels then what we're going to do is i'm actually going to print this out so we'll actually see when we're collecting or transitioning between labels so i've written print and then inside of a string so i've written collecting images for and then i've put through some curly braces so this is going to allow us to perform some string formatting and inject our variable into it then i've written dot format and i've passed through our label so this is effectively just going to be printing out the different labels inside of a nice string so if i go and run that this should be label good pickup so it's going to be printing out first up collecting images for awake and then collecting images for drowsy then what we're going to do is we're going to sleep for five seconds when we're transitioning between each
of our different labels then what we want to do is we want to loop 20 times now so we're going to loop through each one of the images that we want to collect so let's go ahead and do that let's add some commentary as well so this is the first loop loop through labels this is the second loop loop through images or image range let's do it okay so that's the next three lines of code so i want to sort of take this step by step so we don't just fly through it and i'm not explaining what i'm writing so what i've then gone and done is we're going to then loop through all of our different image numbers so remember we've got a range that we want to loop for we want to collect 20 images for each one of these classes so i've written for image underscore num in range number underscore images which is going to be looping through this value and then we've effectively gone and applied a little bit of string formatting similar to what we did up here but this time we're going to be printing out collecting images for and then we're going to print the label and then the image number so what image we're actually up to and then no different to how we accessed our webcam before i've written ret comma frame equals cap.read so we're going to read the feed from our webcam so if we take a look this is going to be akin to doing this so let's copy this put that inside of there so you can see it's going to first up print collecting images for awake and then print collecting images for awake image number 0 all the way up to 20. so it starts at 0 but ends at 19.
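the two nested loops described above can be dry-run without a webcam just to check the structure and the string formatting, collecting the status messages the real loop would print:

```python
# dry run of the nested collection loops (no webcam access),
# gathering the status messages the real loop would print
labels = ['awake', 'drowsy']
number_imgs = 20

messages = []
for label in labels:
    # printed once when we transition to a new label
    messages.append('Collecting images for {}'.format(label))
    for img_num in range(number_imgs):
        # printed once per captured image
        messages.append('Collecting images for {}, image number {}'.format(label, img_num))

print(messages[0])   # Collecting images for awake
print(messages[-1])  # Collecting images for drowsy, image number 19
```

note how range(number_imgs) starts at 0 and ends at 19, exactly as in the transcript.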
so 20 images then what we're actually going to do now is we're going to access our webcam we're going to take a picture so sort of like taking a photo and we're going to save that to our disk and remember we're going to save it inside of our data folder and specifically the images folder inside of that folder so let's go ahead and do that and we should probably also actually create that folder as well so let's go ahead and do that first up so i'm going to go into the main folder that i'm working so yolo v5 and i'm going to create a folder called data and then inside of that i'm going to create a folder called images cool let me zoom in on that right so if we take a look so this is my top level folder so if i go into data and images all of our images are going to be written here so remember you just need to create a top level folder called data and then the secondary one called images okay now let's go ahead and finish this off so we're going to first create a unique name for our image then save our image and then we'll actually wrap this up okay so we've gone and written what is that four different lines of code there so i've gone and applied these new lines so i've written image name equals os.path.join and then to that we've passed through the images path so remember when you actually name this image or in the next line we're actually going to write it out we need to pass through the full file path to this particular image so what we're effectively doing is we're passing through the initial file paths of where we're going to store these images and then we're just creating the name of the image here so let's actually take a look at what this would look like so if i copy this paste it down here and print out our image name so what it's going to do is it's going to create so again this is what we wrote in the first couple of lines this line is going to create a full file path to our image so data backslash images and then the first image is going to be called awake and
then we've got this weird number here actually we need to pass through jpeg as well uh so we need to plus dot jpg that should be better there we go should be that let's actually replace this line before we forget so what we've actually gone and written is or what is actually happening here is we're grabbing first up our images path so data backslash images and then we're specifying the name of the class a unique identifier so they don't overlap or overwrite each other and then we're passing through dot jpg so this is going to be the file extension and in order to do that i've written image name equals os.path.join and then we'll first up pass through our images path and then this is where the magic happens so first up we're passing through the name of the label then we're passing or appending a dot to that so i've just used pluses in this case you could do string formatting probably be a little bit cleaner so plus dot plus string or str and then this uuid function here is going to be what creates this component so that over there then i've written plus and then the dot jpg so this is going to append the jpg bit down there so let's actually take a look at that so effectively we've got so let's say a label is going to be drowsy in this particular let's grab one label so if i type in labels[0] it's going to be awake so what we're effectively doing is we're going plus dot so that gives us let's close this so we don't need to see that so plus dot and then plus a uuid so we need to wrap that in a string so uuid dot uuid1 right so that's giving us that bit and then we're adding the dot jpg and that gives us our full file path now if we go and append data what is the images path and we need a forward slash but we've gone and used the os dot path so that's effectively giving us that now that's obviously not all that clean so if we use os.path.join it's going to be way cleaner that is how we're creating this over here now you saw that when we printed
it out we didn't get that double backslash that's just a function of the os.path library so you can see it gets rid of those double backslashes which are effectively the escapes okay so that is that line explained then the next line sorry detailed explanation but i wanted to go through that so the next thing that i've gone and written is cv2 dot imwrite and then to that we'll pass through the full file path to our image that we're going to be writing out in this case it's image name and then we've gone and passed through a frame so remember our frame is what we're capturing from our webcam so this is our webcam feed this is naming our image path so cv2 dot imwrite is going to be what actually writes out our image so writes out image to file and then this down here or this cv2.imshow function is no different to what we used up here to show our initial results so this is going to render to the screen and then i've included a time.sleep function to basically give us a little bit of time to move around between each image being collected so this is going to give us a two second delay between captures so in order to do that i've written time dot sleep and then i've passed through the value 2.
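the file-naming step described above can be sketched on its own, without the webcam parts, to see exactly what os.path.join and uuid produce:

```python
import os
import uuid

IMAGES_PATH = os.path.join('data', 'images')
label = 'awake'

# build a unique file name of the form data/images/awake.<uuid>.jpg
# so images never overlap or overwrite each other
imgname = os.path.join(IMAGES_PATH, label + '.' + str(uuid.uuid1()) + '.jpg')
print(imgname)
```

in the real loop this name is then handed to cv2.imwrite(imgname, frame) to write the captured frame to disk.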
so no different to what we did up here where we pass through time dot sleep 5 and again if you wanted a shorter break between the image captures you don't need to pass through two or five you can drop it to whatever you want but it's in seconds so you can adjust it as needed then the last thing that we're going to do is we're actually going to copy this breaking gracefully code and append this down here i'll put this down here and that is pretty much our loop done should this be tabbed in like that cool so that is our image capture code now done so let's go on ahead and test this out so we might do a test run and if it works then we should be good if it doesn't we'll try it again so let's run this and ideally as we're collecting images we should see this folder start filling up so we're going to collect images for awake first and then for drowsy so i'm going to move this over to the side so we can check make sure we're getting images so we don't have a pop-up yet should come soon okay so you can see it's collecting images so awake so i'm actually going to quit out of this while we're still going okay so rather than collecting images with a green screen background i want to drop the green screen just so we've got a little bit of noise in the background as well for our image collection so let's go on ahead and drop down the green screen and then so first up we're going to collect images for me being awake so eyes wide open looking up and around and then drowsy so sort of like eyes closed head down that type of thing so let's go ahead and do this so i'm going to move the mic over drop the green screen so that's not there and then we're going to run this cell and eventually you're going to see all these images being captured so let's go ahead and do this so we don't have our pop up yet should come up any second now there we go so wide awake just moving my head around eyes wide open i won't exaggerate it too much and again you could do this at a whole bunch of
different angles are we up to drowsy yet i'm still awake 16 so you can see that being printed down the bottom there all right cool now drowsy so eyes closed doing number 11 cool all right cool that's 19 images now collected so let's take a look at what we've got there so we've got our images inside of our folder that one's a little bit overexposed but you can see i'm looking awake there and then i'm drowsy so you can see we've got our drowsy images as well and drowsy is pretty clear because my eyes are closed i'm starting to fall asleep that's going to be an indication as to whether or not we're drowsy right okay now what do we need to do next so let's close this so what we now need to do is we actually need to go ahead and label these images now in order to do that we're going to use the label image package so if i show you that again i'll have a link to this in the description below so but in order to access it you can go to github.com i can never pronounce this but it's t-z-u-t-a-l-i-n and then forward slash label image so this repository over here all right reasonably easy to get up and working with and it works pretty well so what we're going to go ahead and do is use this so what we need is this link here so i'm going to copy that i'm just trying to look over my mic to my keyboard so let's go ahead copy that and we are going to clone this so git clone should probably add another section for this that's fine so git clone paste that in and this is going to clone into our current working repository so let's go and do that and effectively what you should start seeing is that we will now have a folder called label image so you can see that that's already been created there and we're going to wait for that to clone down and then we'll go on ahead and install it in a sec okay that is label image now cloned now if we go and take a look you can see that we've got that clone there let's zoom in and you can see we've got a bunch of oh where are we going you can see
we've got a bunch of stuff available in there right but right now as of yet it's not fully installed so we actually need to go ahead and install this so let me walk you through that there's really just two commands but basically the first one involves this dot qrc file so if you don't have that in there just clone it down again and then the actually the first command is we need to install a couple of dependencies then the second one is just going to be pre-processing this resources.qrc file so let's go ahead and write the two commands to do that okay so before i go on ahead and run that let's actually take a look at what we've written so the first line is actually installing some dependencies and there's two key dependencies here pyqt5 which i believe is a gui library and then lxml i believe is a dependency of pyqt5 so the first command is exclamation mark pip install pyqt5 and then lxml and then dash dash upgrade and the second command is exclamation mark cd and so this second command is actually working with that resources.qrc file that i mentioned so what we're doing is we're effectively cd-ing into that label image folder because remember we're in the top file repository at the moment so we'll go into that and then what we're going to do so ampersand ampersand we're going to run pyrcc5 dash o and then we're going to pass through libs forward slash resources dot py and then resources dot qrc this is just part of the standard installation code for label image so if you actually go i'll show you where it is so i'm on a windows machine so uh you can see these commands here all right so i'm using pyrcc5 that's fine you can use whichever one you want but if you were using a mac for example you'd be using these commands over here right so it's slightly different if you're using linux it's going to be this over here cool so let's actually go ahead and run these so we haven't run them yet cool so that's now gone and done now what we can actually do is go ahead
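putting the clone and the two setup commands together, the notebook cells described above look roughly like this (these are the windows/pyrcc5 variants from the label image readme; mac and linux use slightly different commands as noted):

```
!git clone https://github.com/tzutalin/labelImg
!pip install pyqt5 lxml --upgrade
!cd labelImg && pyrcc5 -o libs/resources.py resources.qrc
```

the exclamation marks just tell jupyter to run these as shell commands rather than python.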
and open up label image so i'm going to minimize this for now and what i first want to do is inside of our data folder i want to create another folder so we're going to create another folder called labels so we should now have two folders inside of our data folder images which contains all of our images and then labels which at the moment is blank then what we're effectively going to do is we're going to point label image to our images so that we're actually going to be able to label those images and then we're going to point the output to this labels folder but that's fine we are going to do it in a sec so what do we need to do we need to start up label image now so let's go ahead and do that so i'm going to open up a new command prompt and i'm going to go into my d drive cd youtube i'm just going to make sure my oh where's my yolo folder it's yolo v5 so 29 and i'm going to activate my environment because i'm currently using one that's purely optional if you've done it so i'm now operating inside of the same kernel so i can actually just clear this so i'm now operating inside of the same kernel where i've already installed my dependencies and where i've got everything set up now a key point to note is that if you're not working with a virtual environment and you're just installing into your local pc you don't need to do any of what i just did so as in activating my environment you can just use the base environment inside of your computer but if you get stuck on this hit me up in the comments below i'm more than happy to help you out so now what we're going to go ahead and do is go into our label image folder so cd label image and then what we can do is actually start up label image i believe it's python and then label image dot py and you can see this is label image now opened up now the cool thing about label image is that it makes it relatively easy to go on about labeling your images for
object detection so what we first up need to do is open up our images directory so we can over here just hit open dir and i'm going to point that to where we've got our images that we collected so right now it's inside of data and then images and then this is it's not going to show you any images in here but this is where all of our images are so if i select that folder you can see all of our images are in there and you can actually browse through these just using the a and d keys and you can see we've got them all there right cool all well and good now what we secondly want to do is change our save directory so right now it would be in the same folder so we're going to choose change save directory and it's already pointed there so we are going to go into our top level folder go into data and then go into labels so we're going to save it all inside of this labels folder and then what we can go ahead and do is start labeling our images so remember we're going to have two different classes so drowsy and awake so this particular image even though it's a little bit overexposed is an image of me awake then the next ones as we go through them we're going to do drowsy so let's go ahead and do this now in order to label an image you need to just hit w on your keyboard and it's going to allow you to label so all you then need to do is go to the uppermost so we're going to label our head right so i'm going to do that and i'm going to type in this is me awake hit ok oh another key thing to know is that when you are actually labeling this because we are using a yolo model what we need to do over down here is choose the yolo format so there's a bunch of different formats that you can save this as so we want to save it as yolo so there's create ml pascal voc we want it saved as yolo right really really important so this is going to change the regular output format that you'd get from label image to a format that yolo can work with so
just make sure you've got it saved as yolo or make sure you've got it set as yolo and you should be good to go and then what we're going to do is save this annotation and let's just make sure that we're actually saving these so if we go into data and labels you can see that we've got our classes output and we've also got our different annotations and what you actually get inside of this so let me actually take a look does this have all of our classes so one two three four five six seven eight nine ten eleven twelve thirteen fourteen fifteen sixteen wait oh it'll start at zero that's cool so what we're gonna get out of this is an annotation in this format so this is the yolo format a little bit lightweight so it's going to give you the class number and then it's going to give you the x center and y center coordinates followed by the width and the height so it's basically giving you the different coordinates for your bounding box and these are normalized coordinates as well key thing to remember and then you're also going to get this document called classes really really important that you keep that file in there as well so these are all the different classes that yolo by or label image by default is going to have associated that's fine don't worry about the fact that we've got those other ones in there it's still going to work perfectly as long as you've got awake down the bottom and eventually drowsy so what do we do we hit w we labeled this image now we can go to the next one hit w again label that one and it's going to be awake as well save it just double check that we've got our second one saved cool so we've got both of them in there now all right let's keep going through so i'm just going to keep hitting w labeling them oh this one's blurry this would be interesting all right we're going to speed through this so i'm going to power through this and label all of the images for awake and then we'll be back in a sec once we're ready to go to drowsy okay so we're now up
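to make the yolo annotation format above concrete, here's a tiny sketch that parses one label-file line; the actual numbers here are hypothetical, the real ones come out of label image:

```python
# one line of a yolo-format annotation file:
# <class id> <x center> <y center> <width> <height>, all normalized to 0-1
# (the numbers below are made-up example values)
line = '16 0.5104 0.4427 0.2708 0.3854'

parts = line.split()
class_id = int(parts[0])
x_center, y_center, width, height = [float(v) for v in parts[1:]]

# normalized coordinates must all sit in the 0-1 range
assert all(0.0 <= v <= 1.0 for v in (x_center, y_center, width, height))
print(class_id, x_center, y_center, width, height)
```

the class id is a zero-based index into the classes.txt file that label image writes alongside the annotations, which is why that file has to stay in the labels folder.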
to the drowsy images so we're going to again as per usual hit w on our keyboard to open up our crosshairs and what we're going to do is again select my head and this time i'm going to change the class so we're going to change this to drowsy so you can see that down there so rather than labeling it as awake we're going to label it as drowsy and hit okay save and let's just go and double check our classes so you can see that we've got all of these and inside of our classes.txt document you're now going to have the drowsy class as well you can see we've got drowsy as well inside of our classes.txt file or inside of our annotations down here so we are now good to go let's keep powering through so we're going to keep labeling these ones drowsy now so again i'm going to hit w make sure it's drowsy save drowsy save all right i'm just gonna power through these and we'll be back once we're done [Music] okay that is all of our images now labeled so again if we go through them you can see that they've all got classes so again you can use a and d to go left and back and actually go through all of these images so you can see that they're all labeled these ones are all drowsy as you can see up here these ones are all awake which you can see up there okay that is and so now that that's done you can actually just close this down so we don't need label image anymore we are good we can close down this command prompt and we're good to go now so if we go and take a look at our data so we've got all of our images let me just hide this preview pane so we've got all of our images there and we've also got all of our labels there as well so that is all inside of our labels folder so we've got all of these annotations now good to go now all that's left to do really is run our training command so let's actually go on ahead i'm going to write this out and i'm going to walk you through it so let's do it okay so that is our training command now written now there's one
additional thing that we need to do before we actually go and run this but let me actually walk you through this first up so what i've written is exclamation mark cd yolo v5 so remember we cloned our yolo v5 repository right at the start so right up here so what we're first up doing is we're cd-ing into that and then by passing through ampersand ampersand we're going to run the python train dot py file so let me show you this so if we go into youtube and then yolo v5 so what we're going to do is we're effectively going let me zoom in yolo v5 and then train dot py so we're going to be using that over there then what i've gone and written is dash dash img 320 so this is going to be the size of the image that we actually go and train on i've passed through the batch size we're going to pass through a batch size of 16. so again you might need to adjust this depending on the size or performance of your gpu then i'm going to pass through the number of epochs that we want to train which in this case is going to be five different epochs and then this is where we've got to do a little bit of additional work so i've gone and passed through dataset.yaml so dash dash data is then passed dataset.yaml so this effectively gives us our configuration for our training run so as of right now we don't actually have this setup or i've got it set up from a previous run but we're going to do it from scratch and then the last thing that we need to do is pass through the weights so what model that we actually want to train on and in this case we're going to pass through yolo v5s.pt and this is actually going to download from the main repository so you don't actually need to have these pi torch weights actually pre-downloaded it's going to do it for you but let's actually go ahead and create this dataset.yaml so i'm going to delete the one that i've already got in there so you can see i've got one here and what we're going to go ahead and do is create a
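assembled into a single notebook cell, the training command described above looks roughly like this (flag spellings follow the yolov5 train.py arguments; the exact repo folder name may differ on your machine):

```
!cd yolov5 && python train.py --img 320 --batch 16 --epochs 5 --data dataset.yaml --weights yolov5s.pt
```

yolov5s.pt is the small pretrained checkpoint and gets downloaded automatically on first use, so only dataset.yaml needs to exist before this runs.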
new dataset.yaml file so let's do this so i'm going to open up a code editor so i'm just going to use vs code for this and i'm going to create a new file and we're going to save it as dataset.yaml inside of that yolo v5 repository so make sure that when you go and save this it's inside that yolo v5 repository so we're going to call it dataset.yaml and it should be a yaml file save that all right so at the moment we don't actually have anything in this so let's actually take a look at what we need to include in this file so this is actually from the official ultralytics repository so if you actually go into let me show you where this is it's again yolo v5 so if you go right down to the bottom where it says training if you go to tutorial this is actually going to give you information about training but we actually want to go to the train custom data bit and we need to create this over here so you can see that it says create dataset.yaml so what we're going to do is copy this paste that in there and we basically just need to specify these different components so let's go ahead and fill this out so there's a couple of things that we need to fill so namely we need to specify the path or this is our root path then we need to specify our training path our val path and our test path so i'll explain this in a second so our path is our root directory path where our data resides so in our particular case it's going to be dot dot forward slash data and then we need to specify where our images are now our images are just inside of a folder called images so we can actually just specify it like this we can get rid of these comments as well so we don't need this we don't need this and we don't need this and we can get rid of this cool so our top level folder is going to be dot dot forward slash data so this is going to go outside of our main folder into our data folder and then our images or our training images are inside of a folder called images and we're just going to use
the same images for our validation as well so again it's not optimal you'd probably capture some more images if you wanted to in this case should be fine and then we've gone and specified our val or where our validation images are in this particular case we've gone and specified that it's the same folder as our train folder and then we can get rid of this test component we don't need that and then in terms of our number of classes so nc represents the number of classes that we want to capture we basically just need to represent all of the different classes that we've got within let me find where we are so if we go into youtube all of the different classes that we've actually got within that classes file within our labels so if i go into data and then labels we just need all of the number of classes that we've got in here so if i copy that so you can see we have 1 to 17 so we've got 17 different classes so i'm going to copy this i'm going to type in 17 here and what we actually need to do is we need to actually paste in all of our different classes that we've got here so we can actually delete all of this and paste in our class names and we're going to convert this into a series of strings it's probably a faster way to do this but i am not the greatest at vs code so what we're going to do is convert all of these strings and add a comma between them cool so now we've got our class names now made available now i'm just going to bring it all into one line okay so that is our dataset.yaml file now set up so basically all we've gone and done is we've gone and specified our root directory for our data so dot dot forward slash data and this is going to be where it looks for your labels as well so your labels folder needs to be inside of that top data directory as well so again remember we've got it set up correctly so if we go in we've got data so this is going to be our root folder we've got our images here and our labels here so we are good to go then we've gone and
specified where our training images are where our val images are so we just need to specify images let's actually make this a bit bigger so we can see and then we've gone and passed through nc which represents the number of our classes or how many classes we've got and then we've gone and specified all the different labels that we've got inside of the same order that we got from that classes.txt file that's generated by label image and i've just gone and put it all on one line as well and so we've wrapped those inside of quotes so they're all strings and we've added a comma between each one of those so if we actually save that now and close that down let's actually take a look at that dataset.yaml so again this needs to be inside of the yolo v5 folder so you can see it over here so dataset dot yaml so if we open that up again this is all the stuff that we just went and did so that's pretty much it when it comes to setting up your configuration for training now if we go back into our notebook let me open this up we can effectively now go and run this line and this should kick off our training so let's run this and see what happens looks like we've got an error there and it's saying it hasn't found our yaml file it should be in there what have we done please oh so right now it's a dot yml file that's sounding promising i can hear my gpu spinning up nope we got an error again what's happened now okay so it looks like we've got another error there and you can see here that it's saying broken pipe error error no 32 broken pipe so in the github repo you can actually see that there is some resolution for this and the way to solve it is to drop the number of workers that pi torch is using at that particular point in time so we can actually solve this by adding dash dash workers and then setting that equal to either zero one or two so ideally this drops the amount of compute capacity that you're utilizing for this training run so what we're going to
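a sketch of the dataset.yaml described above might look like this; the names list must reproduce classes.txt in the exact same order, so the ellipsis here stands in for the default label image classes that precede our two (the full list isn't shown in the transcript):

```yaml
# dataset.yaml - saved inside the yolov5 folder
path: ../data   # root folder that holds both images/ and labels/
train: images   # training images, relative to path
val: images     # the same folder is reused for validation here
nc: 17          # total number of classes in classes.txt

# every class from classes.txt, in the same order, with
# our two custom classes at the end
names: ['dog', 'person', ..., 'awake', 'drowsy']
```

yolov5 infers the labels location by swapping images for labels in these paths, which is why the labels folder has to sit next to images under the same data root.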
try is let's try two to begin with and sometimes you might need to restart your kernel with this in this case let's try two and see how we go and we'll also increase our number of epochs so i was sort of just testing out five to begin with but what we ideally want to do is test out for let's say we'll do 500 maybe we'll stop early and test it out from there but ideally you want to train for more than five epochs because it's going to take a little bit longer to get a reasonable model so if we go and kick this off you can actually monitor your training inside of the yolo v5 package so if we go into same folder that we're currently operating out of so yolo v5 and then inside of our runs folder and inside of train you're going to have all of these experiments so we've currently kicked off our run which is the latest run so it's going to be exp 15 and you can see here that we're already starting to get some results for our model so you can see that we've got our initial batch test results we've also got batch 1 and batch 2. 
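with the workers fix and the longer run folded in, the amended training command from the transcript becomes:

```
!cd yolov5 && python train.py --img 320 --batch 16 --epochs 500 --data dataset.yaml --weights yolov5s.pt --workers 2
```

if the broken pipe error persists, dropping --workers to 1 or 0 (and restarting the kernel) is the other option discussed above.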
So the fact that it's starting to generate these files indicates that our model is training successfully, but the most important thing is that if you go into your weights folder, you're going to have these best.pt and last.pt PyTorch weights available. This means that our model is training successfully and we'll effectively be able to leverage those weights. You also get a whole bunch of additional information about your labels, so you can see that all of our images are in the awake and drowsy categories, and you've also got a labels correlogram (I always have a tough time pronouncing that). So there's a whole bunch of data already being generated. Now we're going to let our model train, so we'll give it around about 10 minutes and then we'll be back and able to test it out. Alright, so our model has finished training; I gave it around about seven-ish minutes. You can't see it yet because I haven't shown it to you, but in here we've got our best.pt file and our last.pt file. If you go into your root folder and then into yolov5, then runs, then train, and remember our last experiment was exp15, you can see that we've got a whole bunch of information here. We've got our results file, and what epoch did we get up to? We got up to about epoch 411 down there, and it looks like our model was performing pretty well. You've got a whole bunch of metrics; I believe this column is accuracy, but don't quote me on that (in practice, YOLOv5's results tracking reports precision, recall and mAP rather than a plain accuracy). Again, you can see it's performing reasonably well, but the proof is in the pudding, so we actually want to go ahead and test this out. If you want more information about what's contained inside this results file, hit me up in the comments below and I'll see if I can get some information on that for you. But for now, let's actually go ahead and test out our model.
So we stopped it at about epoch 411. Now what we want to go ahead and do is load this model. I actually had to stop this notebook when I was debugging, so we need to make sure that we bring in our dependencies again. I'm just going to bring in this torch section, and we're also going to bring in this section over here, which includes uuid, os and time. Let's bring those in, and then we're going to load up our custom model. Remember, inside of that weights folder we've got best.pt and last.pt, so we're going to try using the last.pt weights to load up our model. Let's go ahead and load this up and we'll test it out. Okay, so our model is now loaded. In order to do that, I've written model = torch.hub.load and passed through a couple of parameters. I had to pass through force_reload to make sure that we download the latest version of the model, so we're going to test this out and see if it actually works. So it's torch.hub.load, and we've passed through 'ultralytics/yolov5', exactly the same as what we were doing up here, but rather than passing through 'yolov5s', which was the small version, we're now passing through 'custom' because we want to load our custom weights file. Then all we need to do is point to that file: yolov5/runs/train/exp15, so that was experiment 15, then weights, then last.pt. This is going to load our last set of weights. And then I've passed force_reload=True, because I actually got an error when I tried to load it without that: it wasn't able to use the last cached files, so that's the way to get around it. Now what we can actually do is test out this model, so we'll pass through one of our images and see if it works. Let's give this a crack. Okay, so that's the path to one of our images.
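That loading step can be sketched as follows. The exp15 directory is this walkthrough's run folder, so yours may differ:

```python
import os

def custom_weights_path(exp: str = "exp15", which: str = "last") -> str:
    """Path to a fine-tuned checkpoint inside the yolov5 repo folder."""
    return os.path.join("yolov5", "runs", "train", exp, "weights", which + ".pt")

def load_drowsiness_model():
    """Load the custom model from Torch Hub.
    force_reload=True re-fetches the hub repo, avoiding stale-cache errors."""
    import torch  # heavy import kept local so the path helper stays cheap
    return torch.hub.load(
        "ultralytics/yolov5", "custom",
        path=custom_weights_path(),
        force_reload=True,
    )
```

With the model loaded, calling it on an image path and then results.print() summarises what was detected; swap `which="best"` to load the best rather than the last checkpoint.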
I've just written image = os.path.join and passed through the path to that image, so data/images and then the name of the image file. I've just gone and grabbed the file path to one of our images, so let's test this out. We can then pass that image to our model, so results = model(image), and then if we take a look at results and call results.print(), you can see it's detected something: 1 drowsy. So let's actually go ahead and render this. Remember, we can render using plt.imshow: we type np.squeeze and pass through, what are we passing through, results.render(), we've got %matplotlib inline, then plt.imshow, oops, and then plt.show(). And there you go, it's actually detected drowsy. You can see that there, so it's loaded up our image and detected that particular class, drowsy. Let's actually try a different image. If I go and grab one for awake and pass that through, you can see it's detected awake. So initial results are looking promising, and we've got a good confidence score there. Now what we can go ahead and do is copy the same code that we used in step four for real-time detections, and use it to perform real-time detection of, I keep zooming out, real-time detection of whether or not we're drowsy. If we paste that there, this should theoretically work, so let's try that; ideally we should get a little pop-up. Okay, is it saying I'm drowsy? Let's drop the green screen and see how we're looking. So this is telling me I'm awake, drowsy, awake, drowsy, awake, drowsy. Look at that, it's actually performing really, really well. You can see it's detecting our face when we're wide awake, and as we start to lean over it's starting to transition. So if I close my eyes, so that's awake, start to veer off, I'm drowsy, awake, drowsy, awake, drowsy. So you can start to see that even though we haven't done too much data pre-processing, it's actually able to detect whether or not we're awake or drowsy.
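The real-time loop reused from step four looks roughly like this, assuming `model` is the custom model loaded above and OpenCV is installed (a sketch of the notebook code, not a verbatim copy):

```python
def is_quit_key(key_code: int) -> bool:
    """True when OpenCV's waitKey result corresponds to pressing 'q'."""
    return key_code & 0xFF == ord("q")

def run_realtime(model):
    """Stream webcam frames through the model and draw the detections."""
    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)           # default webcam; change the index if needed
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        results = model(frame)          # YOLOv5 accepts a raw frame directly
        cv2.imshow("YOLO", np.squeeze(results.render()))
        if is_quit_key(cv2.waitKey(10)):
            break
    cap.release()
    cv2.destroyAllWindows()
```

Pressing q closes the pop-up window and releases the webcam, which is what ends the live demo described here.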
So the initial phases are working pretty well, and we trained for what, 411 epochs? You can see I'm awake, and that's it guys, these are initial results. As I move my head, maybe it's not so great, but at least it shows that when I'm looking away from the screen with my eyes wide open it's still giving me awake. Pretty cool, right? Drowsy, awake, drowsy, awake, drowsy. Okay, that's probably enough testing. You can see that that is our real-time drowsiness detection model actually working. Again, this model seems to work reasonably well; what did we train on, like 20 different images per class? And we only trained for 411 epochs. You could obviously train it for a whole heap longer and add more images at different angles. You can see as I turn my head it's transitioning to drowsy, so head straight on is awake, because when we were collecting our images we were assuming that a tilted head might be an indication of being drowsy. And this is good, right, because if you're driving a car, ideally you want your head straight and your eyes on the road. But we might actually benefit from adding images with our head turned to a particular angle, because that might still mean you're awake; you might just be checking over your shoulder as you're merging. Again, you can start to see the initial phase: awake, drowsy, awake, drowsy, awake, drowsy, with reasonably good confidence scores as well, 0.8, 0.9. Pretty cool, right? So that about wraps it up. We've gone through quite a fair bit, so let's go all the way back; this has been quite the journey. We first started by installing and importing our dependencies, we then loaded our initial model from Torch Hub, made a bunch of detections with our images, and performed some real-time detections with the baseline model, which was trained on COCO. We then collected our
images, labeled them, and trained on them. And remember, there are a few different things you might need to work through when you're training. If you get stuck with that broken pipe error, just pass through --workers 2 as an extra argument. If you get stuck on an error saying the model doesn't exist when you go to reload the custom model, just pass force_reload=True and that should help you work through it. But in a nutshell, that gives you the ability to go ahead and perform your initial phase of drowsiness detection. Thanks again for tuning in; that about wraps it up. Thanks so much for tuning in, guys, hopefully you enjoyed this video. If you did, be sure to give me a thumbs up, hit subscribe, and tick that bell so you get notified when I release future videos, and let me know what you thought about this implementation. There are definitely ways we could improve on our drowsiness detection model, maybe extracting our eyes and our mouth to see whether or not we're yawning or whether we have our eyes closed, but this is a beginner implementation, ideally to show you how to leverage the YOLO algorithm and specifically the Ultralytics YOLOv5 package. Thanks again for tuning in. Peace.
Info
Channel: Nicholas Renotte
Views: 204,719
Keywords: yolo python tensorflow, yolo python opencv, yolo python tutorial, yolo python code, yolo python implementation, yolo python code github, drowsinness, drowsiness detection system, drowsiness detection system using python, drowsiness detection system using raspberry pi
Id: tFNJGim3FXw
Length: 78min 34sec (4714 seconds)
Published: Mon Jul 12 2021