Live Face Recognition in Python

Video Statistics and Information

Captions
What is going on guys, welcome back. In this video today we're going to learn how to do live face recognition using Python, so let us get right into it.

All right, so the goal of this video is to build a Python application that recognizes your face, or in my case my face, in real time. We want to see a camera window showing whatever the camera is seeing at the moment, and when we hold our face into the camera, we want the application to recognize that it's us; if someone else holds their face into the camera, we want to see "no match". So: a match when it's us, and no match when it's someone else or when there is no face at all. That's the basic idea of today's program. You can then try to use it for authentication purposes (probably not too intelligent), or for some notification system that recognizes people and then triggers a notification or a certain function; whatever you want to do with it is up to you.

For this we're going to need two things up front. First of all, we have to install two external Python packages. Second of all, you need a reference image: an image showing you, which the program can use for matching, because we're going to take a face from the current camera image and compare it to the reference image to see if it's the same person. In my case, I recorded one before the video; you can just take a picture with your camera and then use the program to match yourself against that image. I have it stored in the same directory as my Python file under the name reference.jpg.

For the packages, we're going to open up the command line and run pip install opencv-python deepface.
opencv-python is going to be used for the camera, so for interacting with the camera and processing the camera frames, and deepface will be the machine learning model, the deep neural network, behind the scenes.

We start our code by importing the core Python module threading, then importing cv2, which is OpenCV, and from deepface we import DeepFace (in Pascal case). Those are the imports.

Now, what we want first is the basic OpenCV camera structure: we define a video capture, we set its properties (the width and the height), and then we run an endless loop, until we terminate it, that just gets a frame and does something with it. So we define a capture object equal to cv2.VideoCapture, and the first parameter specifies which camera to use. If you only have one camera, you pass 0 here, because VideoCapture(0) picks the first camera; if you have multiple cameras, you can try 0, 1, 2 and so on, depending on how many cameras you have. We also pass cv2.CAP_DSHOW. Then we set the dimensions: cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640), and the height is set to 480, so we keep it small and simple.

In addition to that loop, we want some other variables to keep track of, because of course we don't want to verify the faces, to look for a match, on every single frame. We want to do that only once in a while, because matching the faces, determining whether there is a match or not using the neural network behind the scenes, takes more than half a second.
We have to wait for that response, and we don't want to kick off a new check with every single frame, especially when you have 30 to 60 frames per second. We do it once in a while using a counter variable: we take the counter modulo 30 (or some other number), and only when that hits zero do we run the check. So counter starts at zero. We also store whether there is a match or not in a boolean, and this boolean will be globally accessible: it gets set by something running in a separate thread, but it's also read by the main thread, so we define it up here. Then we load the reference image: reference_img = cv2.imread("reference.jpg").

Then we can write our main loop. We say while True, and we get the return value and the frame from the camera read, so cap.read(). This returns two items: we use the return value to determine whether it actually returned something, and the frame is the actual frame that was returned. So we say: if there is a return value, do something; for now we just pass. What we also want on every iteration is to process user input, so we call cv2.waitKey(1) to get a key, and if the key we pressed equals ord("q"), so if we press q, we break out of the loop. When we're out of the loop, we call cv2.destroyAllWindows(). That's the basic structure, and all the magic is going to happen inside this "if return value" block.
What I would do first is simply display the image, but actually that doesn't make sense right now: since I'm recording, I can't access the camera from Python at the same time, so I'll write the full code and only afterwards turn off the camera I'm recording with, so Python can use it. So let's go into the actual code directly. For the counter threshold, choose a number that works for you; let's go with 30, so every 30 iterations we do the following. If counter modulo 30 equals zero, we try to start a new thread that runs a specific function, which will compare the frame we have from the camera with the reference image. So we say threading.Thread, and the target will be a function we don't have yet; let me just define it without writing any of its code. The function is called check_face, it takes a frame as a parameter, and for now it just passes. This check_face function will run in a new thread, so check_face is the target, and the arguments are the frame: we pass a copy of the current frame to that function. What's important here with the arguments is that args is always a tuple, so we want to add a comma even though we don't have a second argument. Otherwise it would just be one element surrounded by parentheses, and Python would not turn it into a tuple; but args must be a tuple, so we make it one by adding this comma. So we have the arguments set up.
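The tuple detail is easy to get wrong, so here is a minimal stand-alone demonstration, using a plain dict as a hypothetical stand-in for a camera frame:

```python
import threading

received = []

def check_face(frame):
    # stand-in for the real verification work
    received.append(frame)

frame = {"pixels": [1, 2, 3]}  # hypothetical stand-in for a camera frame

# args must be a tuple: (frame.copy(),) with the trailing comma.
# Without the comma, (frame.copy()) is just a parenthesized expression,
# not a one-element tuple.
t = threading.Thread(target=check_face, args=(frame.copy(),))
t.start()
t.join()

print(received[0] == frame)   # True: the thread received equal data
print(received[0] is frame)   # False: ...but its own copy
```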
Then we just start the thread: we define it, we start it, and this sits in a try block. If for some reason it doesn't work, if we get a ValueError (which we will get quite often, because a face won't always be recognized), we handle it. The problem with deepface is that when it doesn't recognize a face, it doesn't just tell you so and return "no match"; it throws a ValueError, and you get nothing else. So we catch that and just pass: if it doesn't recognize a face, it doesn't recognize a face; we don't care, it's simply not a match. We also increment the counter, counter += 1, every time there is a return value.

Then we do the following: the face_match boolean we defined up top will be changed by that function depending on whether there is a match or not. So assuming the check has already been done and we have the comparison result, we say: if there is a face match, we put a text onto the frame. The text is "MATCH!", and we pass (20, 450) to define the position, then the font, which is cv2.FONT_HERSHEY_SIMPLEX, then the font scale, which is 2, then the color. Since this is the match case, it should be green, and note that in OpenCV the color value is BGR (blue, green, red), not RGB, which means (0, 255, 0) is green: 255 for the green channel, 0 for blue and red. The last parameter, as the function signature shows, is the thickness, which we set to 3.
Otherwise, if there is no match, we copy that call, paste it below, change the text to "NO MATCH!" and the color to (0, 0, 255); remember, BGR, not RGB, so setting the last value to 255 gives us red text. And every time we also want to show the result: cv2.imshow with whatever we have on this frame, so the camera image plus our detection text, and we need to provide a window title, so let's just call it "video".

Now everything is done except the actual checking, the machine learning part, which is done with deepface. In check_face, first of all we declare that face_match is a global variable, and then we try to do something; if it fails with a ValueError, we again just pass. What we try is DeepFace.verify on the current frame that was passed in and a copy of the reference image. The reason I pass copies is that I'm not sure it's even necessary, but if we don't pass a copy we're actually using the original object, and since we run a new thread every time, I'm not sure if that could lead to some locking situation between the threads; just to be safe, I use frame.copy() and reference_img.copy(). From the result we read the "verified" value, which is either True or False, and if it's True we just set face_match to True, and otherwise to False.
Oh, and why is it complaining that the image path parameter is unfilled? Because I accidentally passed a list; this is how you do it, without the square brackets. So face_match will be True if verified, otherwise False, and that should actually be the whole implementation.

Let's go through the code quickly again: we define a camera object, we set the dimensions, we initialize the counter to zero, we set face_match to False, and we load the reference image. We have this function that verifies, that checks whether the reference image and the current frame have the same face on them, and this happens before we put any text onto the frame: we get the frame from the camera without any text on it, we pass a copy of that frame to check_face, we get the result, and depending on the result we say there is a match or there is no match, which changes the face_match variable. Then in the main loop we say: if there is a match, we write "MATCH!" in green onto the image, otherwise "NO MATCH!" in red. That's basically the idea. Now I'm going to turn off this camera, and we'll see if it works.

Before we move on, I found two little mistakes. First of all, I had a typo in the result key: it's "verified". And second, we need to set face_match to False in the exception case as well, because if it doesn't recognize a face at all, face_match should obviously also become False; otherwise it would only turn False when it sees a valid face that isn't us.

All right, here we are. I'm holding something in front of the camera to prevent it from seeing my face. I had to start the application before I could start OBS, because otherwise OBS would grab my camera. This is the program running, and you can see "NO MATCH" because there is no face detected. Now if I remove this, we can see whether it recognizes my face as the face in the reference image. There you go, we have a match; if I move into the camera, we have a match;
and if I turn around enough, probably no match anymore; if I look into the camera again, a match; if I put this in front of my face, no match. So this works. I also tried it with someone else: I put them in front of the camera and it said "no match", because it was not my face. Of course, if someone looks very similar to me, maybe some lookalike, maybe some brother (even though I don't have a brother), this would probably not work perfectly, so you could definitely fool this system into believing that it's you. And of course you could just hold an image of the person into the camera, and it would not detect that it's not that person but just an image, because this doesn't go into your biology; it's just image matching. But it seems to work quite well. Again, if I move too far away, maybe it's not going to recognize me anymore, but essentially this works most of the time. So this is how you build a live face recognition system in Python.

That's it for today's video. I hope you enjoyed it and I hope you learned something. If so, let me know by hitting the like button and leaving a comment in the comment section down below, and of course don't forget to subscribe to this channel and hit the notification bell to not miss a single future video for free. Other than that, thank you very much for watching, see you in the next video, and bye.
Info
Channel: NeuralNine
Views: 146,543
Keywords: python, machine learning, python machine learning, computer vision, deep learning, neural networks, python opencv, opencv, deepface, live face recognition, face detection, face matching, live face detection, python camera face recognition, opencv face detection, opencv face recognition
Id: pQvkoaevVMk
Length: 16min 16sec (976 seconds)
Published: Tue Mar 21 2023