Build a Deep Facial Recognition App // Part 2 Collecting Data // Deep Learning Project Tutorial

Captions
What's happening guys, welcome to part two in the series on facial recognition using a Siamese neural network, where we try to implement code from an existing deep learning research paper. In this particular case we're doing facial recognition, or facial verification, so ideally we'll be able to use a webcam or a camera to verify ourselves inside of an application.

In part two we're going to focus on getting our data. Remember from part one that we need three different types of data: negative images, anchor images, and positive images. To collect our negative images we're going to leverage a standard facial image repository called Labeled Faces in the Wild, so we'll download that, unpack it, and get it into the structure that we need for our model. We're also going to collect our anchor and positive images, in this case using OpenCV and a webcam — but if you've already got some images of yourself, say you've collected them on your phone, you can definitely use those as well.

Before we get into it, I want to explain a little more about how we're actually going to use these data sets. We're going to have positive examples, negative examples, and anchor examples, so let's visualize how this works, starting with the positive case. Say we have an input image coming from our webcam — that's our anchor — and then we have a positive image. When we build our model, we're going to pass both of these images through an embedding model, or an encoding model is probably a better term. That encoding model converts our webcam (anchor) data and our positive image into data representations, and then the model tries to determine the difference between our anchor and our positive. The layer we're going to implement on top is a distance layer: we convert our input images to embeddings and then check how similar they are. If they're very similar, the model outputs a one — verified — which is effectively saying the person inside our positive image is the same person as in our anchor image. The cool thing about this type of model is that if you wanted to apply it to other people you definitely could: all you'd need to do is pass through a different positive image and a different anchor image, and it would be able to verify against a whole range of people. In our particular case we're doing it against one person, but that's perfectly fine.

Now let's look at the negative class. Again we have our anchor image — it could come from your webcam or your phone, it's whatever we pass through as input to perform our verification — and then we have a negative example. Both go through the same embedding model: everything you see on the whiteboard is the same model across the board. What that model learns is how to best represent the input images so that when we perform our similarity analysis, we accurately classify the pair as either a match (positive) or not a match (negative). So when we pass through our anchor and our negative, the distance layer says "hey, not the same" and outputs a zero, which means we are unverified. I figured I'd give a visual representation of what we're building, because it's easy to get lost in how these neural network models work, but that's in a nutshell how this neural network is built up.

In this tutorial we're focused on collecting the data: our anchors are going to come from our webcam using OpenCV, our positives are going to come from our webcam as well, and our negative data is going to come from the Labeled Faces in the Wild data set — an open source data set with a whole bunch of different faces.

Alright, let's get back to it and do some coding. First up we're going to collect Labeled Faces in the Wild, and then we'll collect our positive and anchor classes. To get the Labeled Faces in the Wild data set, go to this link: http://vis-www.cs.umass.edu/lfw/ — I'll also link it in the description below, so don't stress if you haven't picked it up. As per usual, all the code we write in this tutorial is available via GitHub, and I'm structuring the code so you can see what we've written after part one of this series and what we've done after part two, so you'll be able to see the progression. For now, what we need to do is get the Labeled Faces in the Wild data set: if we go to that link, you'll see a link called Download, so we're going to hit that.
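The whiteboard idea above — one shared encoder, then a distance layer that outputs 1 for a match and 0 otherwise — can be sketched in a few lines of plain numpy. This is just a conceptual toy, not the series' actual model: the real encoder and distance layer get built with TensorFlow in a later part, and the embedding values and the 0.5 threshold here are made up for illustration.

```python
import numpy as np

def l1_distance(anchor_emb, other_emb):
    """Element-wise absolute difference between two embeddings (the 'distance layer')."""
    return np.abs(anchor_emb - other_emb)

def verify(anchor_emb, other_emb, threshold=0.5):
    """Return 1 (verified) when the embeddings are close, else 0 (unverified)."""
    return int(l1_distance(anchor_emb, other_emb).mean() < threshold)

# Toy embeddings standing in for the encoder's output
anchor   = np.array([0.90, 0.10, 0.80, 0.30])
positive = np.array([0.88, 0.12, 0.79, 0.31])  # same person: tiny distance
negative = np.array([0.10, 0.90, 0.20, 0.70])  # different person: big distance

print(verify(anchor, positive))  # 1 -> verified
print(verify(anchor, negative))  # 0 -> unverified
```

The key property is that the same `verify` function works for any person, as long as you swap in their anchor and positive embeddings — which is exactly why the model generalizes beyond one identity.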
There's a whole bunch of information here: all images as a gzipped tar file, all images aligned with deep funneling, all images aligned with funneling, and all images aligned with commercial face alignment software. We're going to use the first one, "All images as gzipped tar file", so hit that and it should start downloading — it's about 173 megabytes. Once you've got it, we'll be able to untar it and start working with it inside of Python.

Alright, the data set is downloaded, so grab it and put it inside the same folder your Jupyter notebook is in (if you're doing this in Colab, same deal). I'm currently inside my D drive, inside YouTube, inside our face id folder, so I'm going to paste it there — you can see the notebook we're working on and the data set, lfw.tgz, side by side.

Now we need to uncompress it: it's a tar gz file — you can see the .tgz extension — and we can do that inside our notebook. I'll add a comment, Uncompress Tar GZ Labeled Faces in the Wild Dataset, and then the command: !tar -xf lfw.tgz. The exclamation mark runs it as a shell command from the notebook, -xf extracts the archive into the same place it currently is, and the last part is just the name of the file — so if the name of the data set ever changes, that's the component you'd want to change. If we run this now, all things holding equal, it should uncompress and we should be able to see our data set.

Okay, it's finished running — you can see we no longer have an asterisk next to the cell — and if we open up the folder, it's now been uncompressed: we've got this folder called lfw with a ton of images, labeled by person's name. We're not actually concerned with the person's name in this particular case, because we're going to use all of these as negatives; but if you wanted to do a different form of facial verification — say, for example, with triplet loss — you could definitely add pairs of images for different people. In our case we're very much focused on the one person we want to verify. If you open up these folders, there are multiple pictures per person — heaps of people, and the list keeps going. What we want to do is take all of the images inside the Labeled Faces in the Wild folder and its subfolders and put them inside the folders we created in part one of this tutorial: specifically, move them all into the data/negative folder. So we're going to write the Python code to do exactly that, which effectively puts all of our data in the same place and following the same structure.
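The !tar -xf lfw.tgz shell command above does the job; if you'd rather stay in pure Python (handy on Windows setups without tar on the path), the standard-library tarfile module does the same thing. This is a sketch, assuming the archive is named lfw.tgz and sits in the working directory, as in the video.

```python
import os
import tarfile

def extract_lfw(archive="lfw.tgz", dest="."):
    """Extract the Labeled Faces in the Wild archive into `dest`.

    Equivalent to running `!tar -xf lfw.tgz` in the notebook: it creates
    an lfw/ folder containing one subfolder per person.
    """
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path=dest)

if __name__ == "__main__":
    extract_lfw()
    print(len(os.listdir("lfw")), "person folders extracted")
```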
let's go on ahead and do it so we're going to i'm just going to add a comment so move lfw images to the following repository so it's going to be data and then forward slash negative all right so let's go ahead and write that code okay so that is our code to move our data from the lfw folders and the subdirectories into our negative folder now the key thing that i just realized is that i haven't actually gone and run the code that we had in our initial tutorial so if i go and run this again so let's actually take a look at what we've written first and then we're going to go and run our imports and stuff so i haven't actually gone and run the initial steps my bad that's fine and so if i actually go and run this now we're going to get a whole bunch of errors if i run this you can see it's saying name os is not defined perfectly fine we'll solve that in a second so first up what we're doing is we're looping through every single directory inside of our labeled folders in the wild repository or directory then what we're going to do is we're going to loop through every single file inside of those subdirectories so we're effectively saying go into ow where are we surface id so go into this folder and then go into this folder and loop through each of these images because in some particular cases there's multiple images of people so you can see in that case we've got multiple images so we need to move each image into its new folder so in order to do that we first up define the existing path so i've written x underscore path in caps and i've set that equal to os the path dot join and then we'll pass through lfw which is the main directory or the root directory pass through the directory that we've extracted from here because remember we're looping through them and then pass through the file name that we're getting from here then we're specifying the new path name so i've written new underscore path in all caps and i've set that equal to os.path.join and then we're passing 
through our negative path which is from our previous tutorial which we defined over here and we're passing through our file name so this is going to effectively join our negative path and our file name and then we're going to use os dot replace and we're going to pass through our existing path and our new path so this is going to grab it from our existing path and move it to our new path but as of right now this isn't going to run because we haven't run our import so i'm going to go right up to 1.2 run this so that's going to import opencv os random numpy matplotlib then we'll import our tense flow dependencies not the i think we'll need them now and then we're just going to run the code under 1.3 and we're going to run the code under 1.4 so that's going to define our different paths so our positive path our negative path and our anchor path okay now if we go and run this all things holding equal let me actually just show you quickly how this actually works so if i write os dot list here lfw so this actually returns all the subsequent folders inside of the lfw directory now if i loop through those so for directory in os.listia what's going to happen is i'm going to be able to access each directory so if i write four file in os dot list dr and this should be os.path.join i've actually written got an error there so os stop let me actually run it without changing it and we'll see what happens so if i run lfw and then directory that's going to throw an error so this should actually be os.path.join because right now i'm not joining those directories together so if i change that here as well os.path dot join and then if i print file this is going to print out every single image so now if i actually join these together so os.path dot join and if we pass through lfw so os.path.join just joins directory names together so it gives us a full path so if i pass through lfw and then directory and then file and then close that this is going to give us the full path to every 
single image right so that is exactly what we're doing to get our existing path then what we're doing is we're defining the new part so we're going to do os.path.join and we are going to be passing through our negative path and our file right so we're effectively going to be grabbing this image and then moving it into data and then negative and then aaron eckhart and so effectively it's the same name so we're grabbing this and moving it to here we're doing this we're moving it here so we're just going to loop through and do this for every single image so if i delete that we don't need that anymore and i actually go on ahead and run this this is actually going to move all of our images from those existing stacked directories into our negative path and that is done so i ran reasonably quickly so if we go into that folder now and go into d drive youtube face id and if we go into let me zoom in on this data and then negative you can see we've got all of our negative images there pretty good right now i don't think we're going to use all these images for training but you sort of get the idea we've got plenty to work with if we need to so what's happening now so we can actually close this so what we just went and did there so if we go into our lfw folder and you can see that each one of these are now empty because we've actually gone and moved them into our new folder so what we can actually do is delete this lfw folder there's nothing in it we can get rid of it now so if we delete that we are all good cool so that is step 2.1 now done so we've gone and uncompressed our label faces in the wild data set and we have also gone and moved all of those images from the lfw directories in to our negative path which we defined up here from the first episode in this series cool so what's next is that we need to actually go and collect our positive and our anchor classes now for this what we're going to be doing is we're going to be using opencv to access our webcam and we're going 
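The move loop just described can be written out as a small function. This is a sketch of the same logic — the EX_PATH/NEW_PATH names and the NEG_PATH value match what the series sets up in part one, but wrapping it in a function (rather than a bare notebook loop, as in the video) is my own framing so it's easy to test.

```python
import os

LFW_DIR = "lfw"                              # extracted archive folder
NEG_PATH = os.path.join("data", "negative")  # defined in part one of the series

def move_lfw_to_negative(lfw_dir=LFW_DIR, neg_path=NEG_PATH):
    """Flatten every image under lfw/<person>/ into the negative folder."""
    for directory in os.listdir(lfw_dir):
        for file in os.listdir(os.path.join(lfw_dir, directory)):
            ex_path = os.path.join(lfw_dir, directory, file)   # existing path
            new_path = os.path.join(neg_path, file)            # destination path
            os.replace(ex_path, new_path)  # move the file (overwrites on name clash)
```

Note that os.replace silently overwrites an existing destination file, which is fine here since LFW file names are unique per person and image index.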
We're going to collect those images and save them down. Now, the images we're collecting are going to be 250 pixels by 250 pixels. By default your webcam's resolution might be a little different, but we want to collect images of that size because I believe the images in the LFW data set are that size too. Let me double check: if we go into data/negative, open one of the images and look at Properties > Details — yep, 250 by 250, and another one, again 250 by 250. So to make our lives a little easier we'll collect our anchors and positives at exactly the same dimensions, which makes the data processing a whole bunch easier when it comes to training the model.

We're going to use a pretty standard OpenCV loop with a few tweaks, but first we need to make sure we can access our webcam successfully. Okay, that's the first part of our image capture code done — eight lines, and if you've watched any of my computer vision tutorials this will look super familiar. Let's comment through it. First up, we establish a connection to the webcam: I've written cap = cv2.VideoCapture(3), passing through a video capture device number. I think it's going to be device 3, but it might be different because I've installed some new stuff on my PC — we'll see. (By the way, I'm trying out the whiteboarding for the first time — let me know what you guys thought of it; if you don't like it, I'll stop doing it.)

Once we've established the connection, I've written while cap.isOpened(): — this loops through every single frame from our webcam. Then we use cap.read() to read the capture at a point in time, and unpack the result into a return value and the actual frame; the frame is the image. Then we render that image back to the screen — I'll add a comment, Show image back to screen — with cv2.imshow, naming the window (in this case 'Image Collection', but you can name it whatever you want) and passing through our frame. This shows the feed from our webcam inside a cv2 window.

Everything from there on is about breaking gracefully. I've written if cv2.waitKey(1) & 0xFF == ord('q'): — cv2.waitKey(1) waits one millisecond (I checked the docs: the delay argument is in milliseconds) and also returns whichever key we pressed, so & 0xFF unpacks what's being pressed on the keyboard, and == ord('q') does a comparison check. This is really important because we're going to use the same pattern a little more in a second: when we hit q on the keyboard, the frame should close down. What we're also going to configure shortly is that hitting a collects an anchor image and hitting p collects a positive image, and I think we'll collect roughly 300-ish images — doesn't matter exactly, but 300 is probably a good starting point.

If that check passes — we waited a millisecond and q was pressed — we break out of the loop, then release the webcam (cap.release(), with the comment Release the webcam) and close the imshow frame (cv2.destroyAllWindows()). If you're ever using OpenCV to access your webcam and things freeze up, run cap.release() to release the webcam and cv2.destroyAllWindows() to close everything down and re-kick things off. So if the device number is incorrect and it all locks up, we'll stop the cell and run those two commands.

That's our webcam code set up — we haven't done any image collection yet — so let's test it. If it runs successfully you'll get a little pop-up... and that has not run successfully. This error is a common one I always get asked about: error: OpenCV(4.5.3) ... !_src.empty() in function ... — super important, it basically means OpenCV isn't able to access the webcam. Whatever we're getting back from that image device is empty, which means we don't have the right webcam number. So let's try a different webcam number — let's try 4, that might work a little bit better.
Alright, that worked: I've got the little pop-up and I can see our feed. Sometimes you're going to have to tweak that number, and it's actually great that I hit this, because it shows you how to resolve that error: all we did was change our video capture device from 3 to 4, and now I can see myself on screen. You can also see the window title says Image Collection — that label comes from the cv2.imshow call, so if you wanted to change it you could.

Now, the key thing is that this video frame is not 250 by 250 pixels. If I hit q on my keyboard, the break section of the code triggers and shuts it down. The nice thing about this loop is that afterwards you can still access the last set of variables that were captured, so if we take a look at frame, that's our image — and this is the advantage of having imported matplotlib: if I type plt.imshow(frame) you can see the image. Ignore the colouring; that's because OpenCV uses a slightly different channel order, which is perfectly fine. The key thing we're interested in is that frame.shape is not 250 by 250 — right now it's 480 by 640 pixels by 3 channels, and we need it to be 250 by 250 by 3. We can do a little bit of indexing, or slicing, to get the right shape: say I grab the first 250 rows by the first 250 columns by everything — frame[:250, :250, :] — you can see we're now getting 250 by 250 by 3.
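The slicing just shown is plain numpy array slicing, so it can be sketched without a webcam at all. Here a zeros array stands in for the captured frame (an OpenCV frame is just a numpy array of shape height × width × channels); the shapes are the point, not the pixel values.

```python
import numpy as np

# Stand-in for a webcam frame: 480 rows (height), 640 columns (width), 3 channels
frame = np.zeros((480, 640, 3), dtype=np.uint8)

# Corner crop: first 250 rows (y-axis), first 250 columns (x-axis), all channels
corner = frame[:250, :250, :]
print(corner.shape)        # (250, 250, 3)

# Offset crop: start 120px down and 200px across, so the face sits in frame
cropped = frame[120:120 + 250, 200:200 + 250, :]
print(cropped.shape)       # (250, 250, 3)

# Indexing a single channel instead of ':' drops the colour dimension entirely
one_channel = frame[:250, :250, 0]
print(one_channel.shape)   # (250, 250)
```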
All I've done there is some array slicing. If I plot this now, you can see I'm getting the top left-hand corner of the frame — and that's not actually capturing my face, which is going to suck when it comes to performing facial verification. So we can tweak these numbers: rather than starting from zero, which is effectively what that slice does, let's start a little further in. What did I end up using? 120 to 120 + 250, and then for the x-axis 200 to 200 + 250. If we do that, it's a little bit better: we're at least getting somewhere closer to where my face actually is, and when we actually collect images we can render the frame to the screen so we can see where we are inside that crop.

Let me explain what I've done there. The first slice determines where our pixels start and end on the y-axis: we're starting at 120 pixels, and by passing through the colon we're saying go from 120 to 120 + 250 pixels — it specifies the range of values we want from the image. We do exactly the same on the x-axis, except there we start at 200: so for the x-axis it starts at 200 and goes to 450 pixels, and for the y-axis it starts at 120 and goes to 370 pixels. Then, by passing another comma and a colon, we're saying we want all three channels: if I pass through just 0 I only get one colour channel, 1 gives a different channel, and 2 a different channel again, but with the colon we grab all three — meaning we retain the fact that we have a colour image.

Now we need to implement this logic inside our image capture loop from above. We're effectively going to take that slice of our frame from the webcam and reset the variable: frame = frame[120:120+250, 200:200+250, :], with the comment Cut down frame to 250x250px. Now if we run our image capture loop we're only grabbing 250 by 250 pixels — when we actually go and collect images I'll probably bring my seat down a bit to make sure my head is inside the frame — but you can see this replicates what we've got in the Labeled Faces in the Wild data set a little more accurately than if we just grabbed the full 480 by 640.

So that gives us our 250 by 250 pixels. What do we need to do now? We need to actually write out, or save, some images, so again I'll hit q on my keyboard to close the window. We need to collect our anchors and positives, so I'll add two additional comments — Collect anchors and Collect positives — and we're going to reuse the breaking logic from below: I'll copy it, paste it under each comment, and change the key each one checks for.
of our ord function from q to a so this basically means that it's going to wait a millisecond and if we actually hit a within that millisecond it's going to collect an anchor image and we're going to do the same for our positives but rather than hitting q we're going to hit p so when we hit a on our keyboard it's going to collect an anchor and when we hit p on our keyboard it's going to collect a positive image now by stacking these together it does mean that there is going to be a little bit of a lag but instead of implementing a ton of logic i just figured this should be fine we can effectively work around this okay so what are we doing now so we're going to collect our anchors and our positives so what we now need to do is we now need to implement some logic to actually grab our frame and save it to our positives and anchor folders now before we actually do that i'm actually going to import a library called uuid so the uuid library is actually going to make it a little bit easier to actually go on ahead and let's actually ensure that we don't have a screw function so i'm just going to write past there and pass there for now until we actually go and implement that logic so we're going to grab uuid and this is going to ensure that we're able to create unique names for each one of our images so in order to import uuids we're going to import the uuid library to generate unique image names uh and so import uuid cool so that's your uid now imported so i've written import uuid and uuid stands for uniform unique identifier or unique uniform something like that so what is it uid question mark question mark nope lowercase oh it's universally unique identifiers there you go so basically gives you a specific pattern to generate a unique identifier now in order to use it there's a few different methods if i type in uuid dot there are a bunch of different formats that you can have so it can be uuid 1 uid3 uuid4 or uuid5 so we're just going to use uuid one so if i type in uuid 
one you can see that it is generating this unique identifier there so we are effectively going to be using that to generate our unique image name so we're just going to implement that there so let's go ahead and do that and we'll take a step back and take a look at what we wrote okay so that is oh we haven't actually finished that's the cv2.imwrite let's finish that okay that is us done so what we've effectively gone and done is we've added two additional lines of code there so first up what we're actually doing is we're creating the unique name create the unique file path and then we're actually going on ahead and writing out our image write out the anchor image okay so let's take a look at what we wrote so first up i've created a variable called image name so img name and i've set that equal to os.path.join and then we've gone and passed through our anchor path because we're going to store our anchor images inside of our anchor path and then we're just creating a unique name for our file so if i go and copy that you can see that we're just appending a unique identifier to dot jpg so this effectively means that when we go and do this multiple times we're going to be creating unique identifiers each time is that actually changing yeah it is changing so you can see that there and then by wrapping it in os.path.join we're effectively going to be creating a full file path so os.path.join if i pass through the anchor path comma and then this unique file name this is effectively going to be storing our images inside of the data folder inside of the anchor folder and then it's going to be naming it that there and in order to actually go and write out our image i've written cv2.imwrite and we're passing through our image name which is what we just created up here and we're passing through our cut down 250 by 250 pixel frame cool so that is that now done now what we can also do is just copy this over here and paste it under our
positives and all we need to do is change the anchor path to the positive path because we're going to be storing them inside of different folders so that is effectively that there now done okay i think we're good so if we actually go and run this we should be able to hit a on our keyboard to collect anchor images and p on our keyboard to collect positive images so if i go and let's actually do this side by side so we can actually see our folders so if i go into d drive and then youtube and then face id and then data and then anchor so first we're going to collect our anchor images so if i actually go and run this code here now yep okay let's run it cool all right so we've got our image of ourself and what i'm going to do is i'm going to put down the green screen let's actually test this out so i'm just going to get my head inside the frame move the mic out if you can still hear me fingers crossed all right and so what we're going to do now is we're going to hit a on our keyboard to collect an anchor so if i hit a make sure we're clicked into it all right so you can see that that is collecting our images pretty cool right so if we keep hitting a we should be able to collect a bunch more i'm just going to move my head around i'm just holding down a all right it looks like we've got 400 images there that should be more than enough so you can see 414 down there so we've got plenty let's take a look at a different view all right so we've got a ton of images let's get some where we're a little bit closer as well so if i go a little bit closer cool all right so that's 489 images that we've collected now what i'm going to do is i'm just going to jump into the positive folder and let's collect some positives again hit p on our keyboard this is going to start collecting positive images i'm just going to move my head around as i'm collecting now it is going to take a little bit longer to collect our positive images because we've got that one millisecond break
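that one millisecond break comes from cv2.waitKey(1): each frame the loop waits up to a millisecond for a keypress and dispatches on its ord value. the dispatch itself is plain python and can be sketched and sanity-checked without a camera — the function name handle_key here is just an illustration, not something from the video:

```python
# sketch of the per-frame key dispatch: cv2.waitKey(1) returns the pressed
# key's code (or -1 if nothing was pressed within the millisecond); masking
# with 0xFF keeps the low byte, which we compare against ord('a'), ord('p'),
# and ord('q') as described in the video
def handle_key(key):
    if key == ord('a'):
        return 'save anchor'
    if key == ord('p'):
        return 'save positive'
    if key == ord('q'):
        return 'quit'
    return None  # no key (or an unmapped one): just show the next frame

print(handle_key(ord('a')))   # save anchor
print(handle_key(-1 & 0xFF))  # None
```

the small lag mentioned earlier comes from chaining these waits together inside a single loop iteration, one per frame.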
probably speed this up in editing but that's pretty cool so you can see that we are effectively collecting images so we want around about 300 for each that's what we'll use for training all righty that should be enough so we've got what 332 positives now collected okay so what we've gone and done there is if we go into our data folder into our anchor we've got a bunch of anchor images collected and remember we're going to have two streams when it comes to building our model we're going to have the anchor image that we pass through and we're also going to have the positive image or the negative image down here so we're effectively going to be verifying whether or not our anchor image matches the negative or matches the positive so it should output a one if it is positive so if it matches the positive and it should output a zero if our anchor is being verified against a negative image okay so we've got our anchor images we've got our positive images and again these have both been collected via our webcam and we've got our negative images as well pretty cool right so if we go and hit q now on our keyboard it should hopefully close there we go all right so that is our data all collected so we've gone and done a bunch of stuff so that's effectively this entire tutorial now done or at least part two so what we've gone and done is we first went and collected our images from the labeled faces in the wild data set and again i'll link to that in the description below we moved those into the negative folder and then we went and collected a bunch of images using our webcam both our anchor and our positive images cool so that is now done so again as per usual all this code's going to be available via github linked in the description below but on that note that about wraps it up i'll see you in the next one thanks so much for tuning in guys hopefully you enjoyed this video if you did be sure to give it a big thumbs up
hit subscribe and tick that bell and if you have any questions comments or queries do let me know in the comments below thanks again for tuning in peace
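for reference, the collection loop walked through in this part can be sketched end to end as below. treat it as a sketch rather than the video's exact code: the folder layout and the 250 by 250 crop offsets are assumptions carried over from part one, and collect() needs opencv-python plus an attached webcam, so it is defined but deliberately not called here.

```python
import os
import uuid

# assumed folder layout from part one of the series
ANC_PATH = os.path.join('data', 'anchor')
POS_PATH = os.path.join('data', 'positive')

def unique_image_path(folder):
    """Join the folder with a uuid1-based file name so captures never collide."""
    return os.path.join(folder, '{}.jpg'.format(uuid.uuid1()))

def collect(camera_index=0):
    """Webcam loop: 'a' saves an anchor, 'p' saves a positive, 'q' quits.
    Requires opencv-python and a camera, so call collect() yourself to run it."""
    import cv2
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        # cut the frame down to 250x250 pixels (offsets are assumptions)
        frame = frame[120:120 + 250, 200:200 + 250, :]
        cv2.imshow('Image Collection', frame)
        key = cv2.waitKey(1) & 0xFF  # the one-millisecond wait from the video
        if key == ord('a'):
            cv2.imwrite(unique_image_path(ANC_PATH), frame)
        elif key == ord('p'):
            cv2.imwrite(unique_image_path(POS_PATH), frame)
        elif key == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
```

holding a key down fires repeatedly, which is why holding a or p in the video collects hundreds of frames in a few seconds.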
Info
Channel: Nicholas Renotte
Views: 4,210
Keywords: face recognition, face detection, face recognition python, machine learning projects, face recognition tutorial, face recognition app, deep learning, machine learning, deep learning project
Id: UMjW4Db4E_g
Length: 43min 32sec (2612 seconds)
Published: Sat Sep 11 2021