Build a Deep Facial Recognition App // Part 8 - Kivy Computer Vision App with OpenCV and Tensorflow

Captions
Now let's do the almighty Robin Williams test. This was the one I was a little worried about, so let's test him. Drop the screen down so we don't have so much glare... you can see he is unverified, so the model is working. Let's try me again: verified, verified, verified.

Alrighty guys, welcome to part eight in the facial recognition and facial verification series, where we take a state-of-the-art research paper all the way to the end and integrate it into an app. It just so happens that this is the final episode in the series, and that's exactly what we're going to be doing: taking our trained model and integrating it into a Kivy app so we can actually use it practically, in real time.

Let's take a deeper look at what we'll be going through. As usual, following the rule of threes, we're focused on three things. First, we're going to set up Kivy, an open-source framework that lets you build applications with Python. Then we'll build a lightweight facial verification app, so think of it like Face ID for your phone, but built with Kivy. And finally we're going to integrate our TensorFlow model, building up our verification process to leverage the model inside of Kivy. Ready to do it? Let's get to it.

Alrighty, so we've finally got to the good bit, where all our hard work comes together and we focus on integrating our TensorFlow deep learning model into our Kivy app. If you haven't heard of Kivy before, it's an open-source framework that lets you build applications, I believe even some native stuff, all using Python, which works to our advantage because pretty much everything we've done so far has been in Python. Now, we've got a number of tasks to get through, but before we do, let's jump over to our whiteboard session and take a look at what we'll be building.

Alright, so in this part of the series we're focused on building our verification app with Kivy, and there are really three main components to handle. Say we've got our app: the first thing we need is our webcam feed, which will show the person we're trying to verify. From that, we'll have a button that says Verify, and down the bottom a label that says Verified or Unverified. So if the person on camera isn't me, the label down here should say Unverified. The cool thing about Kivy is that it's an open-source Python library that lets you build a whole bunch of different types of apps, so if you want to build stuff that integrates with your machine learning models, it's great.
We've done a ton of additional work specifically so we can hook the app into our webcam, so the webcam will integrate into our Kivy app to give us this video feed, and that's the core premise of the app. When we hit the Verify button, we trigger our TensorFlow model: it grabs the webcam feed and verifies it against our Siamese neural network, and from that we get either a one or a zero, verified or unverified, which drives the label at the bottom. That, in a nutshell, is what we're building, but we'll take it step by step, so don't fret: we're going to build it up together. Alright, back to the code.

And we're back. To do all of that, we're going to work through fourteen different steps. Fret not: a lot of the work we've already done is readily transferable, so some of these steps will be a lot easier than others. First things first, we need to set up our app folder, install Kivy, and bring over some existing assets. Then we'll focus on building the layout for our app, which is going to be pretty simple, although you could extend it out if you really wanted to, and then we'll bring over our verification components and tweak them a little so they work with the app.

So let's set up the app folder to get things a bit more structured. I'm going to go into the same root directory we've been doing all of our work in, create a new folder called app, and to begin with just dump our to-do list in there, so we've got everything in the same place. Then I'll open this folder inside VS Code. Cool, so all we've got in there at the moment is the to-do list, nothing too crazy, and we can mark that step off as done.

The next thing is to install Kivy, which is pretty straightforward: it's just a pip install. I'm going to open up a terminal and activate my virtual environment first, because I'm using one here; if you're not, don't fret, just install into your base Python environment. You can see my virtual environment is called faceid, but again, you don't need one. From the app folder, let's look at the command before running it: I've written pip install kivy[full], which installs Kivy including all of its dependencies, and then we've passed a second argument, kivy_examples, which installs the Kivy examples components.
It's not mandatory for this tutorial; I actually had it in as part of testing and just kept it to make sure nothing breaks, but I believe you could drop it and everything would still run fine. Once you've written that, go ahead and run it, and it should install Kivy. You can see it has in fact done all of that with no errors. I've got a warning saying I should upgrade pip, but that's fine, we don't need to. Let's clear the terminal, which is cls on Windows, or clear on Mac, and double-check we've got Kivy there: if I run pip list, you can see Kivy and all of its dependencies are installed. That means we're good to go, and we can mark step two off as done.

The next step is to create a validation folder. What I mean by this is, remember in the previous video we set up some validation images, and when we performed our verification we didn't just do one prediction: we looped through all the images inside our verification images folder to get an accurate prediction. That's exactly what we're going to do here, so we need to grab those images and bring them into our app folder. Going back into the root folder (remember, faceid is my root folder), I'm going to copy this application_data folder and paste it into the app folder. Inside it there are two things: a folder called input_image, which holds the image our webcam captures to verify against, and a bunch of other images, our verification images, which are the ones it compares against. So that step was easy: we grabbed application_data and pasted it into the app folder. Let me zoom in so you can see it. That's step three done; this is really all just setup, nothing too crazy.

Next we need to create a custom layer module. This is relatively straightforward, but let's take it step by step. I'm going to close the terminal and create a new file called layers.py, which is going to hold our custom L1 distance layer module. What we need to do here is bring in our dependencies, then jump back into our Jupyter notebook and bring over the custom distance layer. And as per usual guys, all the code I write here will be available in the description below, so if you get stuck or you're not sure where to put something, you can pick it up there. First up, I'm going to set my Python interpreter.
If you've got a base interpreter, that's perfectly fine. Since I'm using a virtual environment, I'm going to point VS Code at it. If you haven't seen how to build a virtual environment before, there's an example inside my five-hour TensorFlow Object Detection tutorial, where I think I called it tfod; here I've created one called faceid. Inside VS Code you can set that kernel by choosing the Python interpreter under the environment's Scripts folder. That basically means that when I'm coding inside VS Code, I'm using the same kernel I've used to develop pretty much the whole app.

Now, inside our layers file we first need to bring in some dependencies, and then copy over the custom L1 distance layer. So, those are our two main dependencies imported, two lines of code: first, import tensorflow as tf, which brings in the base TensorFlow library so we can use it across the board; and second, from tensorflow.keras.layers import Layer, which brings in the Layer class, and that's really important when it comes to building a custom layer. Just pay attention that Layer is capitalized.

Next we bring in the custom L1 distance layer from Jupyter. I've already got the notebook open (it looks like the kernel has been killed off, but that's fine as long as we can see the code). I just need to copy over this L1Dist class, which was written back in episode four, under step 4.2, and paste it into layers.py. Let me explain the reasoning: when we go and load our custom model from an .h5 file, we need to pass through our custom objects, and our L1 distance layer is a custom object. That's why we need it inside our Kivy app. This applies wherever you use custom objects with TensorFlow: whether it's a custom layer, a custom activation function, a custom optimizer, or a custom loss function, you're going to need to do this. Just a key thing to know.

So that's our L1 distance layer brought in, and we can mark step four as done. So far we've set up our app folder, installed Kivy, created our validation folder (it's called application_data), and created our custom layer module. The next task is another easy one: we just need to copy our .h5 model from the root folder into the app folder. Remember, when we finished training our model we got a file called siamesemodel.h5, so all we need to do is grab that and paste it into the app folder. That's step five done.
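For reference, here's roughly what layers.py ends up looking like. This is a sketch based on the class described above from episode four's step 4.2, so treat the method body as indicative rather than verbatim:

```python
# layers.py - custom L1 distance layer, needed later to reload the Siamese model
import tensorflow as tf
from tensorflow.keras.layers import Layer

class L1Dist(Layer):
    """Element-wise absolute difference between two embeddings."""

    def __init__(self, **kwargs):
        super().__init__()

    def call(self, input_embedding, validation_embedding):
        # Similarity = |anchor embedding - validation embedding|
        return tf.math.abs(input_embedding - validation_embedding)
```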
The model file is needed so we can reload our model from disk; basically, when we go to use the model, we'll be able to load those trained weights back up.

Now we're onto the good bit, where we'll probably write a little more code: building out the template Kivy app. We're going to bring in some dependencies, build our layout, and write our update function. I'll go through it in detail, but the update function is going to be used to refresh the feed from our webcam. Think of it this way: our Kivy app is standalone, so we need it to be continuously updating in order to see our head moving inside the app. Then we'll bring over our preprocessing function from Jupyter.

First up, we need to import some dependencies, and you're probably thinking, Nick, where the hell am I importing these dependencies to? Well, we need to create an app file. We're going to call it faceid.py, because this is really what it's all about: performing our verification. So I'll create a new file named faceid.py, and in it we'll import our Kivy dependencies first, and then some other dependencies as well: TensorFlow, OpenCV, pretty much the stuff we've used to build up the app so far. There are going to be quite a few imports, but don't forget, we'll take a step back and look at everything we've imported.

OK, that's all of our Kivy dependencies imported, eight lines of code. Let's take a look. The first two are to do with the app layout: from kivy.app import App, which is our base app class, and the BoxLayout class. If you go into the Kivy documentation and look up box layout, you'll see it gives you a vertical or horizontal box; in our case we'll use a vertical box, because we're going to have our camera at the top, then a button, then the verified/unverified label. So that's from kivy.uix.boxlayout import BoxLayout, and note BoxLayout is in camel case.

The next few imports I really just think of as UX components. I've written from kivy.uix.image import Image, which will be used for our real-time webcam feed (you'll see later that there's some magic trickery needed to get that to work); from kivy.uix.button import Button; and from kivy.uix.label import Label, which lets us work with text. So three UX components: Image, Button, and Label. Then we've brought in some other stuff.
First there's from kivy.clock import Clock, which lets us make continuous updates to our app. This is what gives us a real-time feed, because natively we're not running a loop, so we need something to keep processing those updates from our webcam. Then I've imported Texture, from kivy.graphics.texture import Texture. This is something I had to play around with a ton when setting this up: we need to convert the image from our OpenCV webcam into a texture, then set our Image widget's texture equal to it. It's a bunch of messing around to get it to work, but it does work, I've tested it. And then I've imported the logger, from kivy.logger import Logger. We'll use this towards the end, but it makes your life a bunch easier because you can see some metrics; it's particularly nice when you don't want to show that information to your users but you still want to see how your app is performing and what's happening behind the scenes. So that's all of our Kivy imports, eight lines in total.

I didn't include it as a separate step, but we also need to import some other dependencies, so while we're here let's do those too. That's five more lines. First we import OpenCV (import cv2), because we know undoubtedly we'll need it to access our webcam. Then import tensorflow as tf, then our custom distance layer, from layers import L1Dist, which brings in the class we just wrote in layers.py. Then we import os, which makes it easier to work with file paths, and last but not least import numpy as np. That's it for dependencies, and yes, we've brought in a ton. We can mark that step as done.
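With all of that in place, the top of faceid.py should look something like this (a sketch assembled from the imports just described):

```python
# Import kivy app and layout dependencies first
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout

# Import kivy UX components
from kivy.uix.image import Image
from kivy.uix.button import Button
from kivy.uix.label import Label

# Import other kivy stuff
from kivy.clock import Clock
from kivy.graphics.texture import Texture
from kivy.logger import Logger

# Import other dependencies
import cv2
import tensorflow as tf
from layers import L1Dist
import os
import numpy as np
```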
The next thing is to start building our layout, so let's add a comment, "build app and layout", and do it. OK, that's the shell of our app started; I'm going to take this step by step as usual rather than building it all up in one hit. First we define our app: I've written class CamApp, passing through App, which performs a little inheritance so we can use Kivy's App class. The first method we define is build, so def build(self):, and build is a method you need to implement when working with Kivy. If you take a look at the basics section of the Kivy docs, there's a good example: they write class TestApp, passing through App, and then define the build function, so we're pretty much following that same structure.

So those are the beginnings of our app: class CamApp (capital C, capital A) passing through the App class we imported up top, with a colon, then def build(self): with just pass for the moment; we're not doing anything there yet, but we will in a second. Then we set up our run block, which just keeps things clean and ensures we run successfully: if __name__ == '__main__': we run CamApp.run().

If we go and test this now, the way you run the app is to make sure you're in the same folder and run python faceid.py. We don't actually have the app doing anything yet, so nothing is going to open up, but this is effectively what will run once everything is wired up. It doesn't look like we've got any errors just yet... well, OK, we do have a bit of an error, but that's perfectly fine, we'll get to it in a second. What we need to do now is finish building up our build function.

OK, those are the beginnings of our layout components: three additional lines of code plus a comment. This forms the base structure of the app: an image at the top, which will be our real-time webcam feed; a button we click to go and verify; and a label at the bottom that tells us whether verification hasn't started, or it's completed and you're unverified, or completed and you're verified.

Let's look at the three objects. I've written self.image1 = Image(...), our main image. We could arguably just call it image, but I'm mindful that certain classes use that name; this uses the Image class from up top, with one keyword argument, size_hint=(1, .8). This sets the dynamic layout of each component: our image takes the full width and the majority, 80 percent, of the vertical height. Then we set our button: self.button = Button(...), from the Button class up top, with two keyword arguments. We've set text='Verify', and eventually we'll also need to set the on_press argument to run a certain function, but because we haven't defined that function yet, we'll come back to it.
So we've set text='Verify', so our button will say Verify, and then size_hint=(1, .1): the full width but only ten percent of the height. Remember, for the image we set size_hint to (1, .8) inside a set of parentheses, meaning it's a tuple; the button's size_hint is (1, .1). Really, just think of this as the text plus how big, dynamically, you want the button to be. Then we set our verification label, the text that actually tells us whether or not we're verified: self.verification = Label(...), using the Label class from up top (remember, our three core UX components are Image, Button, and Label, and we're using all three). Two keyword arguments again: text='Verification Uninitiated', which is what it starts off as (eventually, once we trigger verification, you'll see it go to Verified or Unverified), and size_hint=(1, .1).

There are just a couple of additional lines to write before we can kick this off and see something. But first, to recap: we've created three new components, an image, a button, and a label, and they'll show up in our app in sequential order: the image first (eventually our webcam feed), then the button, then the label.

Alright, let's finish this up and at least test the layout. OK, that's the layout pretty much done, another five lines of code. First I created a new variable called layout and set it equal to BoxLayout(orientation='vertical'). Remember how I described right at the start that the BoxLayout class gives us either a horizontal or a vertical layout? We've set it to vertical, so our objects appear in sequential order from the top down. Then we add our three widgets. Keep in mind we've created these UX components, but we haven't actually added them to our layout yet; this is akin to creating a module in Python but never importing it into your notebook. That's exactly what the add_widget method is for: layout.add_widget(self.image1) for our image, layout.add_widget(self.button) to add our button, and layout.add_widget(self.verification) for the verification text.

I think based on this we can try starting the app and see what we get. Let me clear the terminal so it's a little easier to read, and run python faceid.py. Let's see if it opens up successfully.
And we've got an error: run() missing 1 positional argument: 'self'. Ah, I've gone and missed a set of parentheses: we need CamApp().run(), because otherwise we're just referring to the class rather than creating a new instance. Let's try that again... that's looking positive, and there you go: that is our base layout. It looks kind of shabby right now, but that's perfectly fine, we're going to add our webcam in a second. Let me zoom in so you can see it better: this top area is eventually going to be where our webcam feed goes, we've got our Verify button, and we've got our label at the bottom saying Verification Uninitiated. We can click the button; right now it does nothing because we haven't hooked anything up to it. If we close the window, it shuts everything down.

Just to prove how this widget layout works: if we swap these around, say we put the button below the verification label, and run it again, you can see the text is now above the button. That actually kind of looks better than what I had, but it gives you an idea of how this works: the layout is sequential, so the order you add the widgets determines how the layout forms. Let's add it back; I wonder if hot reload will pick it up... no, it won't, that's perfectly fine, we'll shut it down and rerun, and the order is back to normal: image, button, then label. Cool, that's step seven done: the beginnings of our layout.
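For reference, the shell of faceid.py at this point looks roughly like this (a sketch of what we just built; note that self.image1 and self.verification get renamed a little later in the walkthrough):

```python
class CamApp(App):

    def build(self):
        # Main layout components
        self.image1 = Image(size_hint=(1, .8))
        self.button = Button(text="Verify", size_hint=(1, .1))
        self.verification = Label(text="Verification Uninitiated", size_hint=(1, .1))

        # Add items to layout, rendered top to bottom
        layout = BoxLayout(orientation='vertical')
        layout.add_widget(self.image1)
        layout.add_widget(self.button)
        layout.add_widget(self.verification)

        return layout

if __name__ == '__main__':
    CamApp().run()  # parentheses matter: instantiate, then run
```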
Now the next thing we need to do is set up a feed from our webcam. At the moment there's just that blank area at the top where the webcam should be, and it's not doing anything yet. We need to set up OpenCV to grab our webcam, and then keep refreshing that image, so we're effectively looping and continuously reading the feed. First I'm going to create a capture device inside our build function, using OpenCV, and then we'll create an update function that we use to refresh the feed. Setting up the capture is really no different to setting up any video capture device with OpenCV: I've written self.capture = cv2.VideoCapture(4). You might need to play around with that device number depending on which device your webcam is; I say it all the time, and I'm currently working on an OpenCV series to show you how to determine what that number is. On my machine it's 4, but if you have no other webcam devices apart from your main webcam, it's probably going to be 0. If you get an error back saying something like the source is empty, or OpenCV having issues with imread, it's probably because either your webcam is unavailable or your video capture device number is set incorrectly.

Now that that's done, we need something to actually update and read the feed from the webcam. We're going to use our Clock class to set a schedule that continuously runs an update function, but first let's define it. I've set up a new function called update, which is going to run continuously to get the webcam feed. I've written def update(self, *args):, so we take self and unpack any other arguments passed to it, and for the moment the body is just pass. If I ran this right now you wouldn't see anything different (again, to run it, just run python faceid.py from the app folder), because one, we're not rendering the image to our layout, and two, we're not actually running this update function yet. So let's start writing it up.

The next two lines read our frame from OpenCV: ret, frame = self.capture.read(). These are really just standard OpenCV lines; if you look inside our Jupyter notebook, we wrote pretty much the same thing in our real-time test. This grabs the capture device from up top and reads a frame, returning two things: a return value and the actual frame as a numpy array. Then we cut the frame down: rather than keeping the full 480 by 640 pixels, we slice it down to 250 by 250.
That's what this slicing is doing: frame[120:120+250, 200:200+250, :]. I can never remember whether it's height or width first, but height comes first, so we're grabbing 250 pixels of height starting at 120, then 250 pixels of width starting at 200, and the final colon grabs all of our channels. So we've cut the image down to a little square box, and that's our frame captured. Now, we could run this, but we still wouldn't see anything on screen; we need to update the image widget so the frame actually shows up, so let's wrap this up and then set up and trigger this update function.

OK, so this next bit applies our image texture to the image widget from up top. Actually, I'm really not happy with calling that widget image1, so I'm going to rename it self.webcam: change it up in build, and change it anywhere else we referenced it down here. That makes more sense: rather than image1, it's now webcam.

Let's look at the four lines of code we wrote down here. First, buf = cv2.flip(frame, 0).tostring(): this flips the image vertically (the texture would otherwise render upside down) and converts it to a byte string. Then we start converting our image into an image texture. This is basically just a format I realized I had to convert into so we can get our frame to show up. Let's open up the docs for Texture: "the Texture class handles OpenGL textures; depending on the hardware, some OpenGL capabilities might not be available", and so on. Basically, we convert our image into this texture, and we use the blit_buffer function: first we convert the image into a buffer, then we effectively render it as a texture. The next line is image_texture = Texture.create(size=(frame.shape[1], frame.shape[0]), colorfmt='bgr'): we pass two keyword arguments, size, a tuple of the frame's width and height, and colorfmt set to 'bgr', because BGR is the way OpenCV actually brings in an image. There's an error showing there, "expected expression"; I don't think the code is wrong, it should be fine, we'll come back to it if we get any errors.
Then image_texture.blit_buffer(buf, colorfmt='bgr', bufferfmt='ubyte'): we grab the OpenCV byte string we stored in buf and pass it through the blit_buffer method I just showed, again with the color format set to 'bgr' and the buffer format set to 'ubyte'. Finally, we grab our webcam object from all the way up in build and set its texture equal to image_texture. So again, it's a long-winded process, but what we're doing is getting our image, flipping it, converting it into a texture, and then setting our webcam Image widget's texture equal to that texture: a long-winded way to effectively render our image in real time.

Now that that's done, we still need to actually run this update function, and this is where our Clock class comes in. I've written one line inside build: Clock.schedule_interval(self.update, 1.0/33.0). Think of this as triggering off a loop: it tells our app to run a certain thing every so often. The first argument is what we want to run, our update method, and the second is how frequently to run it. If you look at the docs for schedule_interval, it schedules an event to be called every timeout seconds. We're going to run it 33 times every second, hence 1.0/33.0, which will keep it running and ideally mimic what you'd expect to see with the human eye.

That's a lot of work to get a real-time webcam feed; it is a little tricky with Kivy, but we've defined this whole update function and set up our video capture device. All things holding equal, I'm still a little concerned about that error we saw, so let's test it out. I'll clear the terminal and run python faceid.py. We're still getting that error... why? Oh, because we're not actually grabbing an object: that line should be slicing frame. I was slicing nothing, which is why it's saying it expected an expression inside the square brackets; we want to slice our array, so typing frame in front of the brackets fixes that up.
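With that fix in, the capture setup and the update method look roughly like this (a sketch of what we just walked through; the device index 4 and the crop offsets are specific to my machine):

```python
    # Added at the end of build(), before `return layout`:
    #   self.capture = cv2.VideoCapture(4)            # device index is machine-specific; try 0
    #   Clock.schedule_interval(self.update, 1.0/33.0) # ~33 updates per second

    # Run continuously to get webcam feed
    def update(self, *args):
        # Read frame from OpenCV and cut down to a 250x250 square
        ret, frame = self.capture.read()
        frame = frame[120:120+250, 200:200+250, :]

        # Flip vertically, convert raw bytes to an OpenGL texture,
        # and render it by assigning it to the Image widget
        buf = cv2.flip(frame, 0).tostring()
        image_texture = Texture.create(size=(frame.shape[1], frame.shape[0]), colorfmt='bgr')
        image_texture.blit_buffer(buf, colorfmt='bgr', bufferfmt='ubyte')
        self.webcam.texture = image_texture
```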
Alright, let's run this now. Fingers crossed... oh my god, it's working! You can see we now have our webcam feed, that's my little head in there, appearing in real time inside our Kivy app. I know it was a lot of work to get there, but you can see the speed is pretty quick, we've got our face there, and this is akin to what you might have when you're using Face ID on your iPhone. So that is very good, we've got it successfully running. We could bask in the glory, but no, we're not going to stop there. Let's close this and go back to our to-do list.

So we've now built up our update function; let's mark it off as "and render webcam". A lot of effort, but basically what we did in that section was: set up our capture device to get the webcam, write this huge update function (it's not actually huge) which reads the webcam feed and slices the image down so we've only got 250 by 250 pixels, convert our OpenCV array, which is an image, into a buffer, convert that into a texture, render the texture to our webcam object, and finally use the schedule_interval function to trigger the update function so it runs in real time. OK, that's a lot.

The next thing we need to do is bring over our preprocessing function: remember, we need to preprocess our image before we pass it to our TensorFlow model, and it just so happens we can grab this straight out of our Jupyter notebook. The preprocess function is towards the beginning, I think during the data engineering part: there's preprocess_twin, and the one we need, preprocess, should be close by. We copy it, paste it into our app file, and tab it in so it becomes a method. We could clean it up a little, but I'm not going to bother: it's the function we defined in step 3.2, which means it was in episode three, where we preprocess an image from a file path: it loads the image from file and converts it to 100 by 100 pixels. The reason we'll use it is that when we perform our verification loop, we'll pick up images from file paths, preprocess them, then pass them to the model.

That's that step done, again pretty straightforward: we grabbed our preprocess function and dumped it inside. If I zoom out a bit, because we've got a lot in here, inside our CamApp class we should now have three different methods: build, our base method; update, running every 1/33 of a second to refresh our webcam; and preprocess, which we'll use in a second. Save that, go back to the to-do list, and step nine is done.
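Here's roughly what that preprocess method looks like once it's tabbed in (a sketch of the step 3.2 function from episode three; the 0-1 scaling is the normalization used in the earlier episodes):

```python
    # Load image from file and convert to 100x100px
    def preprocess(self, file_path):
        # Read the raw bytes from disk and decode the JPEG
        byte_img = tf.io.read_file(file_path)
        img = tf.io.decode_jpeg(byte_img)
        # Resize to 100x100x3 and scale pixel values to be between 0 and 1
        img = tf.image.resize(img, (100, 100))
        img = img / 255.0
        return img
```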
Now the next thing to do is bring over our verification function. We're almost there, guys, we're getting close. It's towards the bottom of the notebook, after step six (wow, we wrote a lot of code for this, didn't we?). We need this verify function over here, so I'm going to copy it, paste it into the app file, and tab it in. That comment is terrible; let's make it "verification function to verify person". Save that, and it looks like we've got a few errors, but they're easily solved. They're happening because the function calls preprocess, which in the app is now a method, so we need self.preprocess in both places, and we also need to pass self through to verify itself. Sorry, that's my bad: the reason we were getting errors is we weren't passing the current class instance through to our verify function; as soon as we add self, we're good to go, and we can mark that step as done.

There's not much left to do now. We need to make sure we save down an image from our webcam, because remember, previously when we had our loop we effectively took a snapshot from the webcam and saved it into application_data/input_image/input_image.jpg. We need to make sure we save our current frame into that folder as well. We also need to load our current model, and we can remove the old function arguments; let's just make sure we can handle any args with *args. Then we set our detection threshold, let's say 0.5 as per usual, and our verification threshold: these are the metrics we use to determine how tight we want our detections to be, so we'll set those first under a "specify thresholds" comment. Then we capture the input image from our webcam, with a save path set to that same application_data/input_image/input_image.jpg path, so every time we hit Verify we effectively replace that image; this is what we copied over from our existing code inside the Jupyter notebook. Let's finish writing out the code to save our image from the real-time webcam into that file path.
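The top of the reworked verify method then looks something like this (a sketch; the 0.5 values are just the starting thresholds, which get tuned later):

```python
    # Verification function to verify person
    def verify(self, *args):
        # Specify thresholds
        detection_threshold = 0.5
        verification_threshold = 0.5

        # Capture a frame from the webcam, crop it the same way update() does,
        # and save it as the input image for verification
        SAVE_PATH = os.path.join('application_data', 'input_image', 'input_image.jpg')
        ret, frame = self.capture.read()
        frame = frame[120:120+250, 200:200+250, :]
        cv2.imwrite(SAVE_PATH, frame)
```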
OK, I think that's looking good. We've pretty much replicated what we had in our update function to save out our image: ret, frame = self.capture.read(), then we slice the frame down to 250 by 250, and then write it out to the save path.

Keep in mind there are still a couple of things to handle. Down in the verify function we're getting an error on model.predict, and that's because we haven't actually reloaded our model now that we've taken it out of the function's arguments. To solve that, we're going to load our model inside the build function. Under the layout code I'll add a comment, "load tensorflow/keras model", and create a new variable: self.model = tf.keras.models.load_model('siamesemodel.h5', custom_objects={'L1Dist': L1Dist}). That one line loads the model back up from our .h5 file: load_model takes the file name, and then we pass through our custom objects as a dictionary, with the key 'L1Dist' set to the L1Dist layer class we imported from layers.py up top. With the model loaded, we can go back down to the verify function and change the result line to use self.model.predict. Cool, I think that's looking good.
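So that loading line inside build, as described, is roughly:

```python
        # Load trained model from disk; the custom L1Dist layer has to be
        # passed through custom_objects for Keras to deserialize it
        self.model = tf.keras.models.load_model(
            'siamesemodel.h5', custom_objects={'L1Dist': L1Dist}
        )
```

The same custom_objects pattern applies to any custom layer, loss, activation, or optimizer you used at training time.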
Wait, hold on, we've got one more thing to do: we haven't updated our verification label. Remember, we want that label to output whether or not we've successfully verified. Let's quickly check the to-do list: "update verification function to handle new paths and save current frame", we just did that; that's what happens when somebody clicks Verify: we run self.capture.read(), cut down the current frame, and save it to the file path, so the input image should change every time we hit Verify. Next is "update verification function to set verified text", and then "link the verification function to the button", so we've got to do that as well. Let's do the text bit first.

Before that, though, I spotted a problem: we'd be overriding an object, and that's no good. I don't know if you saw my error there, but our existing verification function already has a variable called verification, and our Label object on the class is also called verification. Even though we'd be getting one from self, rather than have that overlap I'd rather keep it nice and clean, so I'm going to rename the Label object to verification_label, here in build and everywhere it's used. Remember, our verification function returns this verification value, which is whether our detections pass a certain threshold; we don't want to overwrite the label, we want to set the label based on it. So down in verify we write self.verification_label.text = 'Verified' if verification comes back true, else 'Unverified'. Save that.

There's only one thing left before we can test it: linking the verification function to the button. Go right back up to the build function, and inside the Button constructor set on_press=self.verify. Alright, I think that's good to go. We wrote a lot of code there without running anything, so it'll be interesting to see if this works. Let's run it... looks like it's successfully loading our model down in the console, no errors yet. I've clicked our button, looks like stuff's happening down the bottom... and it said Unverified. OK, that's perfectly fine, because it means our verification function is actually running. Let me make sure the new image is being written out: if we go into the app folder, then application_data, and take a look at the input image... that's good, the input image is being written out. Unfortunately we're still unverified, but let's take the green screen down and run it again: we've got a new input image, we can see it's saving, still coming back Unverified... new image... OK, that one said Verified. Verified again. Hand up over my face: Unverified. Unverified. Verified. Oh man, it's working, how good is that? Try it again, a little blurry this time: Unverified, then Verified. Covering our face: Unverified; hand away: Verified.
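Pulling all of those pieces together, the finished verify method ends up looking roughly like this. It's a sketch reconstructed from the walkthrough; the prediction call packs the input and validation images with a batch dimension of 1 each, the same way the notebook did in earlier episodes:

```python
    # Verification function to verify person
    def verify(self, *args):
        # Specify thresholds
        detection_threshold = 0.5
        verification_threshold = 0.5

        # Capture input image from our webcam
        SAVE_PATH = os.path.join('application_data', 'input_image', 'input_image.jpg')
        ret, frame = self.capture.read()
        frame = frame[120:120+250, 200:200+250, :]
        cv2.imwrite(SAVE_PATH, frame)

        # Compare the captured input image against every verification image
        results = []
        for image in os.listdir(os.path.join('application_data', 'verification_images')):
            input_img = self.preprocess(SAVE_PATH)
            validation_img = self.preprocess(
                os.path.join('application_data', 'verification_images', image))
            # Two model inputs, each shaped (1, 100, 100, 3)
            result = self.model.predict(
                list(np.expand_dims([input_img, validation_img], axis=1)))
            results.append(result)

        # Detection: number of predictions above the detection threshold
        detection = np.sum(np.array(results) > detection_threshold)
        # Verification: proportion of positive detections over all comparisons
        verification = detection / len(
            os.listdir(os.path.join('application_data', 'verification_images')))
        verified = verification > verification_threshold

        # Update the on-screen label
        self.verification_label.text = 'Verified' if verified == True else 'Unverified'

        return results, verified
```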
Success, guys - that is your very own Face ID app now successfully running. Now let's actually test it out a little bit more: what happens if we put another person's face up against it? So I went and got Jim Carrey, my guy - let's try this. The glare's a little much... all right, Jim's unverified. Let's try someone else: Robin Williams. Come on, focus... oh no, it's verified Robin Williams - that's terrible. Okay, so now it says Robin Williams is unverified. Weird. That's me, verified against the green screen - I wonder why it wasn't working before. Still verified; hand up, verified again. Interesting - I wonder if this is going a little screwy. You can see here that we're now getting slightly less reliable results. Let's try dropping the green screen: hand up, unverified; away, verified; hand up, unverified; away, now we're unverified; come on, verify... okay, that's verified. All right, so we need to play around with this a little bit more. You can see that sometimes we're getting good results and other times not such great results, and this is ideally where you want to have some logging set up so you can see how well (or how badly) your model is actually performing, because you might need to tweak those detection metrics. So what I'm going to do is close this down and show you how to log out some data. Let's go into our verify function and use our Logger - this is step 13, now done. We've successfully verified, guys; I know it's taken a long time, but we got there. The last thing to do is set up our logger so we can actually see what proportion of detections are over a certain threshold.

So inside of our verify function we're going to use our Logger to log out two metrics. Okay, those are the two lines I've written for our log. I've written Logger.info, and to that we're passing through our results, which we get from our existing verify function. Then I've also started to calculate a couple of metrics: this line here determines how many of our results are actually surpassing the 50% threshold. I've written Logger.info, and inside of that we've got our results wrapped in a NumPy array, we're checking which of those results are greater than 0.5 (which is 50%), and then we're summing them all together, so we get the number of results that surpass that threshold. What we can also do is throw in a couple of extra thresholds - let's say 0.2, 0.4, 0.5 and 0.8. Now if we run this again we can get some additional info as to how well our Face ID model is actually working. When you run the Logger, you'll see the output inside of the console, which is much nicer if you're a developer actually building this up, rather than having it show to your user.
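Those logging lines, plus the extra thresholds, look roughly like this - a sketch assuming results is the list of model.predict outputs collected in the verification loop.

from kivy.logger import Logger
import numpy as np

# raw similarity scores for every comparison image
Logger.info(results)
# how many comparisons clear each candidate threshold
Logger.info(np.sum(np.array(results) > 0.2))
Logger.info(np.sum(np.array(results) > 0.4))
Logger.info(np.sum(np.array(results) > 0.5))
Logger.info(np.sum(np.array(results) > 0.8))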
Logging it to the console is going to look a whole lot nicer than showing it back to the user. Also, the Kivy app is resizable, so if we wanted to make it a little bit tighter we definitely could - that looks a little bit better right there. Okay, let's go ahead and test it out. If I drop down here and bring up the logger output, we can see what proportion of our detections are showing up there: 50. How many images do we have inside of our verification folder? So this is saying 50 out of 50, 50 out of 50 - pretty much all of them are being matched, which is kind of crazy given we've got a green background. Let's see about Robin. All right, here we're getting 42, 38, 37... wait, hold on, this is coming back unverified now? The input image is Robin, and what was that detection threshold? We had 50, so 37 would surpass 50 - I would have thought that should come back as verified, but it's good, I mean, it's coming back unverified. Let's see it again; come on, focus. Okay, unverified - so this is basically saying it's getting 32 out of 31 matches? That seems a little bit weird. Let's bump the verification threshold up higher: rather than 50, let's set it to, I don't know, 80. Let's try Robin again and restart our app. This is some of the tuning that you'd ultimately have to do, guys, as you're building one of these models - keep in mind that once you train your model it doesn't end there; normally you're going to have to keep retraining. All right, so that is centered-ish. Okay, so now we're getting fewer matches, so again he's unverified. What about ourselves? Okay, so we are way more accurate over here... right, this is not making sense: we're getting 47, so a large proportion are surpassing 80%, but we are getting unverified back. These are our results - we can see them down here - and we are getting pretty good detection metrics, but our text down here is still saying unverified. We are getting some scores that are super low (take a look at that: 0.09), but the majority are still pretty good, so I wonder if we've got some issues with our detection threshold. Let's try that again - I wonder if it's delayed a little bit. Still unverified, still unverified. What if we drop the green screen? Unverified. Is this because we've bumped up our detection threshold? Down here we're saying we're still getting 44 surpassing the 80% threshold, which should be enough. So what is this actually doing? It's computing verification equals detection divided by how many images we've got inside of the folder, and I would have thought that would come back as true. So let's drop this back down - this is some of the stuff that you might need to play with; I mean, ultimately it works, it's just a matter of fine-tuning how accurate our model actually is. Let's verify now... and now we're getting un-... did we change it back? What's our detection threshold? Right now it's at 50 percent, and our verification threshold is also 50, but we're saying down here that we're summing the results that are above 0.2... I've got a feeling we've got something incorrect over here in this np.sum. Let's log out the detection too, with Logger.info(detection) - that detection is the number of results that surpass our detection threshold from up here. So let's go and check this out.
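Since the two thresholds keep coming up, this is roughly how they relate inside verify() - a sketch using the 50%/50% starting values, assuming the results list and the verification_images folder layout from above.

# score a single comparison must exceed to count as a match
detection_threshold = 0.5
# detection: how many comparison scores clear the detection threshold
detection = np.sum(np.array(results) > detection_threshold)
# verification: the proportion of matches across the verification folder
verification = detection / len(
    os.listdir(os.path.join('application_data', 'verification_images')))
verified = verification > 0.5  # the verification threshold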
It works, I just want to make sure that we get this right. So if we hit verify, ideally we should get the number that's being used to calculate the metric. That's saying 47 - so this is 47 - and does it say verified or unverified? Verification equals detection divided by the number of images: 47 divided by 50. So let's print out verification - delete this, we don't need that - and let's print out verified with Logger.info as well. This is the advantage of having your logger: you can actually output and tune these models. Rather than just leaving it there, you can go and fine-tune and actually see where stuff is going right and maybe where it's not. So here what I'm doing is logging out the detection value, the verification value, and then whether or not we're getting verified back. If we run this now, we should get back those three metrics. Okay, so we had 43 detections which surpassed our detection threshold, the verification value is 0.86, and we're getting back the value True - yet the label is still saying unverified. All right, so we do have a bug there. Guys, here it is: this should actually be verified. Rather than this value here being verification, it should be verified - so for self.verification_label.text we're basically specifying: if verified is True, show "Verified", else show "Unverified". It was checking verification first, which returns the detection value divided by the image count, so it would never literally equal True - no wonder it wasn't working. Let's go run this - and that is the advantage of using the logger: you can actually find out where stuff is going right or wrong. If we go and verify with a blurry image: all right, so now we're verified, cool, and we're saying we had 50 surpassing the threshold and 100% verification, so we now get True. If I put my hand in front of it, let's see... it's still verifying, surprisingly. I think we need to bump this up significantly, so let's set our detection threshold to, I don't know, eighty percent. Let's try that again. Okay, we're getting verified; what happens if I put my hand up? We're still getting verified there, so we might need a little bit of fine-tuning - that's perfectly fine, although it's kind of weird that with a hand up it still says you're verified. All right, let's try somebody else: Robin. Oh mate, it's verifying against Robin Williams - no, that is not acceptable. We've got to bump this up, so let's say the threshold could actually be 90.
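With that bug found, the corrected tail of verify() looks roughly like this - a sketch continuing the snippet above, assuming the label widget is named verification_label as renamed earlier.

# log the intermediate values so threshold tuning is visible in the console
Logger.info(detection)
Logger.info(verification)
Logger.info(verified)

# key the label off the boolean verdict, not the raw verification score
self.verification_label.text = 'Verified' if verified else 'Unverified'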
So what are we getting? We're getting back very accurate results down here, so let's set this to 0.99 and see how that works - kind of weird that it keeps verifying against him. All right, so that's us verified, and we're getting back all of our positive results surpassing the threshold. Let's try Jim... I don't know, maybe Robin Williams was able to bypass our threshold, but now it's saying that Jim Carrey is verified at 0.99 - there is something not going quite right here. That input image is saying we're verified... wait, hold on, where are we running this from? Let's bump up our thresholds again. All right, so that's me, verified. What about Jim? Okay, so Jim is unverified. So I've gone and set the thresholds really, really high now: the detection threshold is 0.99 and our verification threshold is 0.8. If I go and do myself, we are verified; if I go and do Jim, Jim is unverified. Okay, so in this particular case we had to set our detection threshold super high to ensure we didn't let Jim in, but in a nutshell, that is our Kivy Face ID app now done, guys. Again, you might need to do some tuning with your detection and verification thresholds - I've gone and set them super, super high: the detection threshold is 0.99 and the verification threshold is 0.8. Now keep in mind that what you could also do to hedge against these types of issues is have way more images inside of that verification folder, so you have a better chance of keeping out anybody trying to break in while still letting verified people through to your app. You could also go and train your model a ton more. But in a nutshell, that is our app now done, and we can say that our logger is done too. We've gone and done a ton of stuff, guys: we imported all that Kivy stuff, built our camera app, wrote our update function, brought in our preprocess function, updated our verify function, and I showed you how to tune the detection threshold as well. Remember, the logger is super important: it allows you to see what your detection results actually look like, and in this case you don't want to let Jim Carrey or Robin Williams through, so you want to ensure you set these thresholds appropriately. The metrics I ended up with were a detection threshold of 0.99 and a verification threshold of 0.8 - again, we could set these even higher if we wanted to, and we could even add more images into our verification folder. But on that note, that about does wrap it up...

Hold it right there. While this model and this application worked, it was far from perfect - you saw that Robin Williams was able to bypass our verification. So after two and a bit hours of recording this tutorial, I went off to Korean BBQ and really thought about what we could do to improve this model. Now I'm going to walk you through exactly what I did to achieve significantly superior performance.

All right guys, so you saw that in v1 of the model we had a little bit of an error: this image was being verified as positive, and this image was also being verified. As a business user you'd probably take a look at this and go, hey, the value of this particular model isn't that high, because it's letting someone through that shouldn't be verified. So I took a look at how we could actually improve this model, and I went through my standard process: normally I try to add more data, apply data augmentation, train for longer, and see if that
improves things. If not, then we can also do a little bit extra in terms of changing the model architecture, maybe doing some additional pre-processing, and going from there. But in this case I want to take you through what I did to significantly improve the performance of this model. So if we jump back into our code - and again, this is the standard code that we wrote as part of our Siamese neural network series - I did a couple of key things, but namely they revolved around data augmentation. What I went and did is augment our existing positive and anchor data to produce significantly more data: this meant that rather than training on 300 images, we trained on 3,000. I also went and improved our performance and logging metrics, so you can actually see what our precision and recall look like. So let's take it step by step and see what changed in this code - I'm going to zoom in a little so you can see that a bit better.

If I scroll on down, the first change comes right about here, in section two. What I went and did is, for every single image inside of the positive and anchor classes, I applied data augmentation. I defined this new function here called data_aug, and for every single image inside of our anchor class I looped nine times. This means that for every anchor and positive image we'd now have ten times as many images, including the one we already had - so nine additional images for every single original. And I went and applied five different random data augmentations: random brightness, random contrast, random flip left/right, random JPEG quality, and random saturation. The flip would flip the image left or right; the brightness would increase or decrease the brightness; the random contrast would increase or decrease the contrast; the random JPEG quality would improve or degrade the quality of the JPEG image; and the random saturation would change the saturation. So I went and applied all of those different augmentations - five lines of code. One was commented out: I think I actually got rid of the random crop because it was a bit too much. So that gave us nine different images per original. And down here is actually some redundant code - it's just part of my testing - but you can see this is what I would go and do for a real model: I tested on a single image to see what the impact of this data augmentation would actually be. Let's actually test that and see if it works. We need to import os, and we need to define ANC_PATH, which was right up here - I didn't prep to run this for you guys, but I sort of wanted to show you. We haven't imported anything... let's test this... data_aug is not defined, so let's run that... right, and we need to import uuid as well. So this goes through and creates another nine images. If I go into my anchor path now - YouTube, Face ID, application data, anchor - and take a look at the data we just created (today is the 24th of the 10th), you can see I just created all of these images here by running our data through our data augmentation. You can see they've got varying levels of quality.
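The augmentation function described above looks roughly like this - a sketch only: the seeds, factor ranges, and folder layout are illustrative assumptions rather than the exact values on screen.

import os
import uuid
import cv2
import numpy as np
import tensorflow as tf

def data_aug(img):
    data = []
    for i in range(9):  # nine augmented copies per source image
        # fresh random seeds each pass so the nine copies differ
        aug = tf.image.stateless_random_brightness(
            img, max_delta=0.02,
            seed=(np.random.randint(100), np.random.randint(100)))
        aug = tf.image.stateless_random_contrast(
            aug, lower=0.6, upper=1.0,
            seed=(np.random.randint(100), np.random.randint(100)))
        aug = tf.image.stateless_random_flip_left_right(
            aug, seed=(np.random.randint(100), np.random.randint(100)))
        aug = tf.image.stateless_random_jpeg_quality(
            aug, min_jpeg_quality=90, max_jpeg_quality=100,
            seed=(np.random.randint(100), np.random.randint(100)))
        aug = tf.image.stateless_random_saturation(
            aug, lower=0.9, upper=1.0,
            seed=(np.random.randint(100), np.random.randint(100)))
        data.append(aug)
    return data

# run it over every anchor image (then repeat with the positive folder)
ANC_PATH = os.path.join('data', 'anchor')  # assumed folder layout
for file_name in os.listdir(ANC_PATH):
    img = cv2.imread(os.path.join(ANC_PATH, file_name))
    for image in data_aug(img):
        cv2.imwrite(os.path.join(ANC_PATH, f'{uuid.uuid1()}.jpg'), image.numpy())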
Right, you can see the striations - I don't know if it comes through on YouTube's video compression, but you can see that one's getting significantly worse; that one's got a flip applied, that one's got a flip applied, and that one's got a flip applied again, with significantly degraded quality in that image - and again, that one's even worse. This reduces the quality of our images so that our model at least has a better chance of generalizing and performing well. So that was all of the data that I ended up producing, and you can see right down here - let me zoom in - there are 4,899 different images. Before, we started off with 300; now we have 4,800 in our anchor folder. In our negative folder we didn't do anything, and in positive we now have 3,320 - so a ton of different images. If you scroll, you can see what's probably our original-quality image, and you can see that we've gone and applied flips and additional data augmentation to produce more data to train on. So that was the key change, and it significantly improved model performance. What I went and did is loop through every single image inside of our anchor path, then switch it over to our positive path - swap that out there, there, and there - and that basically gave us significantly more data to go and train on. Then I brought that data into our dataset: previously this take would have been 300 over here, so we only had 300 images per dataset; I boosted that up significantly to 3,000 images, so we now had significantly more data to go ahead and train on.

Then I went and trained. If we scroll on down, I didn't change any of this... oh, actually, no, I did: I changed the shuffle buffer size, because what I noticed is that with a significantly larger dataset we needed to up the buffer size so that we appropriately shuffled our dataset - so I bumped that up to a buffer size of 10,000. And then the final change was really the training loop: no changes there, no changes there, and right down here I actually updated the training loop. First up, I imported Precision and Recall a little bit earlier on, and I set it up because - remember - when we were training we didn't actually have loss or performance metrics, which is kind of a no-no, because you don't really know how well your model is performing and whether or not it's gone completely off the deep end. This significantly improves our ability to log out precision and recall. So first up I created a Recall and a Precision object, then I updated them for every single batch: r.update_state updates recall, and p.update_state updates precision. Then I logged out those metrics: first the loss (I managed to get that working as well), then r.result().numpy() gives us our recall, and p.result().numpy() gives us our precision. And right down here you can see the actual performance metrics: on epoch 1 we had a loss of 0.85728246, a recall of 0.944, and a precision of 0.995; the loss dropped to 0.16 on epoch 2, with a recall of 0.979 and a precision of 0.99. I trained for five epochs - and I think I actually stopped during epoch six, because we were pretty good: the loss was down to 5.01e-05.
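The metric logging in the training loop looks roughly like this - a sketch that assumes train_step (from earlier in the series) has been modified to return both the batch loss and the predictions, and that each batch is an (anchor, validation, label) triple.

import tensorflow as tf
from tensorflow.keras.metrics import Precision, Recall

def train(data, EPOCHS):
    for epoch in range(1, EPOCHS + 1):
        print(f'\n Epoch {epoch}/{EPOCHS}')
        progbar = tf.keras.utils.Progbar(len(data))

        # fresh metric objects per epoch so results don't leak across epochs
        r = Recall()
        p = Precision()

        for idx, batch in enumerate(data):
            loss, yhat = train_step(batch)   # assumed to return (loss, predictions)
            r.update_state(batch[2], yhat)   # batch[2] holds the labels
            p.update_state(batch[2], yhat)
            progbar.update(idx + 1)

        # loss, recall, precision for the epoch
        print(loss.numpy(), r.result().numpy(), p.result().numpy())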
By the end we had a recall of 0.996 and a precision of 0.996 - so still a very high-performing model, but without those suspiciously perfect 100% precision and recall metrics. I also updated the evaluation to calculate over the entire test dataset, which gave us a recall of 100% - which, again, is still a little bit sketchy; I'm always suspicious of that - and a precision of 0.998. So, again, way better. The last and final thing I did is save this out as siamese model v2. Again, this is part of the process, guys: when you're actually building a machine learning model, remember that you need to iterate - it doesn't just stop after you've trained it once; you need to go through and ensure that the model is performing well. So I saved it under v2, and you can see that right inside of our root folder we then had a siamesemodel.h5 and a siamesemodelv2.h5. I took that model and dumped it into our app folder, so we then had siamesemodelv2.h5 inside of there, and to bring it into our application I changed the model that we load inside of this line here: rather than loading siamese model with tf.keras.models.load_model as previously, I just changed it to v2. That allowed us to bring in the second version of our model. This is model ops done in a slightly sketchy way, but you can see how quickly you can go and swap in an updated model - so if you wanted to train this on yourself, it's really easy: just go through that same workflow and retrain a verification model on your own face.

But now, the part that you've all been waiting for: let's actually see what the model performance is like. Inside of our app I actually dropped the detection threshold and the verification threshold - you can choose to leave them high if you want, but I found that a detection threshold of 50% was a little bit more appropriate, and I left the verification threshold at 80%. Let's go and test this out. Again, we can start it up using python faceid.py. Right, so that's me. If I try it on the green screen, in this particular case I'm getting unverified. Let's drop the green screen... we're now verified. So you can see that with a green screen it's going, hey, that looks a little bit sketchy - but we still got a reasonable number of verifications: we had 29 and 0.58. Dropping the green screen, we're now absolutely 100% verified: down here you can see it's saying all 50 images surpassed the threshold and matched, and we had a 100% verification proportion. You can see that the model is now a lot more sensitive: before, it was passing even if I put my hand up in the screen, so let's try that... you can see that is now zero verified images - by throwing up my hand, it's saying there's no way that's appropriately verified. Now let's do the almighty Robin Williams test. This was the one I was a little bit worried about, because Robin Williams should in no way be passing through our threshold. So if we go and test him - drop the screen down so we don't have so much glare - you can see he is unverified: the model is working. Let's test that again, just to prove that I didn't mess around, and let me show you in the image folder as well: if we go into app data, input images, it definitely is picking up old mate Robin. So let's try it again; come on, focus; right: again unverified. And me: verified, and verified again.
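For reference, the full-test-set evaluation and the model swap described above look roughly like this - a sketch assuming the test_data dataset and file names from the series.

from tensorflow.keras.metrics import Precision, Recall

r = Recall()
p = Precision()

# accumulate precision/recall over every test batch, not just one
for test_input, test_val, y_true in test_data.as_numpy_iterator():
    yhat = siamese_model.predict([test_input, test_val])
    r.update_state(y_true, yhat)
    p.update_state(y_true, yhat)

print(r.result().numpy(), p.result().numpy())

# save the retrained model, then point the app's build() at the new file
siamese_model.save('siamesemodelv2.h5')
self.model = tf.keras.models.load_model(
    'siamesemodelv2.h5', custom_objects={'L1Dist': L1Dist}
)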
You can see significantly better performance after going and applying that additional data and retraining. Let's try another example: I was testing this out on Chris Rock - again, he's unverified. And let's test out Jim, old Jim Carrey - can he get through our verification process? Nope: absolutely zero images verified in that particular case. Let's try me again: verified, verified, verified, verified. And there you go - that is our app now updated and performing significantly better than what we had before. This really at least gave you the chance to see what it's like to build a real-life data science model: it doesn't end after you train just once; you need to go through and performance-tune to ideally ensure that you get the best possible model that you can. Remember, what I ended up doing is adding way more images after performing that data augmentation, and improving our logging metrics so we could actually see performance as it was training. And what I'll end up doing, if you want - let me know - is try to share the actual trained model. It might not necessarily work for you, but it'll at least give you a chance to see how big that model is - I think it's about 150 megs; I'll have to work out how to get it to you - and it'll allow you to go ahead and perform facial verification yourself. So in a nutshell, we are now finally done, and that definitely concludes the Siamese neural network series, where we attempted to produce a state-of-the-art model from paper all the way through to code - and we finally got it working reasonably well. But on that note, thanks again for sticking around; that about does wrap it up. Thanks so much for tuning in, guys - hopefully you enjoyed this video, and thanks for sticking along for the entire series. I know it's been a little bit of a long journey, but we finally got there and we finally tested it out. If you do have any questions, comments, or queries, hit me up in the comments below - I'm more than happy to help you out. But thanks again for tuning in, guys. Peace.
Info
Channel: Nicholas Renotte
Views: 3,413
Keywords: face recognition, facial recognition, face recognition python, opencv face recognition, deep learning, face recognition app, opencv face detection, programming, python, kivy
Id: 43eAC1LMrsU
Length: 91min 13sec (5473 seconds)
Published: Sat Oct 23 2021