Face Recognition App in Flutter Using TensorFlow Lite & Google ML Kit

Video Statistics and Information

Captions
Hey, welcome everyone! I'm Yash Makan, and in this video we're going to create a face recognition app in Flutter. There will be a login screen and a home screen. The login screen has a Register button and a Login button: the user taps Register to store their face and their name, and on Login, if the face is successfully recognized, the user is navigated to the home page where their name is shown. To build the face recognition for this mini app we're going to use Google ML Kit and TensorFlow Lite, so even if your phone doesn't support Face ID, the app will still work fine. If you're interested in face recognition and Flutter, make sure to subscribe to my YouTube channel, as I'll be uploading more videos like this one. Without any further ado, let's begin. [Music]

So we're at the laptop; let's start. First, open Android Studio, VS Code, or any other editor of your choice and create a Flutter project; in Android Studio you simply go to File > New and create a new Flutter project. That was step one: create a Flutter project.

In step two, open pubspec.yaml and add these dependencies: hive_flutter for storing the data (the username and the predicted embedding), camera to capture the image for the face recognition, tflite_flutter to compare two images (the one stored in the local DB against the one we just captured), google_ml_kit to detect faces in an image, plus the image package and the lottie package for some animations. Those are the packages needed for this video. Instead of Hive you can use any other storage package, such as shared_preferences or a secure preferences package, but for the sake of this video I'm going to use Hive.

Then go inside android/app/src/main (I'll put the source code of this video in the description box, so you can check that out as well) and paste the jniLibs folder from the source code right there.

In step three, create an assets folder and paste the mobilefacenet.tflite model into it. We'll use this TensorFlow Lite model to compare two images and check whether they show the same person. I've also added a loading JSON for the Lottie animation, but that's not required.

Now, inside the lib folder, we can start writing our code. In my main file, as you can see, I call WidgetsFlutterBinding.ensureInitialized() because I'm using the camera plugin. By default the main function only has runApp with our widget, but above that I initialize Hive and my Hive boxes, and I fetch the available cameras on the phone.
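To make that concrete, here's a minimal sketch of what such a main.dart could look like. This is my reconstruction, not the video's exact code: the global cameras list, the box name 'userDetailsBox', and the placeholder MyApp home are all assumptions.

```dart
import 'package:camera/camera.dart';
import 'package:flutter/material.dart';
import 'package:hive_flutter/hive_flutter.dart';

// Global camera list, filled once in main() and used by the scan screen.
late List<CameraDescription> cameras;

Future<void> main() async {
  // Required because we call plugin code (camera, Hive) before runApp.
  WidgetsFlutterBinding.ensureInitialized();

  // Initialize Hive for Flutter and open the box that will hold the
  // registered name and face embedding (box name is an assumption).
  await Hive.initFlutter();
  await Hive.openBox('userDetailsBox');

  // Enumerate the device cameras; on most phones index 1 is the front one.
  cameras = await availableCameras();

  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    // The video routes to its LoginPage here; a placeholder keeps this runnable.
    return const MaterialApp(home: Scaffold());
  }
}
```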
HiveBoxes is nothing but a class that I initialized inside utils. If you look at the folder structure inside lib, I have a models directory where I created a user model. It's a simple model with two fields: name and array. I'll explain in a minute what array means, but basically you can think of it as a list of numbers derived from the image data, which we'll store in the local database, while name is the name of the person who is logging in.

Next we have the pages directory, where all of our UI lives: the login page, the home page, and our face recognition page. We'll come back to the pages directory later.

Then we have the utils directory. Inside it there's a local_db.dart where I've set up all the Hive boxes I'm going to use: I created a Hive box called userDetails, initialized it as userDetailsBox, and wrote functions for initializing and clearing these boxes. After that I created a LocalDB class which I use to get the user, get the user's name, get the user's array, or set the user details; I've added all of those functions. If you'd like a dedicated video on how to use Hive in Flutter, comment below and I can make one on that topic.

So local_db is covered. We also have utils.dart, where I added a printIfDebug function that I use inside the app; you could use print as well, but that's a topic for another video.

Inside common widgets I created a CWidgets class containing a custom extended button that takes text, context, onTap, the width of the button, and whether it is clickable or not. I assign the width, wrap everything in a GestureDetector with opacity 1 if it is clickable and 0.4 otherwise, and do the basic UI styling: change the border radius, set the color to blue, and center the text that is passed in as a parameter. We'll use this widget throughout the app.

So that's it: we've covered main.dart, utils, widgets, and the models.
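Here's a minimal sketch of the user model and the LocalDB helper described above. Again a reconstruction: the box name 'userDetailsBox' and the keys 'name'/'array' are assumptions, not necessarily the video's identifiers.

```dart
import 'package:hive/hive.dart';

// Simple user model: the person's name plus the face embedding ("array").
class UserModel {
  final String name;
  final List<double> array;
  UserModel({required this.name, required this.array});
}

class LocalDB {
  static const String boxName = 'userDetailsBox';
  static const String _nameKey = 'name';
  static const String _arrayKey = 'array';

  static Box get _box => Hive.box(boxName);

  static String? getUserName() => _box.get(_nameKey) as String?;

  static List<double>? getUserArray() =>
      (_box.get(_arrayKey) as List?)?.cast<double>();

  // Returns null until someone has registered.
  static UserModel? getUser() {
    final name = getUserName();
    final array = getUserArray();
    if (name == null || array == null) return null;
    return UserModel(name: name, array: array);
  }

  static Future<void> setUserDetails(String name, List<double> array) async {
    await _box.put(_nameKey, name);
    await _box.put(_arrayKey, array);
  }

  static Future<void> clearAll() => _box.clear();
}
```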
Now let's go to the main directory, which is pages. If you look at main.dart, inside the MaterialApp you can see it redirects to the login page, so initially our user lands on the login screen. Inside the login page I've added an initState (importing utils there as well, though that's not mandatory) where I check whether we already have a registered user, i.e. whether the user is not null, and I just print the name of the user.

The other part is the face authentication. Inside the Scaffold I've created an app bar, and in the body a centered Column with two buttons: Register and Login. Register stores my name and my image in the local DB, so that later, when I log in, the app can recognize and match the two images, detect whether they are the same, and take me to the home page; otherwise I come back to the login page. I've also created a buildButton widget, a small custom ElevatedButton wrapper; it's the simple UI we're using here.

Now, in onClickedRegister we simply navigate to the face scan screen. On login we also navigate to the face scan screen, but we pass the user as well: I call getUser from the LocalDB class, which returns the user that was saved during registration. (There's a rough sketch of these two handlers at the end of this section.)

So let's go to the face scan screen. It's basically the camera page. Inside the face_recognition directory we have a camera page, an image converter, and an ml_service.dart; these are the three important files we'll talk about in this video. As you can see, I've created a list of cameras, which we populate inside main.dart, then a StatefulWidget called FaceScanScreen that takes an optional UserModel. Inside the stateful widget we have a text field controller and a camera controller.

Let's first run the app and check the UI, because it won't be very clear if I just explain the UI of this whole screen in the abstract; let me show you a quick demo of how the app looks. I'll connect my phone to the PC: as you can see, the top right said "no device selected", but now my device is connected, so I can open the emulator like this and run the app. I'm going to speed this part up a little so you don't waste your time. All right, as you can see the app built successfully and is now installing on my Android phone.

I also forgot to mention one more setup step: besides adding the dependencies, the MobileFaceNet model, and the jniLibs folder, you have to change your minimum SDK version to at least 21, because the camera package requires minSdkVersion 21; otherwise you'll get an error.

Cool. As you can see, the UI is pretty simple: it has a Register button and a Login button. If I tap Register, you can see the camera feed from my phone, with a text field above it where I can enter my name; then I tap Capture and it registers my face and my name as well.
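As referenced above, here's a rough sketch of the two button handlers. It builds on the UserModel/LocalDB sketches earlier; the FaceScanScreen stub only mirrors the constructor shape described in the video and is otherwise a placeholder.

```dart
import 'package:flutter/material.dart';

// Stub mirroring the widget described later; the real screen hosts the
// camera preview. UserModel and LocalDB come from the earlier sketch.
class FaceScanScreen extends StatelessWidget {
  final UserModel? user;
  const FaceScanScreen({super.key, this.user});

  @override
  Widget build(BuildContext context) => const Scaffold();
}

// Register: open the scan screen with no user, so predict() will store a
// new name + embedding.
void onClickedRegister(BuildContext context) {
  Navigator.of(context).push(
    MaterialPageRoute(builder: (_) => const FaceScanScreen()),
  );
}

// Login: pass the stored user so the captured face can be compared
// against the saved embedding.
void onClickedLogin(BuildContext context) {
  final user = LocalDB.getUser(); // null until someone has registered
  Navigator.of(context).push(
    MaterialPageRoute(builder: (_) => FaceScanScreen(user: user)),
  );
}
```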
So that's Register; with Login we simply tap Capture and it automatically detects our name. Cool, so we have these two buttons inside our app.

The Register screen's UI is pretty simple, so let me first walk through the body. Inside the build method we have a GestureDetector, and inside that a Scaffold and a Stack. At the bottom of the stack there's a SizedBox with the full width and height of the mobile screen, and inside it the camera preview, filling the complete available screen. Then this Lottie animation shows here, wrapped inside an Expanded widget so it takes up all the remaining space; then the text field, where I pass the controller (this is where the name is typed and stored); and then a basic button. I also have a flash button: as you can see it turns the flash on, and you can switch it off again. Finally we can tap the Capture button; all of the face recognition functionality lives inside its onTap. That's pretty much the UI for the camera, so let me put my phone down.

Now let's talk about the Capture button, but first let me go over the variables we've initialized here. First, the text field controller. Then the camera controller, which drives the camera preview. Then a flash boolean to track whether the flash is on or off, initially false. Then isControllerInitialized, also initially false: once the camera is initialized, we set it to true. Then our face detector, which comes from Google ML Kit (the ML vision API gives us a FaceDetector class). Then an MLService object for ml_service.dart, which we'll talk about in a bit. And finally a list of faces: all the faces detected on tapping the Capture button are stored in this list. Those are all the variables inside our face recognition screen, the camera page.

Now let me minimize these functions. In initState we initialize the camera controller and pass cameras[1]; index 1 is the selfie camera on my phone, and a face recognition app will naturally use the selfie camera to detect the user's face. We also set the resolution preset to high. Then we initialize the camera using the initializeCamera function, which simply calls cameraController.initialize(), sets the flash mode to off initially, and sets isControllerInitialized to true. After initializing the camera we initialize our face detector: we use GoogleMlKit.vision.faceDetector and pass FaceDetectorOptions with mode set to accurate, so that we can detect faces accurately. (A minimal sketch of this setup follows.)
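A minimal sketch of that camera and detector setup, assuming the google_ml_kit 0.7-era API that matches the video's timeframe (newer versions split the package and rename the option to performanceMode):

```dart
import 'package:camera/camera.dart';
import 'package:google_ml_kit/google_ml_kit.dart';

late CameraController cameraController;
late FaceDetector faceDetector;
bool flash = false;
bool isControllerInitialized = false;

Future<void> initializeCamera(List<CameraDescription> cameras) async {
  // cameras[1] is assumed to be the front (selfie) camera, as in the video.
  cameraController = CameraController(cameras[1], ResolutionPreset.high);
  await cameraController.initialize();
  await cameraController.setFlashMode(FlashMode.off); // flash off initially
  isControllerInitialized = true;

  // Accurate mode trades a bit of speed for better detection quality.
  faceDetector = GoogleMlKit.vision.faceDetector(
    FaceDetectorOptions(mode: FaceDetectorMode.accurate),
  );
}
```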
The face detector is for detecting the face: if I pick up my phone again, we can use the face detector to find the face area, which it returns as a square bounding box; that's what we're using Google ML Kit for here. Let me close the camera for now.

We've already seen the UI inside the Scaffold, so let's look at the functionality of the flash button first. For the flash button I call setState: if flash is true it becomes false and vice versa, and then I change the flash mode via the camera controller as well.

Now, what happens on tapping the Capture button: I set canProcess to false and then start an image stream, which produces a stream of frames. Whenever Capture is tapped, we take the first frame displayed on the screen: if canProcess is already true we return, exiting the function; otherwise we set canProcess to true and go on to detect our frame. That's where we call the detectFacesFromImage function, passing the image we acquired after tapping Capture.

Inside detectFacesFromImage we create the input image metadata (an InputImageData) and then build an input image from the bytes of the frame we just captured: we take the bytes from image.planes[0] and also pass the image metadata we just initialized. After that we use the face detector that we initialized in initState and call processImage, so the face detector processes the image and computes the faces. The result is nothing but a list of faces, and if the result is not empty I assign it to the facesDetected variable. We're using await here, so once this function completes we check whether facesDetected is not empty; if we've successfully detected a face on screen, we go inside the if statement and call our MLService (that's ml_service.dart) and its predict function.

What do we pass to predict? We pass the complete image that we have, and the first detected face: if the screen has two faces, one here and another here, we fetch the first entry in the list and pass it to predict. We also pass a boolean: if widget.user is not null, it's a login case; otherwise it's a register case. In other words, if the user is not null we have to log in, and if it is null we have to register. Likewise I pass the name of that particular person: if I'm passing a user model, I take the name from there; otherwise we take it from the text controller. The predict function then returns the user; a condensed sketch of this capture-and-detect flow follows.
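This builds on the setup sketch above. The rotation and image format are device-dependent assumptions (NV21 is the usual Android camera-stream format), and the enum spellings follow the google_ml_kit 0.7-era API; newer versions rename them.

```dart
import 'dart:ui' show Size;
import 'package:camera/camera.dart';
import 'package:google_ml_kit/google_ml_kit.dart';

bool canProcess = false;
List<Face> facesDetected = [];

// Take the first frame from the image stream and run face detection on it.
Future<void> onCaptureTap() async {
  canProcess = false;
  await cameraController.startImageStream((CameraImage image) async {
    if (canProcess) return; // only process the first frame
    canProcess = true;
    await detectFacesFromImage(image);
    await cameraController.stopImageStream();
    if (facesDetected.isNotEmpty) {
      // Hand the frame plus facesDetected[0] to MLService.predict;
      // widget.user == null means register, otherwise login (see below).
    }
  });
}

Future<void> detectFacesFromImage(CameraImage image) async {
  // Describe the raw frame so ML Kit can interpret the bytes.
  final inputImageData = InputImageData(
    size: Size(image.width.toDouble(), image.height.toDouble()),
    imageRotation: InputImageRotation.Rotation_270deg, // device dependent
    inputImageFormat: InputImageFormat.NV21, // Android camera default
    planeData: image.planes
        .map((plane) => InputImagePlaneMetadata(
              bytesPerRow: plane.bytesPerRow,
              height: plane.height,
              width: plane.width,
            ))
        .toList(),
  );
  final inputImage = InputImage.fromBytes(
    bytes: image.planes[0].bytes,
    inputImageData: inputImageData,
  );
  final List<Face> result = await faceDetector.processImage(inputImage);
  if (result.isNotEmpty) facesDetected = result;
}
```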
Now, if widget.user is null, that means this was the register case, so we simply pop back and print that the user registered successfully. Otherwise, in the login case, if the returned user is null we report an unknown user, meaning the face was not recognized; and if the user is recognized, we navigate them to the home page.

So let's talk about what this MLService is and what the predict function does. Inside the MLService class, as you can see, we have an interpreter and a predictedArray. The Interpreter comes from the tflite_flutter package; that's where we get it. As for predictedArray, let me tell you how we differentiate between images. Say I register inside the app, capture my image, and set the name to A: we then store in the database the name A together with a list of numbers. We convert the photo captured by tapping the button into an array so that we can store that array in our local DB and compare arrays later. For example, I set my name to A and stored array 1; if I now go to the login page and capture array 2, we check whether array 2 and array 1 are the same, or at least close; if they are, we go to the home page, and if they differ too much, we go back to the login page.

That's exactly what we do inside the predict function. It receives the camera image and the face. First we preprocess: there's a preprocess function that takes the camera image and the face and computes an input list. Then I reshape the input list we just got from preprocess and create an output list as well. After creating the output list I initialize the interpreter: if the platform is Android, I create a delegate variable and initialize the interpreter with Interpreter.fromAsset, passing the model we pasted into the assets folder, along with the interpreter options. In the interpreter options I pass the delegate, and on Android the delegate is a GpuDelegateV2. Those are the options we pass for the interpreter, and then the interpreter is created; if some error occurs we simply print "Failed to load model". Since we use the await keyword, after initializeInterpreter finishes we can call interpreter.run(input, output) and read the output list from the interpreter. That predicted array is nothing but an embedding computed from the image, which we can use or store inside our local DB.

So if loginUser is false, we have to register that particular user: we set the user details with the name we passed and the array. Otherwise we have to check whether the predictedArray and the user.array stored in the local DB match, and for that we use the Euclidean distance function that we created.
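Here's a minimal sketch of that MLService logic plus the Euclidean-distance comparison, using the tflite_flutter API. I've simplified the signature: preprocessing (cropping the face and building the input tensor) is assumed to happen in the caller, LocalDB comes from the earlier sketch, and the 192-float embedding size and 1.5 threshold are assumptions to be tuned against the actual MobileFaceNet model.

```dart
import 'dart:io';
import 'dart:math';
import 'package:tflite_flutter/tflite_flutter.dart';

class MLService {
  Interpreter? interpreter;
  List<double> predictedArray = [];

  Future<void> initializeInterpreter() async {
    try {
      final options = InterpreterOptions();
      if (Platform.isAndroid) {
        // GPU delegate speeds up inference on Android.
        options.addDelegate(GpuDelegateV2());
      }
      interpreter = await Interpreter.fromAsset(
        'mobilefacenet.tflite', // the model pasted into assets/
        options: options,
      );
    } catch (e) {
      print('Failed to load model: $e');
    }
  }

  // `input` is the preprocessed, reshaped face tensor (the video builds it
  // from the camera frame and the detected face). Returns true when the
  // user is registered or recognized.
  Future<bool> predict(
    List<dynamic> input, {
    required bool loginUser,
    required String name,
    List<double>? storedArray,
  }) async {
    await initializeInterpreter();

    // One batch of a 192-float embedding (size is an assumption for the
    // common MobileFaceNet variant).
    final output = List.generate(1, (_) => List.filled(192, 0.0));
    interpreter!.run(input, output);
    predictedArray = List<double>.from(output[0]);

    if (!loginUser) {
      // Register: store the name and the embedding in the local DB.
      await LocalDB.setUserDetails(name, predictedArray);
      return true;
    }

    // Login: accept only if the stored and predicted embeddings are close.
    const threshold = 1.5; // assumption; tune empirically
    return storedArray != null &&
        euclideanDistance(predictedArray, storedArray) <= threshold;
  }
}

// Euclidean distance between two embeddings: sqrt(sum of (a_i - b_i)^2).
// The smaller the distance, the more similar the two faces.
double euclideanDistance(List<double> a, List<double> b) {
  assert(a.length == b.length);
  var sum = 0.0;
  for (var i = 0; i < a.length; i++) {
    sum += pow(a[i] - b[i], 2);
  }
  return sqrt(sum);
}
```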
You can read about Euclidean distance on the internet as well, but let me show you the function right here. We take the predictedArray and the array stored in the DB, and apply the Euclidean distance algorithm; you can look up the formula online too. What it basically does is compute the distance between the two lists we have, list one and list two, i.e. how close or how related they are. If that distance is less than or equal to the threshold value and smaller than the minimum distance so far, both conditions are fulfilled, we can say the user is successfully recognized, and we return the user; otherwise we return null. And as you saw, inside our camera page we get the user back from predict and act on the result.

Inside the home page, I simply show the username in the app bar and have a very simple UI with a log-out button that navigates me back to the login page.

So that's the whole explanation of the face recognition app in Flutter; you can find the source code in the description box as well. Let me show you a quick demo of how the app works. If I re-run the app, you can see we have Register, so let me go to Register and pick up my phone. Now I can go to the text field, type my name, Yash Makan, and hit the Capture button. As you can see, it automatically pops back, and here you can see "user registered successfully", which means the user is registered. Even if I re-run the app, the initState in our login page prints the username, and as you can see we get Yash Makan here as well. Now if I go to the login page, I don't have to enter any name; I simply tap Capture, and as you can see I'm redirected to the home page, where it greets me with my name, and you can log out from here as well, after which you're redirected back to the register and login screen.

So that's it for this video; I wanted to show you how to create a face recognition app in Flutter. Well, well, well, you've watched the complete video, and let me tell you one thing: you are amazing. Make sure to type "I watched till the end" in the comments section below, and I'll give you a shoutout by name in my next video. I hope to see you guys again in my next video; till then, peace out. [Outro music]
Info
Channel: Yash Makan
Views: 41,382
Keywords: face recognition, facial recognition, face recognition flutter, flutter, flutter course, flutter tutorial for beginners, flutter tutorial, flutter beginners, flutter beginner, flutter tensorflow, flutter tensorflow lite, flutter tensorflow tutorial, tensorflow, machine learning, machine learning tutorial, face authentication, face authentication flutter, flutter ml, flutter ml kit, firebase ml kit, flutter camera, flutter camera tutorial, flutter camera app
Id: c2JNZ8nxCCU
Length: 33min 12sec (1992 seconds)
Published: Sun Jun 12 2022