Build Real Time Face Detection With JavaScript

Captions
Do you like robots? Because I sure do. So in today's video I'm going to be using artificial intelligence and face recognition to determine my emotions through my webcam in real time. Also, if you enjoyed this video, make sure to let me know down below so that I can create a part two where I use face recognition to determine who is in a picture and display their name next to their face. Let's get started.

Now, in order to accomplish this face recognition we're going to be using a library called face-api.js, which you can see over here on the right. This is a wrapper around TensorFlow, one of the most popular machine learning libraries out there, and it's going to allow us to do real-time face detection in the browser, and it's really easy to get set up with. The first thing you need to do is download the library; there's going to be a link in the description below where you can download it from. You're also going to need to download the models that you'll be using for your face detection. When we get to the models section I'm going to list off all the models that we use so you can download them all, and all of the models are also available on my GitHub in the source code for this video, so you can get them there as well.

To get started we just want to create a blank HTML page, and inside of it we want to put a video element, because this is where we'll render our webcam and do our real-time face detection. It's just going to have an ID of video so we can easily access it in the JavaScript, and we're going to give it a width. You can use whatever width and height you want, but in our case I'm going to use a width of 720 and a height of 560. You want to make sure you set it to autoplay and mute it, because we don't actually want any sound, and you have to make sure you specify a width and a height on here, otherwise the face detection will not be able to draw properly onto your webcam.

Now once we have that done, I'm just going to add a little bit of basic styling, so we can come in here and add a style tag. Really all I'm going to do is style the body: I'm going to give it no margin, so make that zero, and the same thing with padding, we're going to do zero. Essentially the reason I'm doing this styling is just so the webcam is centered on the screen, so I'm going to set the width to 100 view-widths, the height to 100 view-heights, and I'm going to change it to display flex, justify the content in the center and align the items in the center, and this is just going to put the webcam in the very center of the screen; a sketch of that markup is just below.
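Here's a rough sketch of the index.html being described. The two script tags and the canvas rule are added a little later in the video; the file name face-api.min.js is an assumption, while script.js and the /models folder come from the video itself.

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- face-api must load before script.js; defer preserves document order -->
    <script defer src="face-api.min.js"></script>
    <script defer src="script.js"></script>
    <style>
      body {
        margin: 0;
        padding: 0;
        width: 100vw;
        height: 100vh;
        display: flex;
        justify-content: center;
        align-items: center;
      }
      canvas {
        /* added later in the video so the canvas sits on top of the video */
        position: absolute;
      }
    </style>
  </head>
  <body>
    <video id="video" width="720" height="560" autoplay muted></video>
  </body>
</html>
```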
So we can just open this up with Live Server to see what we're working with, and here we go, it's loading up. Obviously nothing is going to render for our video yet, because we haven't actually hooked our webcam up to the video element, so let's go into our script and work on doing that.

First we need to get that video element, so we're going to create a variable here called video, and we can say document.getElementById, and we gave it an ID of video, so this is going to be our video tag. Then we can create a function called startVideo; we're going to use this to hook our webcam up to our video element. In order to get the webcam we need to use navigator.getUserMedia, and this takes an object as the first parameter that just says what we want to get. We want video, so we're going to say video is the key with an empty object as the value. Then we have a callback here which receives the stream; this is essentially what's coming from our webcam, and whatever comes from our webcam we want to set as the source of our video, so we'll say video.srcObject is going to be equal to that stream. Lastly we have an error function, so if we do get an error we just want to log it, so we'll say console.log(err), and let's make sure it's an error log instead of a normal log. Now we can just call that function by saying startVideo.

Let's go into our HTML and make sure to include that script, so up here we're going to include our script tag, make sure we set it to defer, and we want the source to be our script.js. Now if we save that you should see that it accesses the webcam; it loads up right here, and this is just a live preview of my webcam. It may be slightly delayed, but that's just because it's going through the browser as opposed to going directly into my recording software.

Now that we have that done, let's also include our face API library while we're at it, so we can add another script tag; we want to defer this one as well, and we want the face API. We want to make sure this is defined above our normal script so that it gets loaded before we actually run our script. Now we can work on actually setting the script up to detect my face, as opposed to just rendering my video.

In order to do that we need to load all of the different models, so let's do that at the very top. This is all done asynchronously, so we want to use Promise.all, which is going to run all of these asynchronous calls in parallel, which will make it much quicker to execute, and in here we just pass an array of all of our promises. What you do is call faceapi.nets, and then this is where you call all the different models you want. In our case we're using the tiny face detector; this is just like any normal face detector, but it's smaller and quicker, so it'll run in real time in the browser instead of being very slow. Then we want to say loadFromUri, and we have a models folder with all of our different models, so we're going to just pass that in as /models. We want to do this a couple of times for all of our different models, so let's copy that down four times. We have our tinyFaceDetector; the next one is our faceLandmark68Net, and this is going to be able to register the different parts of my face, so my mouth, my eyes, my nose, etc. The next thing we're going to have is faceRecognitionNet, and this is just going to allow the API to recognize where my face is, the box around it. And lastly we're going to have faceExpressionNet, just like that, and what this is going to allow it to do is recognize when I'm smiling, frowning, happy, sad, etc.

Now what we want to do is take that Promise.all and give it a .then, and after we're done with all of this we want to call our startVideo, so we can remove the call we had down here. Once we're done loading all of our models, it's going to start the video over here on the side. It may take a little bit longer, because loading these models does take a little bit of time, but it's fairly quick.
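A sketch of the top of script.js as described so far: grab the video element, load the four models in parallel, then start the webcam. The /models path is the one mentioned in the video, and navigator.getUserMedia is the (now-deprecated) call used here.

```js
// script.js (sketch of the portion described above)
const video = document.getElementById('video')

// Load all four models in parallel, then start the webcam
Promise.all([
  faceapi.nets.tinyFaceDetector.loadFromUri('/models'),   // small, fast detector
  faceapi.nets.faceLandmark68Net.loadFromUri('/models'),  // mouth, eyes, nose, etc.
  faceapi.nets.faceRecognitionNet.loadFromUri('/models'), // face recognition network
  faceapi.nets.faceExpressionNet.loadFromUri('/models')   // smiling, frowning, etc.
]).then(startVideo)

function startVideo() {
  // navigator.getUserMedia is deprecated; newer code would use
  // navigator.mediaDevices.getUserMedia({ video: true }).then(...)
  navigator.getUserMedia(
    { video: {} },
    stream => video.srcObject = stream,
    err => console.error(err)
  )
}
```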
Then we can actually set up an event listener, so we can say video.addEventListener, and we want to add an event listener for when the video starts playing. When the video starts playing, we're going to have our code down here to recognize our face. For now let's just do a simple console.log; we can come over here and inspect, and as soon as the video starts you see we get that log down at the bottom, which means everything's working so far.

Now we can work on actually setting up the face detection, and this is actually incredibly straightforward to do. What we want to do is a setInterval, so that we can run the code inside of it multiple times in a row, and we want to make sure it's an asynchronous function, because this is an asynchronous library. All we need to do inside of here is get our detections, so we're going to say detections is going to be equal to awaiting faceapi.detectAllFaces, and this is going to get all the faces inside of the webcam image every single time this gets called, which we'll do, for example, every 100 milliseconds. What we pass in here is the element, which in our case is the video element, as well as what type of detector we're going to use to detect the faces. In our case we're using the tiny face detector, so we'll say new faceapi.TinyFaceDetectorOptions, and this is just going to be empty because we don't want to pass any custom options; the defaults work perfectly fine for our scenario. Then we also say what we want to detect these faces with, so we say .withFaceLandmarks, and this is for when we actually draw the face on the screen: the face landmarks are the different dots and lines that you'll see on my face. And then we can say .withFaceExpressions, and this is going to be able to determine whether I'm happy, sad, angry, upset, whatever, based on just the image it gets of my face.

Now we can just log out these detections, so we'll say console.log(detections) just so we can see if this is working, and we can go inspect over here. You can see that we're getting an error immediately; it's saying TinyFaceDetectorOptions is not a constructor, and it's super easy to fix: this just needs to be a capital T over here. We can save that, and now we should actually get our detections showing up over here. There we go: you can see we have a bunch of objects in here, and we just have one element in the array because there's only one face currently, and this is all the different detection information, expressions, etc.

What we want to do is actually display this on the screen, and to do that we're going to be using a canvas element. Inside of our index.html we just want to style this canvas: it has to be positioned absolute so that it sits directly over top of our video element. We don't actually need to put the canvas element inside of our HTML, because we can do that in our JavaScript, so let's do that now. We can just say canvas is going to be equal to faceapi.createCanvasFromMedia, and we want to create it from our video element. Then we just want to add that canvas to the screen, so we're going to say document.body.append, and this is going to put it at the very end of our page, and since it's positioned absolutely it doesn't really matter where it goes.
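At this point the 'play' listener looks roughly like this: the canvas created from the video and appended to the body, and the interval fetching and logging detections every 100 milliseconds (the drawing comes next).

```js
video.addEventListener('play', () => {
  // Create a canvas sized from the video and drop it onto the page;
  // the CSS above positions it absolutely, on top of the video
  const canvas = faceapi.createCanvasFromMedia(video)
  document.body.append(canvas)

  setInterval(async () => {
    // Note the capital T: TinyFaceDetectorOptions is a constructor
    const detections = await faceapi
      .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks()
      .withFaceExpressions()
    console.log(detections) // one array entry per face currently in frame
  }, 100)
})
```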
Then what we want to do is get the display size of our current video, so that our canvas can be sized perfectly over the video. This is just going to be an object with a width, which is video.width, and a height property, which is video.height. Now that we have that, we can actually work on displaying our elements inside the canvas. We want to take our detections, which we have right here, and create a new variable called resizedDetections; we set this to faceapi.resizeResults, and we pass in the detections that we have as well as the display size. This is just going to make it so that the boxes that show up around my face are properly sized for the video element we're using over here, as well as for our canvas. Let me make this a little wider so it's easier to see.

Now all we need to do is actually draw this. We can say faceapi.draw.drawDetections, and what you do is pass in the canvas we want to draw onto as well as our resized detections, making sure I spell faceapi correctly over here. Let's save that and see how it works. It's going to load in my face over here, and as you can see it's already got a problem: our canvas isn't being cleared, and we have our video element being shown up over top, which is definitely not what we want. The first thing we can do to fix this is to actually clear our canvas before we draw onto it, so let's remove this console.log, and instead, right after we get our detections, right after we resize everything, and right before we draw, we want to take our canvas and get its 2D context, which is just a two-dimensional canvas, and clear it. So we'll say clearRect, and we want to clear from zero, zero, with the canvas's width and the canvas's height, which is going to clear the entire canvas. We also want to make sure we match our canvas to this display size, so we can say faceapi.matchDimensions, and we pass in our canvas as well as our display size. Now if we save that we should get much better detection over on the side, and if we just give it a second you can see that it's detecting my face and following me around. It also has a number, which is how sure it is, as a percentage: it's about 90 percent sure that this is a face, which is perfect.

So now we can actually start drawing even more details. We can go into faceapi.draw again, and this time we can draw the landmarks, so we can say drawFaceLandmarks; this is going to take the canvas as well as the resized results. If we save that and let it refresh over here, it should draw some lines and dots on my face based on where the landmarks in my face are. I notice that's actually not working, and that's just because this should be called resizedDetections, not resizedResults, so let's save that, refresh it again and let it do its work. If we wait a second you'll see that it now has all the different face detection: it knows where my eyes are, my eyebrows, my actual face shape, my mouth, nose, etc.

And lastly, if we want to determine whether I'm happy, sad or whatnot based on this image, we can go into faceapi.draw here one more time, and this time we want to draw the face expressions, so we'll say drawFaceExpressions; it takes the canvas as well as our resized detections again. The completed listener is sketched below.
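Pulling it all together, here's roughly what the finished 'play' listener looks like once the logging is replaced by the drawing calls described above; displaySize, matchDimensions, resizeResults, clearRect and the three draw calls are the ones named in the video.

```js
video.addEventListener('play', () => {
  const canvas = faceapi.createCanvasFromMedia(video)
  document.body.append(canvas)

  // Size the canvas to match the video element exactly
  const displaySize = { width: video.width, height: video.height }
  faceapi.matchDimensions(canvas, displaySize)

  setInterval(async () => {
    const detections = await faceapi
      .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks()
      .withFaceExpressions()

    // Scale the boxes and landmarks to the on-screen size of the video
    const resizedDetections = faceapi.resizeResults(detections, displaySize)

    // Clear the previous frame before drawing the new one
    canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height)

    faceapi.draw.drawDetections(canvas, resizedDetections)      // box + confidence score
    faceapi.draw.drawFaceLandmarks(canvas, resizedDetections)   // dots and lines on the face
    faceapi.draw.drawFaceExpressions(canvas, resizedDetections) // emotion labels
  }, 100)
})
```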
Now if we save that, it'll be able to determine my emotion based on just my image alone, so you can see that it's about a hundred percent sure that I'm neutral, but if I make a surprised face, for example, it says it's 100 percent sure I'm surprised, and if I look angry it'll say I'm angry, and so on, which is really impressive.

And that's all it takes to create this simple face detection algorithm. If you want more artificial intelligence and face detection, let me know down in the comments below and I'll definitely make videos like that. Also check out some of my other project-based videos linked over here, and subscribe to the channel for more videos just like this. Thank you very much for watching, and have a good day.
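One small aside that isn't shown in the video: each entry in the detections array carries an expressions object with one probability per emotion (that's what the drawn labels come from), so you can also read the top emotion as plain text. A rough sketch, meant to run inside the interval above after detections is awaited:

```js
// Not from the video: log the most likely expression for the first detected face,
// assuming `detections` from the setInterval above is in scope.
const [firstFace] = detections
if (firstFace) {
  const [expression, probability] = Object.entries(firstFace.expressions)
    .sort(([, a], [, b]) => b - a)[0]
  console.log(`${expression}: ${Math.round(probability * 100)}%`)
}
```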
Info
Channel: Web Dev Simplified
Views: 1,095,852
Keywords: webdevsimplified, face detection, face detection javascript, real time face detection, face recognition, face recognition tutorial, face recognition javascript, face detection app, face recognition app, real time face recognition, webcam face detection, webcam face recognition, html face recognition, html face detection, ai face detection, easy face detection, easy face recognition, face detection ai, javascript ai, javascript, machine learning, artificial intelligence
Id: CVClHLwv-4I
Length: 12min 41sec (761 seconds)
Published: Tue May 21 2019