Facial Emotion Recognition using Keras Tensorflow | Deep Learning | Python

Video Statistics and Information

Captions
All right guys, this is Ashwin here. In this video we are going to do a project on the facial expression dataset, also called facial emotion recognition. This is a popular dataset that contains seven different emotions with a lot of images, so we have seven classes, and you can see the number of files in each class. Only disgust has fewer files — you could use some class balancing with SMOTE to make the class distribution uniform — but the other classes seem well balanced, at around 4,000 each, so that is fine for us. We also have test (validation) images with the corresponding labels. We are going to use this dataset to build a multi-class image classification model with a CNN.

Now that we've explored the dataset, let's jump into the code. This is the notebook I have created; if you just go to the dataset, you only have to create a new notebook and the data will be loaded automatically, so you can use its path directly to consume the data. The first step will be importing modules. While the session is starting, we'll import all the basic modules: import pandas as pd, import numpy as np, import os for managing the files, import matplotlib.pyplot as plt, and I'll also import seaborn as sns. We also import warnings and set warnings.filterwarnings('ignore'), plus %matplotlib inline — these are the basic modules we import in most projects. Then import tensorflow as tf, and from keras.models import Sequential — this is for the model creation, since we are building a deep learning model with neural networks. One more thing we have to import is the layers: from keras.layers import Dense, Conv2D — that's the convolutional 2D layer — and Dropout.
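Collected into one cell, the imports described above might look like this — a sketch using the tensorflow.keras import path, which tends to be the safer spelling on current TensorFlow versions (the %matplotlib inline magic is left as a comment since it only works inside a notebook):

```python
import os
import warnings

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Dropout, Flatten, MaxPooling2D
from tqdm.notebook import tqdm

warnings.filterwarnings('ignore')
# %matplotlib inline   # notebook-only magic; uncomment when running in Jupyter
```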
We also need Flatten and MaxPooling2D — these are the required layers for us — plus one more module: from tqdm.notebook import tqdm. Run this; all the modules imported without any error, so we move on to the next step: loading the dataset. We have to copy the path — here in the data panel, just expand it until the train folder and copy the file path — and initialize train_dir with it; similarly for test_dir, so we have both train and test, and also add the trailing slash. Since we are using neural networks, we can also enable the GPU; it will reinitialize the session. After this we have to define a function, because we have to convert these folders into full image paths with the corresponding labels. So def load_dataset(directory): — we pass the directory because we have two of them, train and test (I'll just re-run the imports). Then image_paths = [] and labels = [] — we're initializing two lists. Now I'm going to iterate over the directory: if you go further into it you'll find the class labels, and inside those, the image file names, so we walk through that structure. for label in os.listdir(directory): — now we have the label (when one label finishes, we could print that it's completed, maybe at the end). After we get the label we iterate further for the file names: for filename in os.listdir(directory + label): — I'm concatenating the train directory with the corresponding label, so inside each label folder it goes deeper to get the file names. After that we combine all of these:
image_path = os.path.join(directory, label, filename), then image_paths.append(image_path) and labels.append(label). So these are the two things — the full image path as the input and the label as the output — and we append everything into the lists. Finally, return image_paths, labels. This is the function that converts the folder structure into tabular form; run it. Now we initialize the data frame: train = pd.DataFrame() (I'll also comment this: convert into dataframe). After creating the data frame, train['image'], train['label'] = load_dataset(train_dir) — we pass the train directory, the function returns the image paths and labels, and we directly create a column for each of them. After that we shuffle the dataset: train = train.sample(frac=1), and when you do that you also have to reset the index so we can iterate: .reset_index(drop=True). Finally display the train data frame and run this. (It first said the name was not defined — the function is called load_dataset, not load_images — so run it again.) Now you can see the whole image path for each image along with its label, and the entire dataset is shuffled, so we get samples from all the different labels. We do the same for test — just paste everything, but we don't need to shuffle the test dataset; change it to the test directory and test.head(), then run it. Everything is loaded for the test data, and we'll use it for validation.

Now that the dataset has been loaded successfully, we'll go for exploratory data analysis. Here, sns.countplot(train['label']) — sorry, not df, this is train['label'] — shows the class distribution: happy has the most samples, and otherwise the labels are fairly evenly distributed, except for disgust.
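Putting the loading steps together, here is a runnable sketch of load_dataset plus the DataFrame and shuffle; since the Kaggle paths aren't available here, a tiny synthetic folder tree stands in for the train/ directory (that part is illustration only):

```python
import os
import tempfile

import pandas as pd

def load_dataset(directory):
    # Walk <directory>/<label>/<filename> and collect paths with their labels
    image_paths, labels = [], []
    for label in sorted(os.listdir(directory)):
        label_dir = os.path.join(directory, label)
        for filename in sorted(os.listdir(label_dir)):
            image_paths.append(os.path.join(label_dir, filename))
            labels.append(label)
    return image_paths, labels

# Synthetic stand-in for the Kaggle train/ folder: 2 classes x 2 files
train_dir = tempfile.mkdtemp()
for label in ('angry', 'happy'):
    os.makedirs(os.path.join(train_dir, label))
    for name in ('a.jpg', 'b.jpg'):
        open(os.path.join(train_dir, label, name), 'w').close()

train = pd.DataFrame()
train['image'], train['label'] = load_dataset(train_dir)
train = train.sample(frac=1).reset_index(drop=True)  # shuffle, then reindex
```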
The labels for disgust are so few — the number of training samples is small — that we may have to use some class balancing, like SMOTE or a weighted random sampler, to even out the distribution. I have done that in another project video, so you can refer to that, or I'll just leave it up to you. So that's the class distribution. After that we'll load a single image: from PIL import Image, then image = Image.open(train['image'][0]) and plt.imshow(image, cmap='gray'). The image is actually grayscale, which is why I apply that colormap, and you can see this one is surprise. So that's a sample image, and you can see the width and height: it's 48×48, and that is the resolution of every image in the dataset. If you want to remove the matplotlib text output, just add a semicolon — that removes it — and you can also turn off the axis; we'll do that in a separate function. So here I'll create the function (actually, rather than one per class, I'm going to display everything): to display a grid of images, plt.figure(figsize=(25, 25)) — you can reduce it if you want. Then files = train.iloc[0:25] — with iloc you select samples purely by index, so I'm just taking the top 25. We're going to display a 5×5 matrix of images, so we can see all the different classes in there. Then for index, file, label in files.itertuples(): — that's the pattern for iterating through a data frame. Now plt.subplot(...) — this will be a 5×5 matrix, and the subplot index will be index + 1, because here the index starts from 0, but
in subplots we have to start from 1 — that's why I'm adding the +1. At that position we load the image: image = load_img(file) — that's the entire file path — then convert it into a NumPy array with image = np.array(image), and plot it with plt.imshow(image). Then plt.title(label), which displays the corresponding label as the title, and plt.axis('off'), which removes the axis numbers. Let's run this — okay, load_img is not defined, so: from keras.preprocessing.image import load_img. Run that, then run this again. You can see this one is neutral, this is surprise, this is sad, but the resolution is so small — 48×48 — so maybe we reduce the figure size to 20×20; now the blurriness is gone. You can see the different types of classes: angry, happy, happy, happy, angry, fear, fear — there are various emotions — and this one is angry. Okay, that's a sample of images; if you want to display more images, you can randomly select some number of them and display them the same way. So that's it for exploratory data analysis.
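As a self-contained sketch of that 5×5 grid display, random 48×48 arrays stand in for the dataset files here (in the notebook you would iterate train.iloc[0:25].itertuples() and call load_img on each path, as described above):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere; omit in a notebook
import matplotlib.pyplot as plt

# Stand-ins: 25 random grayscale "images" with placeholder labels
images = [np.random.rand(48, 48) for _ in range(25)]
labels = ['happy'] * 25

plt.figure(figsize=(20, 20))
for index, (image, label) in enumerate(zip(images, labels)):
    plt.subplot(5, 5, index + 1)  # subplot numbering starts at 1, not 0
    plt.imshow(image, cmap='gray')
    plt.title(label)
    plt.axis('off')
```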
After the EDA we have to extract the features of the images — convert each into a NumPy array, load it, and reshape it into the proper format — so I'll call this section feature extraction. Here, def extract_features(images): with features = []; then for image in tqdm(images): — the tqdm gives you a progress bar so we can observe how far the process has gotten; if you have a lot of data, you definitely want this, at least to track the progress. We load each image with image = load_img(image, grayscale=True), because these are grayscale images, convert again with image = np.array(image), and then features.append(image) — this runs for all the images and adds each one to the features list. When the loop completes, we convert the list into a NumPy array, features = np.array(features), and we also need to reshape the array because we have to feed it into the model: features = features.reshape(len(features), 48, 48, 1), then return features.

Why this shape? The first value is the number of samples in the whole dataset, the next two are the width and height (48×48), and the last is the number of channels. An RGB image has three channels, but for a grayscale image we have to specify one; if you don't, the shape will just be (num_samples, 48, 48) without the final 1, and the model won't accept the images — or you would have to change the model. That's also why I set it up this way: even if you move to RGB images, you only have to change this 1 to a 3, and the model creation, the training, and everything else stays the same.

Now we extract the features for both train and test: train_features = extract_features(train['image']) — that's somewhere around 28,000 images — and test_features = extract_features(test['image']), around 7k I think. Let's run both; this will do the extracting. After that we have to normalize the images: x_train = train_features / 255 and x_test = test_features / 255.
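A compact sketch of extract_features plus the normalization; PIL stands in here for keras's load_img so the snippet stays lightweight, and three synthetic PNGs stand in for the dataset files (both are illustration-only assumptions):

```python
import os
import tempfile

import numpy as np
from PIL import Image

def extract_features(images):
    # Load each path as grayscale and stack into shape (n_samples, 48, 48, 1)
    features = []
    for image in images:  # the video wraps this loop in tqdm for a progress bar
        img = Image.open(image).convert('L')  # 'L' = grayscale, like grayscale=True
        features.append(np.array(img))
    features = np.array(features)
    return features.reshape(len(features), 48, 48, 1)

# Synthetic stand-ins for train['image']: three flat-gray 48x48 PNGs
tmp = tempfile.mkdtemp()
paths = []
for i in range(3):
    p = os.path.join(tmp, f'{i}.png')
    Image.fromarray(np.full((48, 48), 128, dtype=np.uint8)).save(p)
    paths.append(p)

train_features = extract_features(paths)
x_train = train_features / 255.0  # scale pixel values from 0..255 down to 0..1
```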
Why am I normalizing? The image pixel values are in the range 0 to 255, and we are converting them into the range 0 to 1, because a neural network can capture the information more easily when the inputs are in a normalized range. So that's done; we'll run it once the extraction finishes. Then one more preprocessing step: convert the labels to integers. Initially, if you look at the dataset, the labels are strings, and we have to convert them into integers; I'm going to use label encoding for that. So from sklearn.preprocessing import LabelEncoder, then le = LabelEncoder() and le.fit(train['label']). fit means it decides which value to assign to each class — say it assigns the integer 0 to surprise, it stores that mapping — and we call fit only once because we have to transform both train and test with the same mapping. For the transformation, y_train = le.transform(train['label']) and y_test = le.transform(test['label']). What transform does is convert the string labels into integers, for both training and testing.

The extraction process is still going, so in the meantime, for the configuration I'll create the input shape variable: input_shape = (48, 48, 1) — again, if you're using RGB images, just change the 1 to 3 — and output_class = 7, since we have a total of seven different output classes. We should also convert this label encoding into a one-hot encoding, because neural networks perform better with one-hot targets; we'll have a snippet for that. Both the train and test feature extraction have now completed, so run the normalization and do the label encoding.
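Sketched on a toy label list, the encoding steps look like this; the np.eye line stands in for keras's to_categorical (it produces the same one-hot matrix), so the snippet has no TensorFlow dependency:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

toy_labels = ['happy', 'sad', 'happy', 'surprise']  # stand-in for train['label']

le = LabelEncoder()
le.fit(toy_labels)                  # learns the string -> integer mapping
y_train = le.transform(toy_labels)  # apply the same mapping to train (and test)

# One-hot encode: equivalent to to_categorical(y_train, num_classes=len(le.classes_))
y_train_onehot = np.eye(len(le.classes_))[y_train]
```

LabelEncoder assigns integers in alphabetical order of the class names, which is why the same fitted encoder must be reused for both train and test.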
For the one-hot conversion we have to import one more module; the plain keras module sometimes won't work for to_categorical — I don't know, in the latest version there has been some bug — which is why I'm using tf here: from tensorflow.keras.utils import to_categorical. Okay, now it's working. So y_train = to_categorical(y_train, num_classes=7), and similarly for y_test. Run both. Now everything is done: if you display the result you'll see seven columns — one, two, three, four, five, six, seven — one per class, and that's the representation we use for the output.

Now we go for model creation; this will be a long step, I guess. model = Sequential() — I'll just quickly type everything. model.add(Conv2D(128, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) — we'll write out just this one full layer and then reuse the same pattern. This is the first convolutional layer; after it we add max pooling — sometimes people also add batch normalization there, and you can try that and see whether it improves the accuracy — with pool_size=(2, 2). Finally we add a dropout layer, because we do not want to overfit the dataset; you could often skip dropout if you had image augmentation, which may help, but we're not doing any augmentation here because the images are so small, so I'm using them as-is, with a dropout of 0.4. Then we remove the duplicate line and repeat the same block — we'll have four convolutional blocks in total — while gradually increasing the number of filters: 128, 256, 512, 512. So those are the convolutional layers, and finally we flatten everything we have done
so far with Flatten. Then the fully connected layers (I'll label the sections with comments: convolutional layers, fully connected layers). Here, model.add(Dense(512, activation='relu')) — that's one dense layer — and we also need a dropout layer, so copy it and paste it here. Then one more dense layer, so we have two fully connected layers; the second one we decrease to 256, and the last dropout we can shrink a bit. Then one more dense layer, the output layer: model.add(Dense(output_class, activation='softmax')). Why softmax? For multi-class classification you have to use softmax — it's the categorical activation function. Apart from that, we have to compile the model: model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) — adam is a good baseline; you could also experiment with SGD. Everything is good to go; run it. It's using a Tesla P100 16 GB — that's a good graphics card (if you run the cell again it clears the extra log output).

Now we train the model: history = model.fit(x=x_train, y=y_train, batch_size=128, epochs=100, validation_data=(x_test, y_test)). Let's run it — yes, it's running, and we're getting around 24 percent accuracy at the start; it will keep increasing, which is why we run 100 epochs. Let's see how far it gets. In the next few cells I'll plot the train/validation accuracy along with the loss.
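Pulling the whole section together, the architecture and training call described above might be sketched as follows. I'm assuming each of the four convolutional blocks repeats the pool + dropout pattern, and I use a Keras Input layer instead of the input_shape argument, which is more portable across Keras versions; the fit call is commented out since it needs the real arrays:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, Dropout,
                                     Flatten, Dense)

input_shape = (48, 48, 1)   # 48x48 grayscale; use (48, 48, 3) for RGB
output_class = 7            # seven emotion classes

model = Sequential([
    Input(shape=input_shape),
    # Four convolutional blocks, filters growing 128 -> 256 -> 512 -> 512
    Conv2D(128, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.4),
    Conv2D(256, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.4),
    Conv2D(512, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.4),
    Conv2D(512, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.4),
    Flatten(),
    # Two fully connected layers, then the softmax output layer
    Dense(512, activation='relu'),
    Dropout(0.4),
    Dense(256, activation='relu'),
    Dropout(0.3),
    Dense(output_class, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# history = model.fit(x=x_train, y=y_train, batch_size=128, epochs=100,
#                     validation_data=(x_test, y_test))
```

With 3×3 convolutions and no padding, the 48×48 input shrinks to 1×1 after the fourth pooling step, so the Flatten feeds 512 values into the dense layers.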
So: accuracy = history.history['accuracy'], val_accuracy = history.history['val_accuracy'], and epochs = range(len(accuracy)) — the number of entries in the accuracy list becomes the epoch axis. This is a plotting function we create for the majority of deep learning projects. Then plt.plot(epochs, accuracy, 'b', label='training accuracy'); similarly for validation, plt.plot(epochs, val_accuracy, 'r', label='validation accuracy'), then plt.title('Accuracy Graph'), plt.legend() — which displays the labels — and plt.figure() (I think we don't need that one; let's comment it out and see whether it causes issues). We do the same thing for loss — I'll copy the whole block so we can easily edit it — with everything changed to loss: training loss, validation loss, and the title 'Loss Graph'. That's pretty much it; finally plt.show() to draw the graphs once the training completes.

Currently we're at around 58 percent training accuracy at about epoch 34, with validation accuracy around 60. We'll try to get above 60 — I think 60 is fine for multi-class classification, since we have seven different classes; that's usually the accuracy you can expect, and anything above 60 is a good result for us. We've already reached 60, so let's wait and see whether it goes higher. In the meantime we should test our prediction results, right? So: test with image data, and plot the results. I need some image number, image_index — I hadn't imported the random module, so import random here and run it once (it will run at the end anyway) — and then image_index = random.randint(0, len(test)), which chooses a random index from 0 to the length of the
test set. Now that we have the image, we display the original output: print('Original Output:', ...) — I first wrote y_test[image_index], but that won't give us the label; we have to use the normal test data frame with its label column, test['label'][image_index], which gives the original label. For the prediction, prediction = model.predict(x_test[image_index].reshape(1, 48, 48, 1)) — we already covered why we reshape; it has to be one sample of 48×48×1 so the input is in the proper format for the model. After the prediction I take prediction_label = le.inverse_transform([prediction.argmax()]) — argmax gives the index of whichever class has the highest probability, and with that index the label encoder converts it back into a normal string label. Finally, print('Predicted Output:', prediction_label), and then plt.imshow(x_test[image_index].reshape(48, 48), cmap='gray') — the same data, but the reshaping is different for display: you don't need the batch and channel dimensions. That's the whole cell; let's see. Okay, we've now reached about 63 percent validation accuracy — let's check whether it reaches 64. Let's go for 64.
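The decoding half of that prediction cell can be sketched without the trained model — the probability row below is a made-up stand-in for model.predict(...), and the label list is the dataset's seven emotions:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
le.fit(['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise'])

# Stand-in for model.predict(x_test[image_index].reshape(1, 48, 48, 1)):
# one row of seven class probabilities
prediction = np.array([[0.05, 0.02, 0.03, 0.70, 0.10, 0.05, 0.05]])

# argmax picks the most probable class index; inverse_transform expects a list
prediction_label = le.inverse_transform([prediction.argmax()])[0]
print('Predicted Output:', prediction_label)
```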
Well, it is still training. You could also increase the number of epochs: because we used so many dropout layers, it takes longer to converge toward the global minimum, so try something like 200 epochs — 200 would be sufficient — but for this tutorial I'm just going with 100. I hope it reaches 64 percent accuracy; it's almost saturated at 63, I guess, while the training accuracy keeps increasing, so we have to wait and see. And that's pretty much it — then we can test with some image data and see how it works. Okay, now the training has completed, and you can see it went up to 64. As I said, if you train a few more epochs it may reach 67 or even 70 — just increase the number of epochs, or you can also play with the model itself; that's also possible. Now that's done; let's run the plotting cells — okay, we do need that plt.figure() call in order to display both graphs; now it's working. So this is the training accuracy: we see a good fit early on, but after about 40 epochs the training accuracy keeps increasing up to about 70 while the validation accuracy stays stagnant around 63.
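Those two graphs come from cells like the sketch below; the history dict here is synthetic stand-in data (in the notebook you'd read history.history from model.fit):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere; omit in a notebook
import matplotlib.pyplot as plt

# Synthetic stand-in for history.history from model.fit
history = {'accuracy':     [0.24, 0.41, 0.55, 0.70],
           'val_accuracy': [0.30, 0.48, 0.60, 0.63],
           'loss':         [1.90, 1.50, 1.20, 0.90],
           'val_loss':     [1.80, 1.40, 1.15, 1.05]}

acc = history['accuracy']
val_acc = history['val_accuracy']
epochs = range(len(acc))  # one point per epoch

plt.plot(epochs, acc, 'b', label='training accuracy')
plt.plot(epochs, val_acc, 'r', label='validation accuracy')
plt.title('Accuracy Graph')
plt.legend()

plt.figure()  # start a second figure for the loss curves
plt.plot(epochs, history['loss'], 'b', label='training loss')
plt.plot(epochs, history['val_loss'], 'r', label='validation loss')
plt.title('Loss Graph')
plt.legend()
plt.show()
```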
Even in the loss we're seeing the same thing. To push further we could increase the number of layers or the number of hidden units in the model — those are things you can do — but this is the accuracy and loss we're getting. Now let's jump into testing with the image data; I hope the syntax works. Okay, it's showing an error — "y should be a 1d array, got an array of shape..." — on prediction_label and argmax; you just have to add the index in a list there, because predict returns a 2-D array. It shows another error — okay, that was just a missing comma. Now it shows the original output and the predicted output, and both are the same, and here you can see the image (if you want to remove the matplotlib axis text, just add a semicolon). This one is a wrong prediction; run it again — now you can see this is happy, and that's a correct one. Again — this is also happy. I just want a different class: this one is neutral but predicted as happy, though even I'd say this woman looks happy, I don't know. This is sad, and predicted sad. Let's do one more, looking for a misprediction: okay, here the original output is fear but the predicted output is surprise. So yeah, that's pretty much it — in most cases we're predicting correctly. You can run this multiple times to see how the model performs on random images, and you can obviously use your own image too: for real-time purposes, you could try to predict your emotion live from your face, doing all these feature-extraction steps on the video frames and predicting the emotion you are expressing — maybe I'll talk about that later. But that's pretty much it, guys: we have completed the end-to-end project for facial emotion recognition, a multi-class classification problem that we solved using deep learning — that is, with the help of neural networks —
and yeah, that's it, guys. If you liked this video, hit the like button, and don't forget to subscribe to the channel for more videos. See you in the next video!
Info
Channel: Hackers Realm
Views: 12,591
Keywords: facial emotion recognition, facial emotion recognition using cnn, facial emotion recognition and detection in python using deep learning, kaggle facial emotion recognition, facial expression recognition using python, facial emotion detection, facial expression detection using deep learning, machine learning, deep learning, facial emotion detection using neural networks, facial expression recognition using keras, artificial intelligence, data science, projects, python, hackers realm
Id: mj-3vzJ4ZVw
Length: 47min 9sec (2829 seconds)
Published: Mon Mar 21 2022