Anime Face Generation using DCGAN | Keras Tensorflow | Deep Learning | Python

Captions
hey everyone, this is Ashwin here. In this video we are going to see how to generate new anime faces using DCGAN. A GAN is a generative adversarial network: a neural network model that generates new faces based on the training dataset. Here we have an anime faces dataset. Usually this kind of project is done on regular human faces, but I am a fan of anime, so I wanted to do it on anime images. We have around 21,551 files of anime images, each 64x64. As you can see, these are samples of the images; we will also plot them in the code later. These images will be used as the training dataset, and with the help of the GAN we will generate new anime faces based on the training images. Since we are training only on anime faces, the generator model will also produce anime faces. I will explain how a GAN works throughout the code. At a high level, we will create two models: a generator and a discriminator. The generator will try to produce images that look like the real ones, and the discriminator will tell you whether an image is real or fake. We will train both models, and finally we will use the generator to produce new images. That's the high-level overview of this project.

Let's create a new notebook and start. The notebook is loading; I'll just delete this cell and name it "Generate Anime Faces | DCGAN". This is the basics of GANs (DCGAN is the first model announced in the original paper), and we will also see various types of GANs in future videos, so it will be very cool, trust me. This will be built on top of Keras and TensorFlow. If you click on this button you get all the notebook settings: this is our dataset, which we will read the images from, and this is the accelerator; when we are actually done with the coding we will use the GPU to train the model, since this is a neural network. Also turn on the internet, just to be able to download any extra modules; apart from that, nothing else is necessary for now. Let me just increase the code cell size a little so it is clearer.

Now let's import the modules. The session is starting; it will take some time. First the base modules: import os, import numpy as np, import matplotlib.pyplot as plt, import warnings, and from tqdm.notebook import tqdm. Apart from those, we will import a few TensorFlow modules: import tensorflow as tf, from tensorflow import keras, and, since we have to preprocess some of the images, from tensorflow.keras.preprocessing.image import load_img and array_to_img. We also need the optimizer, the loss, and the classes for creating the models: from tensorflow.keras.models import Sequential and Model, from tensorflow.keras import layers, and from tensorflow.keras.optimizers import Adam. If you don't know how to choose an optimizer like SGD, I just always go with Adam; it is the best optimizer for the base case, and if you want to fine-tune the model more and play around with the parameters, you can go for SGD with momentum. From tensorflow.keras.losses we import BinaryCrossentropy, and finally warnings.filterwarnings('ignore') to ignore all the warnings. Let's run this; we have imported all the necessary modules.
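Put together, the imports cell looks roughly like this (a sketch of what I typed in the video, using the usual aliases):

```python
import os
import warnings

import numpy as np
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import BinaryCrossentropy
from tensorflow.keras.preprocessing.image import load_img, array_to_img

warnings.filterwarnings('ignore')
```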
Next, load the files; we'll just load the images. First create the base directory, BASE_DIR, pasting the dataset path from the sidebar. After this we build a list containing the complete path of every image. I will call it image_paths, and for each image_name in os.listdir(BASE_DIR) we make image_path = os.path.join(BASE_DIR, image_name) and append it to the list. Run this; now it contains all the images with their complete paths. When I tried this project I found that the listing contains one entry that is actually another folder, so we have to remove that element with image_paths.remove(). You can print image_paths to inspect the entries and len(image_paths) to get the count; after removing that entry we get 21,551 images, which matches the count mentioned in the description of the dataset. That's done.

Now let's visualize the image dataset by displaying a grid of images. plt.figure(figsize=(20, 20)); we can adjust the size later based on the plot. I will temporarily take a few images, temp_images = image_paths[:49], because I am going to display a 7x7 grid, and 7 times 7 is 49, so I am taking 49 images. I also initialize index = 1. Then for image_path in temp_images: plt.subplot(7, 7, index), where the index starts from 1; load the image with load_img(image_path), the function we imported from the Keras preprocessing module; convert it to a numpy array with np.array; show it with plt.imshow; set plt.axis('off') so it won't display units on the x and y axes; and increment the index for the next image.
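That loading-and-plotting code looks roughly like this (a sketch; the stray entry I remove is the dataset's nested folder, so the exact path depends on how Kaggle mounts the dataset):

```python
BASE_DIR = '/kaggle/input/anime-faces/data'  # dataset path in the Kaggle notebook

# collect the complete path of every image in the dataset
image_paths = []
for image_name in os.listdir(BASE_DIR):
    image_paths.append(os.path.join(BASE_DIR, image_name))

# one listing entry is a nested folder, not an image -- drop it
image_paths.remove(os.path.join(BASE_DIR, 'data'))
print(len(image_paths))  # 21551

# display a 7x7 grid of sample faces
plt.figure(figsize=(20, 20))
temp_images = image_paths[:49]
index = 1
for image_path in temp_images:
    plt.subplot(7, 7, index)
    image = load_img(image_path)   # PIL image
    image = np.array(image)        # convert to numpy array
    plt.imshow(image)
    plt.axis('off')
    index += 1
```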
Let's run this. Okay, now you can clearly see the faces of anime characters; I have displayed a 7x7 grid. Some of them are not faces, and we can't do much about those images, so it is fine; overall, almost all of them are faces of anime characters. By the way, I have watched most anime series, anyone you name. My first anime was Death Note; if you guys also watch anime, mention the first anime you watched in the comments. Maybe I'll create another video based on anime characters and we will build some projects on that. Let's see.

Now that the visualization is done, the next step is to preprocess the images. train_images: I am using a list comprehension, iterating for path in tqdm(image_paths) so we get a progress bar over the image path list, loading each image and converting it to a numpy array. Once this is done, I convert the whole list to a numpy array: train_images = np.array(train_images). Run this; it could take a while, so in the meantime we will check the shape at the end with train_images.shape. We will be running all of this on the GPU machine anyway, so it is fine to keep running; we are not losing any money here. After checking the shape, we reshape once more: train_images = train_images.reshape(train_images.shape[0], 64, 64, 3).astype('float32'). Here train_images.shape[0] gives the number of images in the whole dataset, and each image is reshaped to 64x64 with 3 channels.
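A sketch of that preprocessing cell:

```python
# load every image and convert it to a numpy array (tqdm shows progress)
train_images = [np.array(load_img(path)) for path in tqdm(image_paths)]
train_images = np.array(train_images)
print(train_images.shape)  # (21551, 64, 64, 3)

# make the shape explicit and convert to float32 for training
train_images = train_images.reshape(train_images.shape[0], 64, 64, 3).astype('float32')
```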
If you have images with a different resolution that you want to train on, just mention that resolution here: these two numbers are the width and height, and this is the number of channels. These are RGB images, which is why we have three channels; for grayscale you would mention it as 1. The astype('float32') simply converts the reshaped numpy array to floats.

After that we normalize the images to the range -1 to +1. Usually when we normalize we map to the range 0 to 1 by just dividing by 255, but the activation layer at the end of the generator model is tanh, which gives values in the range -1 to +1, so we have to change the image range accordingly: train_images = (train_images - 127.5) / 127.5. First we subtract 127.5, so pixel values in the range 0 to 255 become -127.5 to +127.5, and then dividing by 127.5 converts that to the range -1 to +1. We will also check one example after this, train_images[0], just to see some values.

Next we have to create the generator and discriminator models; I will explain why we need both models and what each one does after we run these code snippets. In the meantime I'll initialize a few things. First, the latent dimension for the random noise: I am setting LATENT_DIM = 100. The random noise we generate initially is based on this dimension. If you reduce it to something like 10, the variety of images the generator produces will be reduced; if you increase it, say to 150, it will give more variety. So the latent dimension is an important parameter that affects the image results we produce. Second, the weight initialization, used as per the DCGAN paper: WEIGHT_INIT = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.02), so the weights are drawn from a normal distribution with mean 0.0 and standard deviation 0.02, the parameters specified in the paper; this will be used in the model. And the number of channels of the image: CHANNELS = 3; if you are using grayscale, keep it as 1.

Let's check the shape: 64x64 and three channels. Run the normalization as well; now you can see some of the values, some near -1, some on the positive side like 0.5 or 0.45, and somewhere there will be a 1. That is a sample of values after normalization. Okay, now let's create the generator model.
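The setup cell, roughly (the constant names are my own convention):

```python
# dimensionality of the random noise vector fed to the generator
LATENT_DIM = 100

# weight initialization per the DCGAN paper: normal with mean 0, stddev 0.02
WEIGHT_INIT = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.02)

# number of image channels (3 for RGB, 1 for grayscale)
CHANNELS = 3

# scale pixels from [0, 255] to [-1, 1] to match the generator's tanh output
train_images = (train_images - 127.5) / 127.5
```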
We start with model = Sequential(name='generator'); I specify the name because we will be having two models. The first layer takes the 1D random noise: model.add(layers.Dense(8 * 8 * 512, input_dim=LATENT_DIM)). You could also import these layers individually, but I'll just call them from the layers module. The input dimension is the latent dimension, the 100-dimensional noise, and from that input the Dense layer expands to 8 * 8 * 512 units. The convention in this model is that each layer is followed by batch normalization and an activation, so let's add those: model.add(layers.BatchNormalization()), followed by the activation layer, model.add(layers.ReLU()).

That's the first layer. For the second step we convert this one dimension to three dimensions, because we have three channels: model.add(layers.Reshape((8, 8, 512))), where we specify the target shape. Initially it will be 8x8, and as the layers progress we will keep upsampling the image to get the final resolution of 64x64.

Now let's add more layers to upsample to 16x16. Here we use Conv2DTranspose followed by batch normalization and ReLU: model.add(layers.Conv2DTranspose(256, (4, 4), strides=(2, 2), padding='same', kernel_initializer=WEIGHT_INIT)). The 4x4 kernel is used for the upsampling, strides are (2, 2), padding is 'same', and lastly the kernel_initializer is the WEIGHT_INIT we specified: usually the weights would be random defaults, but here they are drawn from a normal distribution with standard deviation 0.02. After that we just copy this block two more times, continuing to upsample the input until we reach the final result: the next block goes to 32x32 with 128 filters, 4x4 kernel, everything else remains the same, and the one after that goes to 64x64 with 64 filters, again everything else the same.

Once we have reached 64x64 we add the final convolution with a tanh activation: model.add(layers.Conv2D(CHANNELS, (4, 4), padding='same', activation='tanh')), using the number of channels we specified. Remember we normalized the images to the range -1 to +1, so this tanh gives results in the range -1 to +1 as well. That is the whole generator model: initially it expands the random noise through the Dense units, we reshape from one dimension to three dimensions, we keep upsampling using Conv2DTranspose, and when we reach the 64x64 target size the final convolutional layer with tanh activation gives us the image. I'll assign it the name generator and call generator.summary(). Let's run this.
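Here is the generator as one cell (a sketch of what I typed; as you'll see later in the video, the BatchNormalization lines in this model end up commented out when the losses explode):

```python
model = Sequential(name='generator')

# project the 1D random noise and prepare it for reshaping
model.add(layers.Dense(8 * 8 * 512, input_dim=LATENT_DIM))
model.add(layers.BatchNormalization())
model.add(layers.ReLU())

# convert 1D to 3D: start from an 8x8 feature map
model.add(layers.Reshape((8, 8, 512)))

# upsample to 16x16
model.add(layers.Conv2DTranspose(256, (4, 4), strides=(2, 2), padding='same', kernel_initializer=WEIGHT_INIT))
model.add(layers.BatchNormalization())
model.add(layers.ReLU())

# upsample to 32x32
model.add(layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', kernel_initializer=WEIGHT_INIT))
model.add(layers.BatchNormalization())
model.add(layers.ReLU())

# upsample to 64x64
model.add(layers.Conv2DTranspose(64, (4, 4), strides=(2, 2), padding='same', kernel_initializer=WEIGHT_INIT))
model.add(layers.BatchNormalization())
model.add(layers.ReLU())

# final image: tanh keeps outputs in [-1, 1], matching the normalized data
model.add(layers.Conv2D(CHANNELS, (4, 4), padding='same', activation='tanh'))

generator = model
generator.summary()
```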
Okay, we have successfully created the generator model. You will have to change the parameters according to the image size you are using; if you are using images with odd dimensions, you may need to change the kernel size as well. Maybe I'll create a separate video on how to design a convolutional neural network around your images, but usually you can just go with some paper; papers usually give a very good configuration for the whole network.

Now that this is done, let's create the discriminator model. I already said the generator model is used to generate images from random noise; the discriminator model is used to classify images as real or fake. Let me write the explanation down as well: the generator model will create new images, similar to the training data, from random noise; the discriminator model will classify the images from the generator to check whether they are real or fake. Why does the discriminator have to do this? Because it is against the discriminator that we train the generator. The objective is to train the generator model until it fools the discriminator: when the generator creates a new image, the discriminator should accept that new image as real. Until then we keep training both models, and we periodically check the results. In other kinds of projects we would usually have validation data and scores to keep checking, but here we only have losses, so we have to check the quality by looking at the generator's output with our own eyes. There are other metrics we could use, and I may cover them in future GAN videos, but for now we will just train these two models and inspect the output images directly.

Now let's create the discriminator model: again model = Sequential(name='discriminator'). Here we specify the input shape, (64, 64, 3), because we pass an image in directly; it is like a regular convolutional neural network. We also set alpha = 0.2, which is used in LeakyReLU, the activation layer used in the discriminator model. Now let's create the conv layers: model.add(layers.Conv2D(64, (4, 4), strides=(2, 2), padding='same', input_shape=input_shape)). Here we are just using Conv2D, no transpose, because we are downsampling the images: a 4x4 kernel, strides of (2, 2), 'same' padding, and lastly the input shape, passed only for the first layer. Again we have batch normalization, model.add(layers.BatchNormalization()), and finally the activation layer, model.add(layers.LeakyReLU(alpha=alpha)). That is the first block, and we keep downsampling: the next two blocks use 128 filters each. Finally I flatten everything with model.add(layers.Flatten()) and add a dropout layer, layers.Dropout(0.3); you can set it around 0.3 or 0.4. Also, we should not train too good a discriminator: if the discriminator is too strict, the generator won't learn anything, so you have to create a somewhat lenient discriminator. You can read more about how to build this kind of discriminator to train the generator effectively. Finally the output class: model.add(layers.Dense(1, activation='sigmoid')). So this is the discriminator model; it classifies whether the image from the generator is real or fake. I'll change the name to discriminator (it sounds like a Terminator, but yeah) and call discriminator.summary(). Let's run this; everything seems to be good.
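And the discriminator as one cell (again a sketch; I've assumed the later blocks keep the same stride as the first):

```python
model = Sequential(name='discriminator')
input_shape = (64, 64, 3)
alpha = 0.2  # LeakyReLU slope

# downsample 64x64 -> 32x32
model.add(layers.Conv2D(64, (4, 4), strides=(2, 2), padding='same', input_shape=input_shape))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU(alpha=alpha))

# downsample 32x32 -> 16x16
model.add(layers.Conv2D(128, (4, 4), strides=(2, 2), padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU(alpha=alpha))

# downsample 16x16 -> 8x8
model.add(layers.Conv2D(128, (4, 4), strides=(2, 2), padding='same'))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU(alpha=alpha))

# classify real vs fake; dropout keeps the discriminator from getting too strong
model.add(layers.Flatten())
model.add(layers.Dropout(0.3))
model.add(layers.Dense(1, activation='sigmoid'))

discriminator = model
discriminator.summary()
```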
After this comes what is both an important step and a long one for you guys: creating the DCGAN class. Here we combine the generator and the discriminator, and we override the Keras Model class, because we have to train both models at the same time and update the weights with customized loss functions. I'll be adding everything inside one class and overriding a few things; let's go through it in detail. (A full sketch of the class appears after this walkthrough.)

class DCGAN(keras.Model). In the initialization we have the basic details, so this is the constructor: def __init__(self, generator, discriminator, latent_dim). I hope I spell everything correctly. We have to call super().__init__(); don't worry about these lines, they are basic initialization, just focus on the upcoming parts. We store the models we created before, which are passed into this class: self.generator = generator, self.discriminator = discriminator, and self.latent_dim = latent_dim. Finally we keep loss metrics for the generator and discriminator: self.g_loss_metric = keras.metrics.Mean(name='g_loss'), and the same for the discriminator, self.d_loss_metric = keras.metrics.Mean(name='d_loss').

Those are the initializations. After that we expose the metrics; it is like an override, with @property: def metrics(self) returns those two metrics, [self.d_loss_metric, self.g_loss_metric], so both will be reported while training the model. And since we usually call model.compile() with the optimizer and the loss function, we have to override that function as well: def compile(self, g_optimizer, d_optimizer, loss_fn), which calls super(DCGAN, self).compile() and stores self.g_optimizer = g_optimizer, self.d_optimizer = d_optimizer, and self.loss_fn = loss_fn. Now the compilation is also done.
After this comes the main part, which is the training step: def train_step(self, real_images). This receives a batch of the training data we pass to the model. First we get the batch size from the data: batch_size = tf.shape(real_images)[0], which gives the number of images present in the batch. Then, as I said before, we generate random noise: random_noise = tf.random.normal(shape=(batch_size, self.latent_dim)). This generates random floating-point values, our starting point, one noise vector per image in the batch: if the batch size is 32, that means 32 vectors, each of length 100, because we initialized the latent dimension as 100.

After creating this, we train the discriminator first, with real and fake images: real images get label 1 and fake images get label 0. With tf.GradientTape() as tape, we compute the loss on the real images: pred_real = self.discriminator(real_images, training=True). I am just passing the images directly to the discriminator and getting its predictions, and based on those we will compute the losses. We haven't labeled the images yet, so we do it now: for real images the label is 1, so real_labels = tf.ones((batch_size, 1)), which just creates a tensor of ones with those dimensions. There is one more technique, called one-sided label smoothing, which I will apply here: real_labels += 0.05 * tf.random.uniform(tf.shape(real_labels)). This generates values in the range 0 to 1 with the same shape as the real labels, effectively the batch size, and we multiply by 0.05 to apply the label smoothing; it is one of those techniques that improves the model. Finally we compute the loss on the real images: d_loss_real = self.loss_fn(real_labels, pred_real).

That's done for real images; let's do the same for fake images. To get fake images we take the output from the generator: fake_images = self.generator(random_noise), using the random noise we generated before. Then pred_fake = self.discriminator(fake_images, training=True), with the same argument as before. We generate the fake labels: fake_labels = tf.zeros((batch_size, 1)); as I said, for fake it is zeros, with the same batch size, and we don't need any kind of smoothing here. Finally d_loss_fake = self.loss_fn(fake_labels, pred_fake). That computes the loss for both real and fake images, and the total discriminator loss is d_loss = (d_loss_real + d_loss_fake) / 2.
That is the total loss of the discriminator. Usually when you train a conventional model you don't do any of this: you just specify binary cross-entropy or some loss function and it is all handled automatically. But here we have to customize a few things, because the two models are interconnected and both of them need to be trained at the same time. Now we compute the discriminator gradients: grads = tape.gradient(d_loss, self.discriminator.trainable_variables). We have computed the loss, and now we are finding the gradients. This is common to all models: if some layers are frozen, we don't update gradients for those variables, so we calculate gradients only for the trainable variables. Then we apply them: self.d_optimizer.apply_gradients(zip(grads, self.discriminator.trainable_variables)), updating the gradients onto the trainable variables. This is the classic deep learning workflow, and we are just going step by step; if you can complete this, you can work on any complex model.

Now the discriminator portion is done; let's train the generator. Here we create the labels as ones, labels = tf.ones((batch_size, 1)): we have to train the generator to convince the discriminator that the generated images are real, and that is the only reason we create these labels. With tf.GradientTape() as tape, first we get the fake images from the generator, fake_images = self.generator(random_noise, training=True), passing the random noise generated before, and then we classify those generated images as real or fake with the discriminator: pred_fake = self.discriminator(fake_images, training=True). Finally we compute the loss: g_loss = self.loss_fn(labels, pred_fake), against the labels we created. As I said, the generator wants the discriminator to think the fake images are real, so here we don't update the weights of the discriminator, only the generator. Compute the gradients, grads = tape.gradient(g_loss, self.generator.trainable_variables), and update them (it's a lot of typing for this project): self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_variables)).

Finally we update the states for both models, self.d_loss_metric.update_state(d_loss) and self.g_loss_metric.update_state(g_loss), both losses we already calculated, and return {'d_loss': self.d_loss_metric.result(), 'g_loss': self.g_loss_metric.result()}. That's pretty much it for the DCGAN class.
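Put together, the class looks roughly like this (a sketch of what I typed):

```python
class DCGAN(keras.Model):
    def __init__(self, generator, discriminator, latent_dim):
        super().__init__()
        self.generator = generator
        self.discriminator = discriminator
        self.latent_dim = latent_dim
        # running means of the two losses, reported each epoch
        self.g_loss_metric = keras.metrics.Mean(name='g_loss')
        self.d_loss_metric = keras.metrics.Mean(name='d_loss')

    @property
    def metrics(self):
        return [self.d_loss_metric, self.g_loss_metric]

    def compile(self, g_optimizer, d_optimizer, loss_fn):
        super(DCGAN, self).compile()
        self.g_optimizer = g_optimizer
        self.d_optimizer = d_optimizer
        self.loss_fn = loss_fn

    def train_step(self, real_images):
        batch_size = tf.shape(real_images)[0]
        random_noise = tf.random.normal(shape=(batch_size, self.latent_dim))

        # train the discriminator: real images -> 1, fake images -> 0
        with tf.GradientTape() as tape:
            pred_real = self.discriminator(real_images, training=True)
            real_labels = tf.ones((batch_size, 1))
            # one-sided label smoothing on the real labels
            real_labels += 0.05 * tf.random.uniform(tf.shape(real_labels))
            d_loss_real = self.loss_fn(real_labels, pred_real)

            fake_images = self.generator(random_noise)
            pred_fake = self.discriminator(fake_images, training=True)
            fake_labels = tf.zeros((batch_size, 1))
            d_loss_fake = self.loss_fn(fake_labels, pred_fake)

            d_loss = (d_loss_real + d_loss_fake) / 2

        grads = tape.gradient(d_loss, self.discriminator.trainable_variables)
        self.d_optimizer.apply_gradients(zip(grads, self.discriminator.trainable_variables))

        # train the generator: try to get fakes classified as real (1)
        labels = tf.ones((batch_size, 1))
        with tf.GradientTape() as tape:
            fake_images = self.generator(random_noise, training=True)
            pred_fake = self.discriminator(fake_images, training=True)
            g_loss = self.loss_fn(labels, pred_fake)

        grads = tape.gradient(g_loss, self.generator.trainable_variables)
        self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_variables))

        self.d_loss_metric.update_state(d_loss)
        self.g_loss_metric.update_state(g_loss)
        return {'d_loss': self.d_loss_metric.result(), 'g_loss': self.g_loss_metric.result()}
```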
Gosh, it's a big class we have created; we had to modify so many things because of the GAN setup, and even in future projects I think we will do a lot of this, so maybe we can reuse some of it instead of creating everything from scratch. Since this is the first video, I want you guys to understand what's happening, so let me go through the functions again. Initially, in the constructor, we initialize everything: the models and the metrics. The compile override is called when we run model.compile() with the optimizers and the loss function. The important part is the training step. First we get the batch size from the batch of images we pass in, and we generate random noise from the latent dimension, here 100. We train the discriminator first: with the real images we get the prediction, we create real labels of ones with tf.ones, and I apply label smoothing here, which gives us some improvement; then we compute the loss from the real labels and the real predictions. We do the same for fake images, where the labels are zero, and we don't do label smoothing there because it doesn't matter. We calculate the total discriminator loss from both losses, and finally we apply the gradients for the discriminator. In the second part we train the generator: we are training the generator to make the discriminator think that the fake images are real, and we keep training until the discriminator considers them real. First we get the fake images from the generator with the random noise generated before, we classify the images with the discriminator, and we compute the loss against the all-ones labels; based on that loss we compute the gradients and update the generator model. Finally we update our loss metrics and return them. That is the whole workflow of how the training is done. If you have ever seen a PyTorch training loop, it does all these steps explicitly too: computing the loss, calculating the gradients, and updating the weights step by step.

Now this is done; we have reached about 70 to 75 percent of the project. We just need to create a few more things, and in the meantime we will enable the GPU accelerator. Turn on the GPU; okay, I have to re-run some cells, so I'll just Run All. I think we don't need the markdown cell.

After this, as I said, for each epoch or every few epochs we have to print or plot some images, meaning plot the predictions of the generated images, just to see how the quality is improving. So we will create another class: class DCGANMonitor(keras.callbacks.Callback). It is still running, so I'll type it now. Again we initialize the constructor, def __init__(self, num_imgs=25, latent_dim=100); these are default parameters, with 25 images so they fill a 5x5 grid. We store them the same way: self.num_imgs = num_imgs and self.latent_dim = latent_dim. Again we have to create random noise for generating the images: self.noise = tf.random.normal([25, self.latent_dim]), so 5 times 5, 25 images in total, each from a noise vector of the latent dimension. That is the initialization.
Then def on_epoch_end(self, epoch, logs=None); this runs after each epoch. You could also gate it, say with epoch % 2 == 0 to display images every two epochs, but we'll just run it as is. g_img = self.model.generator(self.noise), with the fixed noise as the input, and then g_img = (g_img * 127.5) + 127.5. We are denormalizing the image here: as I said, the generator result is in the range -1 to +1, so multiplying by 127.5 and adding 127.5 brings the images back to the range 0 to 255. I'll note that as 'denormalize the image' and 'generate the image from noise'. After that I convert to numpy: g_img = g_img.numpy(). Now fig = plt.figure(figsize=(8, 8)); I think that is reasonable. Then for i in range(self.num_imgs), iterating over the number of images: plt.subplot(5, 5, i + 1), with i + 1 as the index; image = array_to_img(g_img[i]), using the function we already imported; plt.imshow(image); set the axis to off; and finally plt.show() to display the grid. If you want to save the images, I'll leave the snippet commented so you can uncomment it: plt.savefig('epoch_{:03d}.png'.format(epoch)), which saves the image based on the epoch number. That's pretty much it, with one last function pending: def on_train_end(self, logs=None), where we just save the model, self.model.generator.save('generator.h5'), so you can reuse the model later for generating new anime faces. I think all the cells have run successfully; we'll run this one as well.

Now let's quickly initialize the model: dcgan = DCGAN(generator=generator, discriminator=discriminator, latent_dim=LATENT_DIM). You can pass these as keyword arguments; sometimes if you mix up the generator and discriminator you will get an error, so I'll pass them with the parameter names. Run this, and let's compile the model. I will also set the learning rates: D_LR = 0.0001 and G_LR = 0.0003. I'm setting the discriminator's learning rate lower because if we train the discriminator too quickly, the generator won't learn: the discriminator learns everything and suppresses the generator from training better. So always train the generator faster and the discriminator much slower. Now dcgan.compile(g_optimizer=Adam(learning_rate=G_LR, beta_1=0.5), d_optimizer=Adam(learning_rate=D_LR, beta_1=0.5), loss_fn=BinaryCrossentropy()). Okay, we have compiled the model; now let's set the epochs: N_EPOCHS = 50.
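The callback and the model setup as one sketch (what I typed, give or take the exact defaults):

```python
class DCGANMonitor(keras.callbacks.Callback):
    def __init__(self, num_imgs=25, latent_dim=100):
        self.num_imgs = num_imgs
        self.latent_dim = latent_dim
        # fixed noise, so each epoch's grid is comparable to the last
        self.noise = tf.random.normal([25, self.latent_dim])

    def on_epoch_end(self, epoch, logs=None):
        # generate images and denormalize from [-1, 1] back to [0, 255]
        g_img = self.model.generator(self.noise)
        g_img = (g_img * 127.5) + 127.5
        g_img = g_img.numpy()

        fig = plt.figure(figsize=(8, 8))
        for i in range(self.num_imgs):
            plt.subplot(5, 5, i + 1)
            img = array_to_img(g_img[i])
            plt.imshow(img)
            plt.axis('off')
        # plt.savefig('epoch_{:03d}.png'.format(epoch))  # uncomment to save
        plt.show()

    def on_train_end(self, logs=None):
        # save the generator for reuse after training
        self.model.generator.save('generator.h5')


dcgan = DCGAN(generator=generator, discriminator=discriminator, latent_dim=LATENT_DIM)

D_LR = 0.0001  # discriminator learns slower so it doesn't overpower the generator
G_LR = 0.0003
dcgan.compile(g_optimizer=Adam(learning_rate=G_LR, beta_1=0.5),
              d_optimizer=Adam(learning_rate=D_LR, beta_1=0.5),
              loss_fn=BinaryCrossentropy())

N_EPOCHS = 50
```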
Now let's train the model: dcgan.fit(train_images, epochs=N_EPOCHS, callbacks=[DCGANMonitor()]). We already have train_images, and the DCGANMonitor we created before; we can leave its arguments at the defaults, or pass parameters if you want to modify the number of images. (The fit call is sketched after this passage.) Let's do the training; I hope this works without any errors. Okay, the training starts and I can see the loss getting updated here. Oh, 'array_img is not defined': that should be array_to_img. See, sometimes this happens. I'll fix it and run everything again; I don't know how many issues will come up, so I will also check that everything is set correctly. We have come a long way, creating multiple models and combining them all to train as one; this is great. Let's see whether it generates something.

Okay, now it is generating something. These are from random noise: without training anything, this is what you get; it looks like some kind of, what do you say, bacteria or something. Over a period of time we will see some improvement, so we have to wait a while and watch. I have been training this for a while now, and the loss has been exploding: the discriminator loss is going to around -300 and the generator loss to around 240. Images are still being produced, and they are improved compared to the start we saw, but I think this is due to the batch normalization we added in the generator. Just comment out that part alone and rerun the model. After doing that, the model gives only positive losses, and the training speed also increased. So let's rerun everything after commenting those lines out; I have started running all the cells, so let's wait for some time and see how it gets updated. We will compare the images from the first epoch to the last epoch for a general comparison of how the results improve over time.

So we have been training this for a while; I have trained the model for 50 epochs. This is the first generated image grid from the model: as you can see, it's just random noise, and it didn't produce any faces at all. This is the first phase, almost the same as the noise input we are feeding. Progressively, with each epoch, it keeps improving: here we can see some faces, and over time the eyes get better and the facial structure improves as well. Even at the 11th epoch we are getting some good faces overall, and when we reach more epochs you can clearly see the difference: we no longer get distorted pixels in the image, everything is smooth, and we can see the whole face. This one is good, and this one looks like a Tokyo Ghoul-style character, also good, with one red eye and one blue eye. Let's go down further: at the 31st epoch we are getting some different characters. For each generated image we get a variety of characters, and that is why we set the latent dimension to 100: if we set it to 10, the variety would be less, and at 130 or 150 you would get more variety in the images. The clarity of the images also keeps increasing over the epochs.
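For reference, the training call, together with the one change that fixed the exploding losses (a sketch):

```python
# train the DCGAN; the monitor plots a 5x5 grid of samples after every epoch
dcgan.fit(train_images, epochs=N_EPOCHS, callbacks=[DCGANMonitor()])

# the fix for the exploding losses: in the generator cell above, comment out
# the BatchNormalization lines in each upsampling block, e.g.
#
#   model.add(layers.Conv2DTranspose(256, (4, 4), strides=(2, 2), padding='same',
#                                    kernel_initializer=WEIGHT_INIT))
#   # model.add(layers.BatchNormalization())   # <- commented out
#   model.add(layers.ReLU())
```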
Let's go to the last few images. Here you can see the hair strands in the image, which is really great, and we have almost reached the final one. This picture is really good, and this one, and the facial structure on these as well. This is the final image, and as you can see, we could train further, but this is about the maximum we can get: for the past 20 epochs we have been getting pretty good results from DCGAN. If you try other GANs in the future, they will definitely improve on the image quality we are getting; we will see that later. And as you can see, just because of that one change in the model, we improved the training quality: so make sure you set the right parameters for everything in order to get better results, because even a slight mistake can give you a low-quality model.

Now, that's pretty much it for training. If you want to generate images from scratch, we have already done that part; that is the generation step, so I can copy it and build something from it. I'll copy that whole plotting function and modify it in a new cell, which I'll call 'generate new anime image'. Paste it, remove the parts we no longer need, create noise for only one image with the latent dimension of 100, and generate the image from the noise; this gives us the image. I'll also delete the loop here, index array_to_img with 0, and you could save the plots as well. Let's run this. Oh, I have still used self; this should be dcgan.generator, and the remaining self references also have to be removed. Okay, now you can clearly see the image. Each time you run this it gives you a new image, so you can check the quality as well; let's just keep trying until we get a good one. (A sketch of this final cell appears below.) This is somewhat good for 64x64; if you adjust the figure size it will be much clearer, so let me take that snippet as well. I updated the plot to a figure size of about 3x3 or 2x2; now we get a smaller image and we can see it clearly, without any distortion. Yeah, I think this is the ideal size. Let's run it a few times and see: we got some good faces before, and now we are getting faces that are somewhat messed up. This face is somewhat good: blonde hair, red and blue eyes, with a proper face. You can also generate random numbers yourself, or specify your own numbers across this dimension, and feed that as the input.

That's pretty much it, guys; this is how we can generate anime faces with DCGAN. We will be working on a few more GAN models with other datasets as well; if you want me to continue with anime datasets, please let me know in the comments. Apart from that, if you have any queries, just leave a comment below and I will definitely answer. If you liked this video, hit the like button, and don't forget to subscribe to the channel for more videos like this. See you guys in the next video.
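For reference, here is roughly what that final generation cell ended up as (a sketch; the names follow the earlier cells):

```python
# generate one brand-new anime face from a single noise vector
noise = tf.random.normal([1, 100])
fig = plt.figure(figsize=(3, 3))

g_img = dcgan.generator(noise)
g_img = (g_img * 127.5) + 127.5  # denormalize back to [0, 255]
g_img = g_img.numpy()

img = array_to_img(g_img[0])
plt.imshow(img)
plt.axis('off')
plt.show()
```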
Info
Channel: Hackers Realm
Views: 6,017
Keywords: anime face generation using dcgan, keras tensorflow, dcgan, gan, deep learning, anime face dataset, generate anime faces using dcgan data science project, machine learning, artificial intelligence, data science, project, python, hackers realm, anime, dcgan from scratch, AI generated anime faces, anime face generation, keras tensorflow tutorial, dcgan tutorial, dcgan tensorflow, gan tutorial tensorflow, python programming for image generation, generative adversarial network
Id: HxD-M-jTmEA
Length: 84min 46sec (5086 seconds)
Published: Mon Feb 20 2023