Batch Normalization (“batch norm”) explained

Video Statistics and Information

Captions
In this video, we'll be discussing batch normalization, otherwise known as batch norm, and how it applies to training an artificial neural network. We'll then see how to implement batch norm in code with Keras.

Before getting to the details about batch normalization, let's first quickly discuss regular normalization techniques. Generally speaking, when training a neural network, we want to normalize or standardize our data in some way ahead of time as part of the pre-processing step. This is the step where we prepare our data to get it ready for training. Normalization and standardization both have the same objective of transforming the data to put all the data points on the same scale. A typical normalization process consists of scaling the numerical data down to be on a scale from zero to one, and a typical standardization process consists of subtracting the mean of the dataset from each data point and then dividing the difference by the dataset's standard deviation. This forces the standardized data to take on a mean of zero and a standard deviation of one. In practice, this standardization process is often just referred to as normalization as well. In general, though, this all boils down to putting our data on some type of known or standard scale.
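To make these two pre-processing steps concrete, here is a minimal NumPy sketch of normalization and standardization, assuming a small made-up set of miles-driven values (the numbers are purely illustrative and not from the video):

```python
import numpy as np

# Made-up "miles driven over the last five years" values with a wide range
miles = np.array([100_000.0, 1_000.0, 25_000.0, 60_000.0])

# Normalization: scale the data down to the range [0, 1]
normalized = (miles - miles.min()) / (miles.max() - miles.min())

# Standardization: subtract the dataset's mean from each data point and
# divide by the dataset's standard deviation (result has mean 0, std 1)
standardized = (miles - miles.mean()) / miles.std()

print(normalized)    # all values now fall between 0 and 1
print(standardized)  # values now centered around 0
```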
So why do we do this? Well, if we didn't normalize our data in some way, we may have some numerical data points in our dataset that are very high and others that are very low. For example, say we have data on the number of miles individuals have driven a car over the last five years. We may have someone who has driven one hundred thousand miles total, and someone else who has only driven a thousand miles total. This data has a relatively wide range and isn't necessarily on the same scale. Additionally, each of the features for each of our samples could vary widely as well. If we have one feature corresponding to an individual's age, and another feature corresponding to the number of miles that individual has driven a car over the last five years, then again these two pieces of data, age and miles driven, will not be on the same scale.

The larger data points in these non-normalized datasets can cause instability in neural networks, because the relatively large inputs can cascade down through the layers in the network, which may cause imbalanced gradients and therefore the famous exploding gradient problem. We may cover this particular problem in another video, but for now, understand that this imbalanced, non-normalized data may cause problems with our network that make it drastically harder to train. Additionally, non-normalized data can significantly decrease our training speed. When we normalize our inputs, however, we put all of our data on the same scale, in an attempt to increase training speed as well as avoid the problem we just discussed, because we won't have this relatively wide range between data points any longer once we've normalized the data.

Okay, so this is good, but there's another problem that can arise even with normalized data. From our previous video on how a neural network learns, we know how the weights in our model become updated over each epoch during training via the process of stochastic gradient descent, or SGD. So what if, during training, one of the weights ends up becoming drastically larger than the other weights? Well, this large weight will then cause the output from its corresponding neuron to be extremely large, and this imbalance will again continue to cascade through the neural network, causing instability.

This is where batch normalization comes into play. Batch norm is applied to the layers you choose to apply it to within your network. When applying batch norm to a layer, the first thing batch norm does is normalize the output from the activation function. Recall from our video on activation functions that the output from a layer is passed to an activation function, which transforms the output in some way depending on the function itself, before being passed to the next layer as input. After normalizing the output from the activation function, batch norm then multiplies this normalized output by some arbitrary parameter and then adds another arbitrary parameter to the resulting product. This calculation with the two arbitrary parameters sets a new standard deviation and mean for the data. Of the four parameters involved (the mean, the standard deviation, and the two arbitrarily set parameters, commonly called gamma and beta), the two arbitrary parameters are trainable, meaning that they too will become optimized during the training process, while the mean and standard deviation are computed from each batch of data.

This process makes it so the weights within the network don't become imbalanced with extremely high or low values, since the normalization is included in the gradient process. The addition of batch norm to our model can greatly increase the speed at which training occurs and reduce the ability of outlying large weights to over-influence the training process.

So, when we spoke earlier about normalizing our input data in the pre-processing step before training occurs, we understood that this normalization happens to the data before it is passed to the input layer. Now, with batch norm, we can normalize the output from the activation functions for individual layers within our model as well. So we have normalized data coming in, and we also have normalized data within the model itself. Everything we just mentioned about the batch normalization process occurs on a per-batch basis, hence the name batch norm. These batches are determined by the batch size you set when you train your model, so if you're not yet familiar with training batches or batch size, check out my video that covers this topic.

Now that we have an understanding of batch norm, let's look at how we can add batch norm to a model in code using Keras. I'm here in my Jupyter notebook, and I've just copied the code for a model that we built in a previous video. We have a model with two hidden layers, with 16 and 32 nodes respectively, both using ReLU as their activation function, and then an output layer with two output categories using the softmax activation function. The only difference here is the line between the last hidden layer and the output layer. This is how you specify batch normalization in Keras: following the layer for which you want the activation output normalized, you specify a BatchNormalization layer, which is what we have here. To do this, you first need to import BatchNormalization from Keras, as shown in this cell. The only parameter I'm specifying here is the axis parameter, and that's just to specify the axis of the data that should be normalized, which is typically the features axis. There are several other parameters you can optionally specify, including two called beta_initializer and gamma_initializer. These are the initializers for the arbitrarily set parameters we mentioned when describing how batch norm works. They are set by default to zeros and ones by Keras, but you can optionally change them and set them here, along with several other optional parameters as well.
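To tie the per-batch calculation described above together, here is a rough NumPy sketch of what a batch norm layer computes for a single batch of activation outputs. The names gamma and beta stand for the two arbitrarily set, trainable parameters, and the small epsilon term (added to avoid dividing by zero) is a standard implementation detail not mentioned in the video:

```python
import numpy as np

def batch_norm(activations, gamma, beta, eps=1e-5):
    # Normalize the activation outputs using the mean and standard
    # deviation computed over the current batch
    mean = activations.mean(axis=0)
    std = activations.std(axis=0)
    normalized = (activations - mean) / (std + eps)
    # Multiply by gamma and add beta: the two trainable parameters
    # that set a new standard deviation and mean for the data
    return gamma * normalized + beta

# One illustrative batch of outputs from a layer with 3 nodes
batch = np.array([[0.2, 5.0, 1.3],
                  [0.4, 4.2, 0.9],
                  [0.1, 6.1, 1.7]])
out = batch_norm(batch, gamma=np.ones(3), beta=np.zeros(3))
```

And the model just described might be written roughly as follows in Keras. The layer sizes, activations, and placement of the BatchNormalization layer follow the description in the captions, while the input shape is an assumption, since the original dataset isn't shown here:

```python
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization

model = Sequential([
    Dense(16, input_shape=(1,), activation='relu'),  # input_shape is assumed
    Dense(32, activation='relu'),
    # Normalize the output of the last hidden layer; axis=1 is the features axis here.
    # beta_initializer='zeros' and gamma_initializer='ones' are the Keras defaults.
    BatchNormalization(axis=1),
    Dense(2, activation='softmax'),
])
model.summary()
```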
And that's really all there is to it for implementing batch norm in Keras. So, in addition to this implementation, I hope you also now understand what batch norm is, how it works, and why it makes sense to apply it to a neural network. I hope you found this video helpful. If you did, please like the video, subscribe, suggest, and comment. And thanks for watching!
Info
Channel: deeplizard
Views: 125,438
Rating: 4.9554157 out of 5
Keywords: Keras, deep learning, machine learning, artificial neural network, neural network, neural net, AI, artificial intelligence, Theano, Tensorflow, tutorial, Python, supervised learning, unsupervised learning, Sequential model, transfer learning, image classification, convolutional neural network, CNN, categorical crossentropy, relu, activation function, stochastic gradient descent, educational, education, fine-tune, data augmentation, autoencoders, clustering, batch normalization
Id: dXB-KQYkzNU
Length: 7min 31sec (451 seconds)
Published: Thu Jan 18 2018