PyTorch Crash Course - Getting Started with Deep Learning

Video Statistics and Information

Captions
Welcome everyone to this PyTorch crash course. This course should teach you everything you need to know to get started with PyTorch. I'll not only show you how to code a simple neural network, we'll also look at some of the underlying basics, because I really want to make sure you understand how the framework works. If you follow this tutorial, you should have a solid foundation and be able to apply the concepts to your own deep learning projects. The only prerequisite is some decent Python skills. I will not explain deep learning concepts like backpropagation or how a neural network works, so if you want to learn more about those, check out our deep learning explained series on our channel; I'll put the link in the description below together with some more helpful resources. Now, without further ado, let's get started.

First of all, if you want to install PyTorch on your machine, go to pytorch.org, select your configuration (operating system, package manager, and your CUDA version if you have GPU support, or CPU only), then grab the generated command and run it in your terminal. The easiest way to get started, though, is to use Google Colab, where you also get a free GPU, so in my opinion that is the simplest way to follow along. In Colab you can select Runtime and change the runtime type to GPU, like I did here.

Now let's go over what we will learn today. I divided this course into five chapters: first we look at tensor basics, then at autograd in PyTorch to compute gradients, then we learn what a typical training loop looks like with a PyTorch model, loss, and optimizer, then we build our first neural network, and finally we build a convolutional neural network. Along the way we also pick up a few other important concepts: how to integrate with NumPy, how to get GPU support, a linear regression example, the typical PyTorch training pipeline, datasets, DataLoader, transforms, how to evaluate a model, and how to save and load models. In my opinion, these are all the important concepts you need for a solid foundation.

First, let's talk about tensors, because everything in PyTorch is based on tensor operations. A tensor is a multi-dimensional matrix containing elements of a single data type. It is similar to a NumPy ndarray, but with GPU support. Let's learn how to create different tensors. For this we import torch, the PyTorch package; we don't have to install it because it comes pre-installed in Colab, which is pretty convenient. One function we can use is torch.empty with a size, which uses uninitialized values: torch.empty(1) holds a single value, torch.empty(3) holds three values in one dimension, (2, 3) gives us a matrix, and we can add even more dimensions. Then there is torch.rand, which initializes a tensor with random numbers between zero and one, and torch.zeros and torch.ones, which fill it with zeros or ones. If we run this and look at the different tensors, we see the one-element tensor, the three-element tensor, the 2x3 matrix, the random numbers, and the zeros.
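As a quick sketch, the creation functions described above look roughly like this (the variable names are just illustrative):

```python
import torch

empty1 = torch.empty(1)       # 1 uninitialized value
empty2x3 = torch.empty(2, 3)  # 2x3 matrix of uninitialized values
r = torch.rand(2, 3)          # uniform random values in [0, 1)
z = torch.zeros(2, 3)
o = torch.ones(2, 3)

print(empty1, empty2x3, r, z, o, sep="\n")
```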
We can check the size of a tensor either with the x.size() function or the x.shape attribute; in this example it is 5 by 3. To access a specific dimension we can say x.size(0) as a function argument or x.shape[0] as an index, which would print 5 here. We can check the data type with dtype; by default it is float32, but we can also request a specific data type when we create the tensor, for example dtype=torch.float16, and then the second tensor prints torch.float16. We can also construct a tensor from existing data, such as a list or a NumPy array, by calling torch.tensor with the data, and if we print it we see it as a tensor with that data type.

This next point is important: a tensor has a requires_grad argument, which is False by default. If we set it to True, we tell PyTorch that it will need to calculate gradients for this tensor, which we need later in the optimization step, so we use it for all variables in our model that we want to optimize. This will become clearer later, but for now just keep in mind that each tensor can have this argument. If we print such a tensor, we also see that requires_grad was set to True.

Now let's talk about operations we can do with tensors; again, this is similar to NumPy arrays. We can do addition, subtraction, multiplication, and division. Usually you just write x + y, and it is important to know that this does element-wise addition. We could also use the function torch.add with the two tensors, but as I said, the operators are what you normally use. There are also versions with a trailing underscore, like add_, which perform an in-place addition, so keep that in mind as well. If we run this and print x, y, and z after the addition, we see the element-wise result, and subtraction, multiplication, and division work the same way.

We can also do slicing to access parts of a tensor, again just like with a NumPy array. For example, x[:, 0] gives us all rows in column 0, x[1, :] gives us row 1 and all columns, and x[1, 1] gives us the single element at that position. If we only have one element, we can call .item() on it to get the actual Python float instead of a tensor: x[1, 1] prints a tensor, while x[1, 1].item() prints only the float value. Then we can reshape a tensor by calling x.view(), which returns a new tensor: if x is 4 by 4 and we call x.view(16), the result has size 16. We can also pass -1 for one dimension, for example view(-1, 8), and PyTorch will automatically determine the correct size; in this case it needs 2 so that we end up with 16 values, and indeed the last tensor is 2 by 8.
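A short sketch of these operations, slicing, and view, with illustrative tensors:

```python
import torch

x = torch.rand(2, 3)
y = torch.rand(2, 3)

# element-wise operations
z = x + y              # same as torch.add(x, y)
y.add_(x)              # trailing underscore = in-place addition

# slicing, just like NumPy
print(x[:, 0])         # all rows, column 0
print(x[1, :])         # row 1, all columns
print(x[1, 1].item())  # .item() returns the Python float of a 1-element tensor

# reshaping with view
a = torch.rand(4, 4)
b = a.view(16)         # flatten to 16 values
c = a.view(-1, 8)      # -1 lets PyTorch infer this dimension (here: 2)
print(b.shape, c.shape)
```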
Next, let's learn how to convert a tensor to a NumPy array and vice versa. If we have a tensor and want it as a NumPy array, we simply call .numpy() on it; if we check the data type, we see it is now a NumPy ndarray. But we have to be careful here: if the tensor is on the CPU, both objects share the same memory location, so if we change one of them, for example with an in-place addition, and then print both, we see that both the tensor and the NumPy array were modified. For the other direction we can use either torch.from_numpy or again torch.tensor, and there is a big difference: torch.from_numpy shares the same memory, while torch.tensor creates an actual copy. If we run this and then modify the array, the tensor created with torch.tensor is not modified, but the one created with torch.from_numpy is. So keep in mind that from_numpy shares the same memory.

Now let's talk about GPU support. By default all tensors are created on the CPU, but we can move them to the GPU if one is available, or create them directly on the GPU. Usually you will see a line like this in the code: we check if a GPU is available with torch.cuda.is_available() and then create a device with torch.device, named "cuda" if the GPU is available and otherwise simply "cpu". If you have multiple GPUs, you can also say "cuda:0" to move data to the first GPU, "cuda:1" for the second, and so on. Once we have the device, we can move tensors to it with .to(device); if the GPU is available, this moves them to the CUDA device. You can also do it explicitly with x.to("cpu") or x.to("cuda"), or you can create tensors on the GPU right away by passing device=device to the creation function. That is usually more efficient, because otherwise the tensor is first created on the CPU and then moved to the GPU, so if you know you need it on the GPU, create it there directly. That's how to work with tensors.
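Here is a minimal sketch of the NumPy interop and the usual device boilerplate described above (variable names are illustrative):

```python
import torch
import numpy as np

# tensor -> NumPy: shares memory when the tensor is on the CPU
a = torch.ones(5)
b = a.numpy()
a.add_(1)                    # modifies both a and b

# NumPy -> tensor
arr = np.ones(5)
c = torch.from_numpy(arr)    # shares memory with arr
d = torch.tensor(arr)        # creates a copy

# the usual device setup
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.rand(2, 2).to(device)        # move an existing tensor
y = torch.rand(2, 2, device=device)    # create it on the device right away
```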
Now let's move on to autograd. The autograd package provides automatic differentiation for all operations on tensors. Generally speaking, torch.autograd is an engine for computing the vector-Jacobian product: it computes partial derivatives while applying the chain rule. You don't have to worry about the details too much, because all of this is taken care of for us, but autograd is an essential part of PyTorch, because training and optimizing a neural network involves computing gradients, so I want to make sure you understand how this works.

Whenever we want gradients to be calculated, as I mentioned before, we have to set requires_grad=True. Let's look at an example. Here we want the gradients with respect to x, so when we create this tensor we set requires_grad=True. Now whenever we do operations with this tensor, those operations are tracked on a so-called computational graph. In this case we do an addition and store the result in another tensor y, and if we print x and y we notice that y now has a grad_fn attribute, the gradient function, here called AddBackward because we did an addition. Later we use a technique called backpropagation: first we do the calculations in the forward direction, and then we go in the backward direction and calculate the gradients; that's why it is called AddBackward. If you want to learn more about how backpropagation works, I put a link in the description.

We can access this gradient function, and if we do more operations, for example a multiplication and then a mean, we see a gradient function after each operation, here MulBackward and then MeanBackward, so all these operations are tracked. Now if we want the gradient, the only thing we have to do is call backward() on the last tensor, which calculates its gradient with respect to x. After that, x has a .grad attribute: if we print x.grad before, it is None, and after calling backward() and doing the backpropagation, it contains the gradient of x. So this is how gradient calculation works: we set requires_grad=True, all operations are tracked (usually this is the forward pass through our network), at the end we calculate a loss, for example the mean squared error, we call loss.backward(), and then we have the gradient of the loss with respect to the weights.

One thing is very important here: whenever we call backward(), the gradient for this tensor is accumulated into the .grad attribute. Usually during training we have a for loop, for epoch in range(num_epochs), and we call backward() in each iteration, so the accumulation would change the results. We therefore have to make sure to empty the gradients in each iteration, for example by calling optimizer.zero_grad() or grad.zero_(). This is very important to remember; later it will be a no-brainer because I'll show you what a typical training pipeline looks like, but for now keep in mind that gradients accumulate and we want to empty them in each iteration.

As I said, operations on a tensor with requires_grad=True are tracked, but sometimes we don't want that, for example when we update the weights inside the training loop, or after training when we do the evaluation; these operations should not be part of the gradient computation. There are a few ways to prevent this: we can use requires_grad_(), we can use detach(), or we can wrap the code in with torch.no_grad(). Let's look at all of them. requires_grad_() changes the flag in place: if we create a tensor, requires_grad is False by default and the gradient function is None; if we set the flag to True and then do operations, the flag is True and we get a gradient function, and in the same way we can change it back to False. detach() creates a new tensor with requires_grad=False, so it creates a copy and is a second way of preventing gradient tracking. The third way is to wrap the code in with torch.no_grad(): here we have a tensor with requires_grad=True, then inside with torch.no_grad() we do calculations and check for a gradient function, and there is none. You will see this very often, for example when evaluating the model after training.
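A compact sketch of the autograd behaviour described above; the concrete values and names (x, y, z) are just for illustration:

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x + 2                    # y.grad_fn is AddBackward0
z = (y * y * 3).mean()       # z.grad_fn is MeanBackward0

z.backward()                 # computes dz/dx via backpropagation
print(x.grad)                # was None before backward()

# three ways to stop gradient tracking
x.requires_grad_(False)      # 1) change the flag in place
x_det = x.detach()           # 2) new tensor with requires_grad=False
with torch.no_grad():        # 3) nothing inside this block is tracked
    w = x + 2
    print(w.requires_grad)   # False
```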
Now let's look at an example of how to do linear regression with the autograd package. In linear regression we have a very simple function, f(x) = w * x + b, with weights w and a bias b. In our example we want to approximate the simple function f(x) = 2 * x, and we ignore the bias, so b is zero. Let's see how to set this up and train it in PyTorch. First we need training samples X and Y: for X we have one tensor, and for Y we have a tensor where the values are 2 times X. Then we need a tensor for the weight w, which we initialize with zero, and since we later need the gradients with respect to the weight, we set requires_grad=True, so all operations we do with w are tracked on the graph. Then we define a function called forward, which does the forward pass and gives us the model output; in our example this is simply w * x. We also define a function to calculate the loss, and for linear regression we use the mean squared error, which we compute as the mean of the squared difference between the predictions and the actual values. First let's create a test sample, 5, and do the forward pass with it: in the beginning the prediction is zero, because our weight is zero.

Now we want to find the right weight, so we train the model. We define a learning rate and the number of epochs, and then for each epoch we do the following. First we do the forward pass to get the predictions. Then we calculate the loss from the actual values and the predictions. Since we want the gradient of the loss with respect to w, we only have to call l.backward(); that is all we need, and afterwards w has its .grad attribute filled in. Next we update the weight; we wrap this in with torch.no_grad() because we don't want this calculation to be tracked, and the update is simply the gradient descent formula: w -= learning_rate * w.grad. And, as I said, we have to remember to empty the gradients before the next iteration, so we call w.grad.zero_(). We can also print the progress, and after training we do the forward pass with the test sample again and print it. If we run this, it finishes very quickly, and you can see that the weight converges to 2 within the first epochs and the final prediction is correct. This is how you can use the autograd package to train a linear regression model.
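Putting the example together, a sketch of this manual training loop could look like the following; the exact data values, learning rate, and number of epochs are illustrative choices, not necessarily the ones used in the video:

```python
import torch

# training data for f(x) = 2 * x
X = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8], dtype=torch.float32)
Y = torch.tensor([2, 4, 6, 8, 10, 12, 14, 16], dtype=torch.float32)

w = torch.tensor(0.0, requires_grad=True)   # the weight we want to learn

def forward(x):
    return w * x

def loss(y, y_pred):
    return ((y_pred - y) ** 2).mean()        # mean squared error

learning_rate = 0.01
n_epochs = 100

for epoch in range(n_epochs):
    y_pred = forward(X)                      # forward pass
    l = loss(Y, y_pred)
    l.backward()                             # dl/dw, accumulated into w.grad

    with torch.no_grad():                    # the update itself is not tracked
        w -= learning_rate * w.grad
    w.grad.zero_()                           # empty the gradient for the next epoch

    if (epoch + 1) % 10 == 0:
        print(f"epoch {epoch + 1}: w = {w.item():.3f}, loss = {l.item():.5f}")

print(f"prediction after training: f(5) = {forward(5).item():.3f}")
```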
Now let's go one step further and learn about model, loss, and optimizer, and what a typical PyTorch pipeline looks like. In the previous example we did a lot of these steps ourselves: we defined the weight tensor, the forward function that computes the model output, and the loss, and in the training loop we wrote the update step by hand. We actually don't have to do any of this ourselves, because we can use a built-in model class and loss function, and a built-in optimizer for the update. A typical PyTorch pipeline looks like this: first we design the model, meaning we define the input and output sizes and the forward pass with the different layers; then we construct the loss and the optimizer; and then we run the training loop, where in each iteration we do the forward pass to compute the predictions and the loss, the backward pass to compute the gradients, and finally the weight update.

Let's redo the linear regression example the PyTorch way. We again import torch, and also torch.nn, the neural network module. Then we define the training samples, and here we have to be careful, because PyTorch model classes expect the data in a specific shape: we need an inner list for each sample, so the shape is (n_samples, n_features). If we unpack x.shape and print it, we see 8 samples and 1 feature, so the shape is 8 by 1, and the test sample 5 also has to be wrapped in a list.

The first step is to design the model, so we create a class called LinearRegression. A PyTorch model class always inherits from nn.Module, and we have to implement two functions, __init__ and forward. In __init__ we must not forget to call the super __init__ as well, and then we define all the layers we want to apply in our model; usually these are the layers of our neural network, but for linear regression we only need one layer, nn.Linear, which computes exactly w * x + b. So in __init__ we define the layers, and in forward we apply them: here we simply return self.lin(x), where forward always receives the input tensor x. After defining the class we create a model instance, LinearRegression with the input size and output size, and both are just the number of features, so 1 input and 1 output for our example. If we print the prediction before training, it won't be correct yet.

The second step is to define the loss and the optimizer using the built-in classes. For the loss we use nn.MSELoss, which is again the mean squared error, and for the optimizer we use torch.optim.SGD, which stands for stochastic gradient descent; there are more optimizers available, so check the documentation. The optimizer always gets model.parameters(), the weights or parameters that should be optimized, and the learning rate, which is a hyperparameter you can play around with.

Now we do the last step, the training loop: for each epoch we do three things. First the forward pass, and for this we only have to call the model; calling the model internally calls forward and returns the predictions. Then we calculate the loss from the actual values and the predictions. Then we call loss.backward() to calculate the gradients, call optimizer.step(), and remember to empty the gradients with optimizer.zero_grad(). This is what a typical training loop looks like: forward pass, loss, backward pass, optimizer step, and zero the gradients, where the optimizer step updates the model parameters. We can also print the progress, for example the current model parameters w and b, and after training we call the model again on the test sample to compute the prediction.
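A sketch of the same example the PyTorch way, following the pipeline above; the hyperparameters are again illustrative:

```python
import torch
import torch.nn as nn

# samples need shape (n_samples, n_features), here (8, 1)
X = torch.tensor([[1], [2], [3], [4], [5], [6], [7], [8]], dtype=torch.float32)
Y = torch.tensor([[2], [4], [6], [8], [10], [12], [14], [16]], dtype=torch.float32)
X_test = torch.tensor([[5]], dtype=torch.float32)

class LinearRegression(nn.Module):
    def __init__(self, input_dim, output_dim):
        super().__init__()
        self.lin = nn.Linear(input_dim, output_dim)   # computes w * x + b

    def forward(self, x):
        return self.lin(x)

model = LinearRegression(input_dim=1, output_dim=1)

criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(100):
    y_pred = model(X)                 # forward pass (calls model.forward)
    loss = criterion(y_pred, Y)
    loss.backward()                   # backward pass
    optimizer.step()                  # update the parameters
    optimizer.zero_grad()             # empty gradients for the next epoch

    if (epoch + 1) % 10 == 0:
        w, b = model.parameters()
        print(f"epoch {epoch + 1}: w = {w.item():.3f}, loss = {loss.item():.5f}")

print(f"prediction after training: f(5) = {model(X_test).item():.3f}")
```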
If we run this, you see that the prediction before training is not zero this time, because nn.Linear is initialized with random values, and after 100 epochs the prediction is close to 10, around 10.1 in my run. So this is how a training loop works the PyTorch way. Now let's look at a slightly more complex example and learn how to train our first neural network.

To create our first neural network we follow the exact same pipeline: first we design the model, which now means defining the different neural network layers, then we define the loss and optimizer, and then we run the training loop with the same steps. If you have understood this approach, you're already halfway there, and the next steps should be easy to follow. That's also why I want to add a few more concepts in this chapter: how to leverage the GPU, how to use the built-in datasets and DataLoader, how to use built-in transforms, and how to evaluate the model after training.

Let's look at the code. Again we import torch and torch.nn, and now we also need torchvision and torchvision.transforms, as well as matplotlib. As shown before, we first define the device: torch.device("cuda") if torch.cuda.is_available() and otherwise the CPU. Then we define the hyperparameters; I think it's good practice to define them somewhere at the top of your file or in a separate configuration file. In this case we define the input size: we use the MNIST dataset, which consists of images of shape 28 by 28, so flattened this is 784. We also define a hidden size, the number of classes (10, because there are 10 different digits), the number of epochs, a batch size, and the learning rate.

Then we create the datasets, one for training and one for testing, using the built-in torchvision.datasets.MNIST class. It needs the root directory where the data will be stored, download=True so it is downloaded if it's not there yet, train=True for the training set and train=False for the test set, and we can also specify a built-in transform right away. In this example we simply use transforms.ToTensor(), because the raw data could be, for example, a PIL image or a NumPy array, and this converts it to a tensor directly. That's how to use built-in transforms; in the next example we'll see how to add a different one.

After defining the datasets we always define the data loaders, again one for training and one for testing, using the built-in DataLoader class. A DataLoader provides an optimized way to iterate over a dataset: it always gets a dataset (the train loader gets the training set, the test loader the test set), a batch size, and we can set shuffle=True for training. Then we can iterate over it, either in a for loop or, just to show you an example, by converting it to an iterator with iter() and asking for the next element, which gives us one batch of the dataset. We can unpack the batch into x, the data, and y, the targets, and with a bit of simple matplotlib code we can plot the first examples in the batch; these are the MNIST digits.
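A sketch of this setup; the hyperparameter values (for example hidden_size = 500) are illustrative, not necessarily the ones used in the video:

```python
import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# hyperparameters (illustrative values)
input_size = 28 * 28      # MNIST images are 28x28
hidden_size = 500
num_classes = 10
num_epochs = 2
batch_size = 100
learning_rate = 0.001

train_dataset = torchvision.datasets.MNIST(root="./data", train=True,
                                            transform=transforms.ToTensor(),
                                            download=True)
test_dataset = torchvision.datasets.MNIST(root="./data", train=False,
                                           transform=transforms.ToTensor())

train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)

# grab one batch to inspect it
examples = iter(train_loader)
samples, labels = next(examples)
print(samples.shape, labels.shape)   # e.g. torch.Size([100, 1, 28, 28]) torch.Size([100])
```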
Now let's create our pipeline. First we design our network, the model. This is a class NeuralNet that inherits from nn.Module, and again we implement __init__ and forward. forward always gets self and x; __init__ only strictly needs self, so we are flexible here and could leave out the extra configuration, but in our case we pass the input size, hidden size, and number of classes. Again, don't forget to call the super __init__, and then we define all the layers we want in the model: first one linear layer, then an activation function (there are many available in nn, for example ReLU, which is very common, or Softmax), and at the end another linear layer. Notice the input and output sizes: the first layer maps the input size to the hidden size, and the second layer maps the hidden size to the number of classes, which is 10. In the forward pass we apply those layers in order: the first linear layer, the ReLU activation, and the second linear layer, and then we simply return the result. Be careful here: we do not want an activation function, and in particular no softmax, at the end; the reason becomes clear when we define the loss.

After defining the class we create an instance, NeuralNet with the input size, hidden size, and number of classes. Now we want to leverage the GPU; the device is "cuda" because I selected a GPU runtime, so we push the model to the device, and later we do the same with the tensors.

Then we define the loss and the optimizer. This is a multi-class classification problem, so we usually use nn.CrossEntropyLoss, and for the optimizer we could use SGD again, but here we use Adam, which is also a very common optimizer; it again gets model.parameters() and the learning rate. If we check the documentation, we see that the cross entropy loss expects the raw output values, which is why we don't put an activation function at the end of the forward pass; always check the documentation for what a loss expects as input.

Now we write the training loop we've seen before, but this time with two for loops, which is the usual pattern: the outer loop iterates over the number of epochs, and the inner loop iterates over the training loader, which, as I said, is an optimized way to iterate over all the batches. We unpack each batch into the images x and the labels y, and we need to reshape the images to the flattened input size; for this we can use tensor.reshape. Also remember to push both the images and the labels to the device, because we want to use the GPU; if you move the model to the device but forget to move the tensors, your code will crash, so make sure to move the tensors as well. Then we do the forward pass and the loss calculation: we call the model, then we call the criterion (that is the usual name for the loss object) with the outputs, i.e. the predictions, and the actual labels, and that gives us the loss. Then we call loss.backward(), optimizer.step(), and optimizer.zero_grad(), the same steps as always.
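Putting the model, loss, optimizer, and training loop together, a sketch could look like this (it assumes the device, hyperparameters, and train_loader from the setup above):

```python
import torch
import torch.nn as nn

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super().__init__()
        self.l1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.l2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.l1(x)
        out = self.relu(out)
        return self.l2(out)   # raw logits; no softmax, CrossEntropyLoss expects raw values

model = NeuralNet(input_size, hidden_size, num_classes).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for epoch in range(num_epochs):
    for images, labels in train_loader:
        images = images.reshape(-1, 28 * 28).to(device)   # flatten and move to the device
        labels = labels.to(device)

        outputs = model(images)             # forward pass
        loss = criterion(outputs, labels)

        loss.backward()                     # backward pass
        optimizer.step()                    # update the weights
        optimizer.zero_grad()               # empty gradients for the next batch
    print(f"epoch {epoch + 1}/{num_epochs}, loss = {loss.item():.4f}")
```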
Now we can run this, and it does the training and prints the steps. Once training is done we can see that the loss slowly decreased, so this seems to be working; of course you can increase the number of epochs and train even longer. Now let's see how to test the model after training, in other words how to evaluate it. For this we usually wrap the code in with torch.no_grad(), because we no longer need gradient tracking, and then we iterate over the test loader: for images, labels in test_loader. Again make sure to reshape the images and push them and the labels to the device. Then we call the model to get the outputs, and to get the actual predicted classes (remember the outputs are just the raw values) we call torch.max, which gives us the maximum value and its index, and the index is the predicted class. Then we compare predicted == labels, take the sum, and add it to the number of correct predictions, and finally we divide by the total number of samples to get the accuracy. If we run this and print the result, the accuracy of the network on the 10,000 test images is 97%, so this is pretty accurate and worked quite well.
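A sketch of that evaluation loop, assuming the model, test_loader, and device from above:

```python
# evaluation: gradient tracking is not needed here
with torch.no_grad():
    n_correct, n_samples = 0, 0
    for images, labels in test_loader:
        images = images.reshape(-1, 28 * 28).to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs, 1)   # index of the largest logit = predicted class
        n_samples += labels.shape[0]
        n_correct += (predicted == labels).sum().item()

print(f"accuracy on the 10,000 test images: {100.0 * n_correct / n_samples:.2f} %")
```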
Now let's learn how to create a convolutional neural network, or CNN for short. In the previous example we used a fully connected neural network with one hidden layer, so two linear layers with a ReLU activation in between. Now we learn about convolutional layers and max pooling layers, and also how to save and load the model. The first part is very similar: we have our imports, we define the device and the hyperparameters, and then we define the transforms we want to apply. This time we want two of them, so we put them into transforms.Compose as a list: first we convert the data to a tensor, and then we normalize the images to be in the range -1 to 1, using 0.5 as the mean and 0.5 as the standard deviation for all three color channels.

In this example we use the built-in CIFAR-10 dataset, again from torchvision.datasets, and we specify the root, train, download, and now the transform. By the way, if you want to use your own data, you can use datasets.ImageFolder and point it at a folder where you prepared your own images; the rest of the approach stays exactly the same. After the datasets we define the training and test loaders, again with the built-in DataLoader, the dataset, the batch size, and shuffle=True for training. Then we define the class names, and with a little helper function, an iterator, and a call to next we can plot a batch. The first time we run this it downloads the dataset, and then the plot shows what the CIFAR-10 images look like: now we are dealing with color images.

Now we define our convolutional neural network. Again we define a model class, ConvNet, which inherits from nn.Module, we implement __init__ and forward, and in __init__ we call the super __init__ and define all the layers we want. This time we apply a few convolutional layers with max pooling and ReLU activations in between, and at the end of the network two linear layers, because in the end we still want to do classification. If we hover over the nn.Conv2d layer and look at the documentation, we see it takes the in_channels, the out_channels, and the kernel_size, plus some optional parameters, but these three are the important ones. The first layer has 3 input channels because the images have three color channels, red, green, and blue, so that is fixed; with the other parameters you can play around, it's only important that the output size of one layer matches the input size of the next. So if one convolution has 32 output channels, the next needs 32 input channels, and after all the convolutions we need to know the flattened size for the linear layers; I'll show you how to determine that. We also define max pooling with a 2 by 2 window. If you look up max pooling, you'll see it is a very simple operation that reduces the image size: for each 2 by 2 window it simply takes the largest value and writes it into the output, then moves on to the next window, and so on. It is a very popular layer in convolutional neural networks, and we define it in __init__ as well.

In forward we apply all the layers. The input shape of x is (N, 3, 32, 32): the batch size, three color channels, and the 32 by 32 image size. If you are not sure what the shapes look like, what I like to do is print x.shape after each step, run it, and look at the output. If you do that, you see the first convolution produces its output, then max pooling with a 2 by 2 window roughly cuts the spatial size in half, giving 32 by 15 by 15; the next convolution and pooling follow, and after the last convolution we have 64 channels with an image size of only 4 by 4. So when we flatten this, the first linear layer needs 64 * 4 * 4 input features, which we map to 64, and the last layer then maps 64 to the final output.
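A sketch of such a ConvNet; the layer sizes follow the shapes mentioned above (32 output channels, then 64, and a 4x4 feature map before flattening), but treat them as one reasonable configuration rather than the exact architecture from the video:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        # CIFAR-10 images: 3 color channels, 32x32 pixels
        self.conv1 = nn.Conv2d(3, 32, 3)    # in_channels, out_channels, kernel_size
        self.pool = nn.MaxPool2d(2, 2)      # 2x2 max pooling halves the spatial size
        self.conv2 = nn.Conv2d(32, 64, 3)
        self.conv3 = nn.Conv2d(64, 64, 3)
        self.fc1 = nn.Linear(64 * 4 * 4, 64)
        self.fc2 = nn.Linear(64, 10)        # 10 output classes

    def forward(self, x):
        # x: (batch, 3, 32, 32)
        x = self.pool(F.relu(self.conv1(x)))   # -> (batch, 32, 15, 15)
        x = self.pool(F.relu(self.conv2(x)))   # -> (batch, 64, 6, 6)
        x = F.relu(self.conv3(x))              # -> (batch, 64, 4, 4)
        x = torch.flatten(x, 1)                # flatten everything except the batch dim
        x = F.relu(self.fc1(x))
        return self.fc2(x)                     # raw logits, no softmax
```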
The 10 outputs at the end are fixed, because we have 10 output classes, and this is what our convolutional neural network class looks like. One thing to note is that in this case we use F.relu and apply the activation function directly in forward; alternatively, as in the last example, we could define it as a layer in __init__ and then call it there. I think it's just a matter of personal preference, both are perfectly fine; here we apply them directly and don't put them in __init__. So this is how we can implement a convolutional neural network.

Then again we create our model and push it to the device, we create the cross entropy loss and the optimizer, exactly as before, and we iterate over the number of epochs and over the training loader, push the batches to the device, do the forward pass by calling the model, calculate the loss, and then call optimizer.zero_grad(), loss.backward(), and optimizer.step(). You could also order these the other way around, as before; it's only important that before or after each iteration you empty the gradients again. In this example I also want to calculate a so-called running loss: I initialize it with zero, add loss.item() for each batch, and at the end divide it by the total number of steps, which gives us the average loss for the epoch. Apart from that, the rest of the code is exactly the same as before; the only difference is that we use a different model.

Now let's run this and do the training. After training, to save the model we call torch.save. We could pass in the whole model and the path where we want to store it, and that would work, but usually you pass model.state_dict(), which stores only the dictionary with the trained parameters. That is how we save the model; now let's wait until training is finished. All right, training is finished and we can see that the loss decreased, so this worked, and if we look at the folder we see the file cnn.pth, the saved model state dict; .pth is just a very common file extension for PyTorch models.

Now we want to load the model again. Since we only saved the state dict and not the entire model, we have to create a new model instance, so we say loaded_model = ConvNet(). Then we call loaded_model.load_state_dict(), and be careful: this does not take the path, it takes the loaded object, so we pass torch.load(path). Then we push the loaded model to the device, since we want to use the GPU again, and we also call loaded_model.eval(). This sets some internal configurations that are better for evaluation than for training; if you hover over it, you can see it affects layers like dropout or batch norm, so just remember to set a model to eval mode when you are not training it. Then we do the evaluation, the same as before: with torch.no_grad(), we iterate over the test loader, push the data to the device, call the model, call torch.max, compare with the labels, and take the sum. I do this once with the model from the last training step and once with the loaded model, and I print both accuracies to show that they are the same.
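A sketch of saving and loading the state dict, assuming the trained model, the ConvNet class, and the device from above; the file name cnn.pth is just the one mentioned here:

```python
import torch

PATH = "cnn.pth"

# save only the learned parameters (the usual, recommended way)
torch.save(model.state_dict(), PATH)

# load: create a new model instance first, then load the state dict
loaded_model = ConvNet()
loaded_model.load_state_dict(torch.load(PATH))   # takes the loaded object, not the path
loaded_model.to(device)
loaded_model.eval()   # switch layers like dropout/batchnorm to evaluation mode
```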
If we run this and evaluate both the model and the loaded model, you see they give the exact same accuracy, 70.71%. That is not perfect yet, but it works, and of course you can play around with the number of epochs, the hyperparameters, or the architecture and train it longer; I encourage you to improve it yourself. But now you also know how to save and load a model, and these are all the important steps I wanted to show you in this crash course. I hope you really enjoyed it and found it helpful. If you have any questions, let me know in the comments. Also, if you want to learn more about the underlying deep learning concepts, have a look at our deep learning explained series here on YouTube, and if I went over a few of these concepts too quickly and you want a slower approach, I also have a more than four hour long PyTorch course on YouTube. I will put the links in the description, so feel free to check those out, and I hope to see you in the next one. Bye!
Info
Channel: AssemblyAI
Views: 49,843
Id: OIenNRt2bjg
Length: 49min 54sec (2994 seconds)
Published: Sat Jul 09 2022