Transfer Learning for Image Classification with PyTorch & Python Tutorial | Traffic Sign Recognition

Video Statistics and Information

Captions
In this video we are going to learn how you can create a deep learning model that recognizes traffic signs from images. This is really important, especially when you're building, for example, a self-driving car, because you would first want to detect the traffic signs and then classify or recognize them. One of the largest datasets we have to date is the German Traffic Sign Recognition Benchmark (GTSRB). Let me go to the website of the dataset: it contains over 50,000 annotated images in total, covering more than 40 different traffic signs, so we have a single-image, multi-class classification problem, and here are some examples of the traffic signs. Given this amazing dataset, let's go ahead and download it, build a PyTorch model using a pre-trained ResNet, learn how to fine-tune it or do some transfer learning with it, and after that see how we can predict which traffic sign is in an image.

The first thing I'm going to do is open a Google Colab notebook and check that the runtime is Python 3 and GPU acceleration is enabled. My notebook is already connected, so I'm going to check which GPU card we have with nvidia-smi: it's a P100, which is great. Next I'm going to copy and paste a lot of imports; we're basically importing torch, torchvision, NumPy, and pandas, and doing some plot setup. Then I'm going to download the dataset itself using wget (the command will be available in the notebook that I'm going to link in the description). While this is downloading, I'm going to paste in the command that unzips the dataset; the interesting thing here is that I'm using the -qq flag, which suppresses the verbose output of the unzip command. If I check the data, we have the zip file, which is 263 megabytes, and the extracted dataset: inside the benchmark folder we have Final_Training, then Images, and each subfolder in there is a bunch of images that belong to a single traffic sign. So let me go ahead and get all the traffic sign folders (I'm going to use autocomplete here: Final_Training/Images). Now I can check the number of traffic signs we have: it's 43.

Something interesting about this dataset in particular is that the images are stored in the, at least for me, unusual PPM format, but OpenCV handles this format very well. I'm going to do some exploration of the dataset; of course, we have a lot of images. I'm going to paste in two helper functions. The first one is show_image, which uses OpenCV to read the image, convert it to RGB format, and show it using matplotlib. The next one is load_image, which does roughly the same thing but accepts an optional parameter, set to True by default, that resizes the image into a square, and I'm going to show you why this is useful in a bit.
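Roughly, those two helpers might look like this; this is a minimal sketch rather than the exact notebook code, and the 64x64 resize target is my assumption:

import cv2
import matplotlib.pyplot as plt

def show_image(img_path):
    # OpenCV loads images as BGR; convert to RGB before plotting
    img = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
    plt.imshow(img)
    plt.axis('off')
    plt.show()

def load_image(img_path, resize=True):
    # load as RGB and optionally resize to a fixed square,
    # since the GTSRB images come in varying sizes
    img = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
    if resize:
        img = cv2.resize(img, (64, 64), interpolation=cv2.INTER_AREA)
    return img

Resizing to a common square is what lets us stack samples from different signs into one tensor later.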
One good way to have a look at a couple of the images is to draw them in a grid, so I'm going to define a function called show_sign_grid that accepts a list of image paths. In here I load all of the images, one for each path. Then I convert those images into a single tensor; recall that this is actually a list of NumPy arrays, because each image loaded with OpenCV is just a NumPy array, so we're going to merge all of them into a single tensor. Next I permute the dimensions of this tensor so we can use torchvision.utils.make_grid to create one grid image out of all those images; I'm basically saying that I want the color channels to come first, followed by the height and the width. Then I build the grid image using torchvision.utils.make_grid, which accepts the tensor with the images and the number of images per row, nrow; I'm passing in 11. Next I set the figure size to be slightly larger than the default, 16 by 8, and show the image itself, but again I need to permute the channels back so matplotlib is able to show it.

The next thing I'm going to show you is how you can sample an image from each traffic sign class: for each train folder in train_folders, I use glob to list all the files in it and numpy.random.choice to pick a single image from that list. Then I call show_sign_grid(sample_images), and if we let this run (I reduced the figure size to 14 by 7), you can see a grid containing a sample of each traffic sign. Notice that we have a lot of speed limit signs in the first row; we have the classic priority road, stop, give way, no entry, and other traffic signs that you should be able to recognize, every one of them actually. This basically ends the exploration step for our dataset.
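Here is a sketch of show_sign_grid under those assumptions; it relies on the load_image helper above so all images share the same square size, and the .ppm glob pattern is an assumption:

import numpy as np
import torch
import torchvision
from glob import glob

def show_sign_grid(image_paths):
    images = [load_image(path) for path in image_paths]
    images = torch.as_tensor(np.array(images))
    # make_grid expects N x C x H x W; OpenCV gives us N x H x W x C
    images = images.permute(0, 3, 1, 2)
    grid_img = torchvision.utils.make_grid(images, nrow=11)
    plt.figure(figsize=(16, 8))
    # permute back to H x W x C for matplotlib
    plt.imshow(grid_img.permute(1, 2, 0))
    plt.axis('off')
    plt.show()

# one random sample per traffic sign class
sample_images = [np.random.choice(glob(f'{folder}/*.ppm')) for folder in train_folders]
show_sign_grid(sample_images)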
The next thing we're going to do is build a dataset based on this original dataset. This might sound a bit strange, but the reason for using a subset is that the full dataset is very, very imbalanced. So I'm going to start with a simple problem, and then you can try to expand on it. The simple problem is to build a dataset that contains only four different signs: priority road, give way, stop, and no entry. In here I'm going to continue with building that dataset; I've already entered the class names and the class indices, and the indices correspond to the positions of those classes in the list of train folders.

Let me begin by creating a new folder that we're going to call data, in which I'm going to create a split for the train data, the validation data, and the test data. We're going to use the train and validation data to train and evaluate our model, and then we're going to have a look at how well the model does on the test data. I'll start by making the folders: data/train (and I always pass exist_ok=True, which is fine), then one for the validation data and one for the test data, and then for each class name I create a subfolder inside each of those. This is a standard format that you can use when you're training an image classifier: there is a helper class called ImageFolder which is able to create a dataset from a folder of images if they follow this format, and we're going to see how to do that in a moment. So for each class in class_names I create the corresponding folders. Ah, class_names is not defined, I have to run that cell first. Okay, if we have a look at the directory tree, we have the train, test, and validation subfolders, and for each class we have a folder inside them. Of course, those subfolders are currently empty, so we still have to copy the images into them.

The next step is to copy the images to the correct subfolders. I'm going to paste in the code for that, because it's a bit verbose. For each of the class indices we picked, I take all the images in that class using glob again and convert the list into a NumPy array. I also take the class name corresponding to this class index and print it out. Then I shuffle the images, because they might be in some strange order from when they were created; I shuffle them just in case. Next I use np.split, which is an interesting function, so let me show you what's going on here. np.split allows us to create train, validation, and test subsets for this particular traffic sign using indices or sections: I'm saying that I want 80% of the images to be put into the training set, then 0.9 minus 0.8, which is 0.1, to be put into the validation set, and the remainder, again 10%, goes into the test set. You might be (or should be) very familiar with the train_test_split function provided by the scikit-learn library, but np.split is also very good, and it allows you to split your data into n different partitions, provided you understand what the indices or sections mean. I didn't know about this one before, and it's very handy. Finally, we iterate over each image in the train, validation, and test splits and copy it into the folder for the correct split and class name. Let me execute this. As a bonus, we get output that tells us how many examples we have for each traffic sign, and you can tell that we have quite a lot of imbalance here as well, but this is kind of okay. Of course, the stop sign is not very well represented, so we're going to check the accuracy for this one later on.
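A sketch of that copying code, assuming the hypothetical class_indices values below match the order of the GTSRB class folders (double-check them against your train_folders list):

import os
import shutil

class_names = ['priority_road', 'give_way', 'stop', 'no_entry']
class_indices = [12, 13, 14, 17]  # assumed GTSRB indices for these four signs

DATA_DIR = 'data'
for split in ['train', 'val', 'test']:
    for name in class_names:
        os.makedirs(os.path.join(DATA_DIR, split, name), exist_ok=True)

for i, cls_index in enumerate(class_indices):
    image_paths = np.array(glob(f'{train_folders[cls_index]}/*.ppm'))
    class_name = class_names[i]
    print(f'{class_name}: {len(image_paths)} images')
    np.random.shuffle(image_paths)

    # 80% train, 10% validation, 10% test
    ds_split = np.split(
        image_paths,
        indices_or_sections=[int(0.8 * len(image_paths)), int(0.9 * len(image_paths))]
    )

    for split, images in zip(['train', 'val', 'test'], ds_split):
        for img_path in images:
            shutil.copy(img_path, f'{DATA_DIR}/{split}/{class_name}/')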
The next thing I'm going to show you is how you can create transforms that augment your training, validation, and test image datasets. I'm going to use the transforms available in torchvision. I've also made a video on the albumentations library, which provides a very, very nice interface for creating really amazing transformations for augmenting images, but the torchvision transforms sub-module is very good as well when you want something a bit more basic: its transformations are well documented, pretty fast, and simply amazing. I'm going to copy and paste the transformation code, because it's a lot. We have transformations for each subset, for the train, the validation, and the test set, and each one is just a composition. On the training data we do some cropping, some rotation, and some horizontal flipping, and finally we convert everything with ToTensor, which expects a PIL image and converts the PIL image (or NumPy array) data into a tensor. Then we apply the normalization that is expected by the pre-trained models provided by torchvision; I'm going to put a link to the normalization requirements for the pre-trained models here. You can see that we're also resizing the images to 224 pixels, which is required by the pre-trained models, at least for the ResNet models in torchvision. Let me execute this.

The next thing I'm going to do is build the datasets. The data directory is going to be data, and I'll create a dataset for each subset of data we've already prepared: the training, the validation, and the test set. I'm going to use datasets.ImageFolder, which basically helps you create these; it expects a root directory and a transform. The root directory is the data dir, which we already defined, joined with the current subdirectory, and the transform is the chosen set of transforms for this particular subset of data. Now that we have the ImageFolder datasets, I iterate over the train, validation, and test sets; let me execute this.

Of course, when you're building a dataset you also need a data loader, and the data loader I'm going to use is the basic torch.utils.data.DataLoader. It accepts the dataset, then I pass in the batch size (I want four images per batch), next I want to shuffle the data, and I specify the number of workers, which is going to be four, so we have four workers loading the data for us; and again I iterate over the same subfolders as before.

Next I want to check whether we have a CUDA accelerator available, take the dataset sizes (how many examples we have in each dataset), and record the class names. Let me start with the device check: it will be cuda if torch.cuda.is_available(), else just cpu. Then I take the dataset sizes, and finally the class names, which are available from the datasets themselves, so I just take the training dataset and read its classes attribute.
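Under those assumptions (the exact crop sizes and augmentation parameters in the notebook may differ), the transforms, datasets, and loaders might look like this:

from torchvision import datasets, transforms

# normalization constants required by torchvision's ImageNet pre-trained models
mean_nums = [0.485, 0.456, 0.406]
std_nums = [0.229, 0.224, 0.225]

transform = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(size=224),
        transforms.RandomRotation(degrees=15),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize(mean_nums, std_nums),
    ]),
    'val': transforms.Compose([
        transforms.Resize(size=224),
        transforms.CenterCrop(size=224),
        transforms.ToTensor(),
        transforms.Normalize(mean_nums, std_nums),
    ]),
    'test': transforms.Compose([
        transforms.Resize(size=224),
        transforms.CenterCrop(size=224),
        transforms.ToTensor(),
        transforms.Normalize(mean_nums, std_nums),
    ]),
}

DATA_DIR = 'data'
image_datasets = {
    d: datasets.ImageFolder(os.path.join(DATA_DIR, d), transform[d])
    for d in ['train', 'val', 'test']
}
data_loaders = {
    d: torch.utils.data.DataLoader(
        image_datasets[d], batch_size=4, shuffle=True, num_workers=4)
    for d in ['train', 'val', 'test']
}

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
dataset_sizes = {d: len(image_datasets[d]) for d in ['train', 'val', 'test']}
class_names = image_datasets['train'].classes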
The next thing I'm going to do is define another function called imshow, which shows an image taken from one of the data loaders we've already defined. Here I use the train data loader to take a batch of images, convert it into a grid, and use the imshow function from above to show the output, with the class name of each image in the title. You can clearly see that the transformations are applied to some of the images: we have some rotation, some random cropping. This should increase the accuracy of our model, because we're creating even more training data for it, and it's been shown empirically, in a lot of papers that I've read at least, that image augmentation really is helpful whenever you're training a convolutional neural net.

And we are going to train a convolutional neural net, one based on the ResNet architecture. Our dataset is very well done, at least I hope so; it's a really beautiful one. Now we have to create a model and use it to recognize traffic signs from images like those. There is a way to do this from scratch: create a convolutional neural network ourselves. In our case this might work quite well, since we have an image classification problem, so we might start by creating a simple convolutional neural network, then try to optimize the architecture and the parameters, and go through all of that; but we would probably not get as good a model as with the technique I'm going to show you right now, which is to use a pre-trained model, available in the torchvision module, called ResNet. The ResNet architecture is very interesting, and I'm going to link a blog post about it in the description (I'm not sure how to pronounce the author's name, but the post is very good; good job, mate).

The basic intuition behind the ResNet architecture is this: when you're building very, very deep convolutional neural nets, your expectation should be that adding more and more layers makes the network more and more accurate, because you're putting more and more parameters into it. One of the deep neural nets you might have seen is ResNet-34, and it's the architecture we're going to use; you can see that it's a very, very deep network. But in practice, when you stack conventional convolutional layers like this, it turns out that you do not get better models; the performance actually degrades, due in part to the vanishing gradient problem: the signal from the input gets weaker and weaker as it passes through many layers. One way (let's say one hack) to overcome this issue is to take the input, pass it through the block of layers, but also provide an identity connection that feeds the input directly into the output of the block: the output becomes the function approximated by the weight layers plus the input itself.
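As a rough sketch of that idea, here is a minimal residual block computing y = F(x) + x; this is not code from the video, just an illustration, and it assumes the input and output have the same number of channels:

import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    # computes y = F(x) + x, where F is two conv layers
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # if the learned residual F(x) is useless, the block can push it
        # toward zero and the identity path passes x through unchanged
        return F.relu(out + x)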
Whenever the network decides that the function approximated by the block is pure garbage, it can drive that part toward zero and just output the input, so it can effectively eliminate or skip the sub-network altogether; at least that's my reading: the block is simply skipped when the value of the residual function is zero. You can read a lot more in the blog post, and of course in the original ResNet paper, which I'm again going to link in the tutorial that will be available for this video.

So how can we use ResNet in PyTorch? torchvision provides a number of pre-trained models, and I'm going to show you how to use them. Here I define a function called create_model, which creates the ResNet model, makes some adjustments for our case, and finally returns the model so we can train it. The only parameter I'm going to pass in is the number of classes that we have. First I create the model, which is available as torchvision.models.resnet34, the variant with 34 layers, and I pass in pretrained=True: I want a model that has already been trained on some dataset, and this one is trained on the ImageNet dataset. The next thing I do is replace the last layer, which is a fully connected layer, with a linear layer that has the number of classes as its output, and then I put a softmax function at the end of all of this, because we're doing a multi-class classification problem. The linear layer expects a number of input features, so I first take those from the model's fully connected layer: model.fc.in_features. Then I replace the fully connected layer with an nn.Sequential (this is the sequential API that PyTorch provides, which you can use to create a model without actually extending nn.Module): in here I put a linear layer, with the number of features as input and the number of classes as output, and, as I've already discussed, a softmax layer at the end of it. I want to apply the softmax function across each row of the predictions, so each row of the output sums up to one. Finally, I return the model, but first I move it onto the device that we're going to use; in our case this moves the model to the GPU.

Using this function I can create a base model (we're actually going to reuse this function, and you'll see why in a little bit); I pass in the number of class names, and something magical happens: you can see that the pre-trained model is just a PyTorch checkpoint, available from this URL, and torchvision is smart enough to download it. The model is 83 megabytes; you can opt for larger models, which will train a bit slower but might be more accurate compared to this somewhat basic one. We're going to start with a simple model, and only if we need something more complex will we go ahead and build the complex one; this is a general practice that I follow in my work as well.
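A sketch of create_model as described (the softmax-on-top design is taken from the video; note that nn.CrossEntropyLoss usually expects raw logits, so this pairing is unconventional but workable here):

from torchvision import models

def create_model(n_classes):
    # ResNet-34 pre-trained on ImageNet
    model = models.resnet34(pretrained=True)

    # replace the final fully connected layer with our own head
    n_features = model.fc.in_features
    model.fc = nn.Sequential(
        nn.Linear(n_features, n_classes),
        nn.Softmax(dim=1)  # each row of predictions sums to one
    )
    return model.to(device)

base_model = create_model(len(class_names))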
Alright, let's continue with training the model that we've created; I'm going to add a section for training our model. The code used for training is quite long, so I'm going to paste it in and run the cell. We have the train_model function, which accepts a model and the number of epochs we want to train for. We're using the cross-entropy loss function, because we're doing multi-class classification, and a stochastic gradient descent optimizer, which accepts the parameters of our model, a learning rate, and a momentum. We also have a learning rate scheduler, which we don't strictly need, but you might want to see how to structure your training code in examples like this. We keep the best model weights, recorded based on the best validation accuracy achieved so far. Then for each epoch we go through two phases, training and validation, and we put the model into train or evaluation mode accordingly: evaluation mode basically turns off things like dropout and a couple of other options, while in training mode dropout and the rest stay enabled. For each batch, we put the data onto the GPU, then we zero the optimizer's gradients: recall that PyTorch (or rather the optimizer) otherwise accumulates gradients, which is something we don't want here. We enable gradient computation only when we're in the training phase. We take the model outputs and get the predicted classes by taking the maximum value along each row; then we calculate the loss based on the real classes and the outputs of our model, and if we're in training mode we backpropagate the error and do an optimizer step. We then accumulate the loss and the number of correct examples we've seen, and we compute the loss and accuracy for the whole epoch. If we're in the validation phase and we've found the best validation accuracy so far, we copy the model state and keep those weights as the best ones we currently have. And if we're training, remember that you should call the learning rate scheduler only in the training phase and only once per epoch; here I'm calling scheduler.step(). Finally, I print out the best validation accuracy and load the best weights found during training. This is pretty standard stuff, but you might want to take a closer look at this function. Let's train the model; I'm going to say that I want to train it for three epochs, so this should take some time.
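The full function from the video is longer; a condensed sketch with the same structure might look like this (the learning rate, momentum, and scheduler settings are my assumptions):

import copy
import torch.optim as optim
from torch.optim import lr_scheduler

def train_model(model, data_loaders, dataset_sizes, device, n_epochs=3):
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)

    best_model_wts = copy.deepcopy(model.state_dict())
    best_accuracy = 0.0

    for epoch in range(n_epochs):
        print(f'Epoch {epoch + 1}/{n_epochs}')
        for phase in ['train', 'val']:
            model.train() if phase == 'train' else model.eval()
            running_loss, running_corrects = 0.0, 0

            for inputs, labels in data_loaders[phase]:
                inputs, labels = inputs.to(device), labels.to(device)
                optimizer.zero_grad()  # gradients accumulate otherwise

                # compute gradients only in the training phase
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)

            if phase == 'train':
                scheduler.step()  # once per epoch, training phase only

            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]
            print(f'{phase} loss: {epoch_loss:.4f}, accuracy: {epoch_acc:.4f}')

            # remember the weights with the best validation accuracy
            if phase == 'val' and epoch_acc > best_accuracy:
                best_accuracy = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())

    print(f'Best val accuracy: {best_accuracy:.4f}')
    model.load_state_dict(best_model_wts)
    return model

base_model = train_model(base_model, data_loaders, dataset_sizes, device, n_epochs=3)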
While this is training, I'm going to define two functions. In get_predictions, I put the model into evaluation mode and create two lists, one for the predictions and one for the real values; I use the torch.no_grad context, because I don't want gradients to be calculated here, and iterate over the inputs and labels of each batch in the data loader. You can see that after the very first epoch we have an accuracy of 99% on the validation set, which is extremely good, while the training accuracy is still quite a bit lower; of course, the validation set is much smaller, and this can be expected, because we have an imbalanced dataset here too. Let's see how the training and validation errors develop over the next epochs. Back in get_predictions, I paste in the rest of the code: I put the labels and inputs onto the GPU device, take the outputs of the model, and get the predicted classes by taking the maximum value along each row. I append the predictions and the real values to the two lists, then convert the predictions to a tensor and put everything onto the CPU, doing the same thing for the real values; finally, I return the predictions and the real values, and I run this.

The training was quite fast, and as you can see we have a training accuracy of 95% after three epochs and 99.84% accuracy on the validation set. You might want to try training a bit longer, but this should actually be pretty much fine for us. The next function I'm going to show you is one that we'll need for evaluating our model; get_predictions belongs to this same section. So next I copy and paste the function show_predictions, and let's walk through this one. It accepts the model, the class names, and the number of images that we want to show. We again put the model into evaluation mode, we don't want any gradients to be calculated here either, and we iterate over the test set, because we want to see how our model performs on the test data. We have the inputs and labels put onto the GPU, we have the outputs, basically the same thing as down here, and then for each image I plot the image with the predicted class name as its title. Let's use this function: show_predictions, passing in the base model, the class names, and the number of images, which is going to be eight; we just want to see eight images. Alright: priority road, give way, priority road, priority road, stop, stop, give way, give way. This is actually very, very good.

Let's get a more comprehensive view of the model's performance: we get the predictions on the test set with get_predictions(base_model, data_loaders['test']). Our function is running, and next I print the classification report for the real test labels versus the predicted test labels, passing in the class names as the report targets. Wow, as you can see, we have a very, very good model on our hands; it's basically perfect, so very good job. The next thing I want to show you, which here might not make much sense because we've just seen perfect precision and recall for each class, is how to draw a confusion matrix, using the seaborn heatmap function, rotating the labels and putting the true label and predicted label on the axes. I create a confusion matrix using the confusion_matrix function provided by scikit-learn, passing in the true and predicted values. Then I convert it into a data frame, putting the class names as the index, and the column names are going to be the class names again. Finally, I show the confusion matrix using the data frame that I've just created.
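A sketch of get_predictions and the evaluation calls; the heatmap styling details are my assumptions:

import pandas as pd
import seaborn as sns
from sklearn.metrics import classification_report, confusion_matrix

def get_predictions(model, data_loader):
    model.eval()
    predictions, real_values = [], []
    with torch.no_grad():
        for inputs, labels in data_loader:
            inputs, labels = inputs.to(device), labels.to(device)
            outputs = model(inputs)
            _, preds = torch.max(outputs, 1)
            predictions.extend(preds)
            real_values.extend(labels)
    return torch.stack(predictions).cpu(), torch.stack(real_values).cpu()

y_pred, y_test = get_predictions(base_model, data_loaders['test'])
print(classification_report(y_test, y_pred, target_names=class_names))

# confusion matrix as a labelled heat map
cm = confusion_matrix(y_test, y_pred)
df_cm = pd.DataFrame(cm, index=class_names, columns=class_names)
ax = sns.heatmap(df_cm, annot=True, fmt='d')
ax.set(xlabel='Predicted label', ylabel='True label')
plt.show()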
And you can clearly see that, yeah, we actually have one misclassification, so the perfect report above was probably down to rounding. The misclassification was a case where the true label was priority road but our model said give way, which is very interesting; it mixed up this one with that one.

Okay, let's continue with a prediction on an image that our model hasn't seen, an image that isn't from this dataset at all. Let me just download it; it's hosted on my Google Drive, so you should be able to see this image, and I'm going to use the show_image function that we defined at the top on stop-sign.jpg. So we have an image that is very, very different from the images in the dataset; how well will our model cope with it? To understand the prediction for this one, I want to create a function called predict_proba that predicts the probability for each available class, given an image. It takes the model and the image path: first I load the image, then convert it to RGB format, and I apply the transformations from the test set, because this is treated as a test image, reshaping it so the model receives a batch with a single image. I take the predictions from our model, passing in the image moved onto the GPU, then take the prediction, convert it into a NumPy array, and flatten it. So let's predict the probabilities for the image we have, stop-sign.jpg, and look at the output: these are the probabilities for each of our four classes. Recall the class names: give way, no entry, priority road, and stop. Given this information, let's create a data frame of the predictions with the class names and the values, then use the seaborn barplot to show the predictions from our model: the x axis is going to be the values, the y axis the class names, the data is the prediction data frame, and I want to orient it horizontally. And this is the result, which is amazing, actually: our model is very, very certain that this is an image of a stop sign, and recall that this is a random image from the internet that is nowhere near similar to the other images in our dataset. Really amazing.
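A sketch of predict_proba and the bar plot under those assumptions (loading via PIL and the variable names are mine; it reuses the transform dict from above):

from PIL import Image

def predict_proba(model, image_path):
    img = Image.open(image_path).convert('RGB')
    # apply the test-time transforms and add a batch dimension
    img = transform['test'](img).unsqueeze(0)

    model.eval()
    with torch.no_grad():
        pred = model(img.to(device))
    return pred.cpu().numpy().flatten()

pred_probs = predict_proba(base_model, 'stop-sign.jpg')
pred_df = pd.DataFrame({'class_name': class_names, 'values': pred_probs})
sns.barplot(x='values', y='class_name', data=pred_df, orient='h')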
Now let's continue with a sign that is from our dataset but isn't among the classes we trained on: the no-truck-entry sign; this one should be very interesting. The index of this class is 16, and I take a single .ppm image from it, say the second element of the glob. Next I do the same thing I did above, but passing in the no-truck image, and again I create a data frame with the predictions, class names, and values. Let me show you the image we're looking at: so this is the sign, and you can pause here and think about what our model should say when it sees an image like this. One good response, I guess, would be to say that it's really not certain about any of the classes, so hopefully it would put a roughly equal amount of probability or certainty on each class; I would expect something close to a uniform distribution. But let's see what our model really thinks: I copy and paste the same plot, and the result is... whoa. Our model thinks, with almost 60% certainty, that this is a stop sign. Well, when you're building a self-driving car, and the car is actually a truck: if your model understands this sign as a stop sign, the truck stops and then continues on its way, and of course you're breaking the law, because your truck is trespassing into an area that is not approved for trucks.

So how do we solve this problem? One way is to add a new class, which is kind of a meta class, that we're going to call unknown, and this class is going to be a mixed bag of all the other traffic signs that we have in our dataset. I'm going to show you how you can build this, and hopefully it solves the problem of encountering signs you don't really know about, because in the real world you might not have enough examples for every possible class, and you might just be interested in knowing that what you're seeing is something you don't know; then, if you're building a self-driving car for example, you might hand the decision back to the driver. So let's continue with adding this unknown class; it's like an alien vibe in here. We take all the classes that we haven't looked at yet: let me reformat this for you. We iterate over each of the training folders, skipping the class indices we already use, and also skipping the no-truck index, because we want to really simulate the case where our model has never seen any examples of no-truck signs. From that we create the unknown indices, and then I create the folders we need for this class; let me show you the code for this. For the training, validation, and test sets we add the unknown class folders, and for each of the picked class indices I take 50 examples at most (sometimes there might be fewer than 50), and then I do the same split that we did for the other classes. So we're just adding a new class with a lot of data in it, and this should actually run just fine.

I'm going to continue by defining the datasets, data loaders, dataset sizes, and class names in the same way as before, so I just copy and paste the code for that: the datasets, then the data loaders, the dataset sizes, and the class names. The class names should now be give way, no entry, priority road, stop, and unknown, the new class we have here. Let me also check the dataset sizes: we have 764 examples in the training data with the unknown class, so this class is reasonably well represented using at most 50 examples per sign.
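A sketch of building the unknown class, with the same split logic as before (the variable names and the 50-image cap follow my reading of the video):

N_UNKNOWN_PER_CLASS = 50
NO_TRUCK_INDEX = 16

unknown_indices = [
    i for i in range(len(train_folders))
    if i not in class_indices and i != NO_TRUCK_INDEX
]

for split in ['train', 'val', 'test']:
    os.makedirs(os.path.join(DATA_DIR, split, 'unknown'), exist_ok=True)

for idx in unknown_indices:
    image_paths = np.array(glob(f'{train_folders[idx]}/*.ppm'))
    np.random.shuffle(image_paths)
    image_paths = image_paths[:N_UNKNOWN_PER_CLASS]  # at most 50 per sign

    ds_split = np.split(
        image_paths,
        indices_or_sections=[int(0.8 * len(image_paths)), int(0.9 * len(image_paths))]
    )
    for split, images in zip(['train', 'val', 'test'], ds_split):
        for img_path in images:
            shutil.copy(img_path, f'{DATA_DIR}/{split}/unknown/')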
Next I'm going to create a new model, and I'm going to call it the enhanced model, so this is no longer the unlucky model, if you know what I'm talking about. We pass in the number of class names, and then I train this model using pretty much the same procedure as before, again for three epochs. This might take some time, so I'm just going to copy the training cells from above, paste them down here, and replace base_model with enhanced_model; hopefully this gives a much better result. While this is training, I paste in the evaluation steps I've already shown you, this time for the enhanced model.

So this one reaches a validation accuracy of 100% at epoch number two already, and you can clearly see that we start with a somewhat lower training accuracy and then the training accuracy shoots up; after the final epoch we have 94% training accuracy and again 100% accuracy on the validation set. Now let me run the prediction: you can clearly see that we have five classes here, and for the same no-truck image we now get pretty much 100% probability (or certainty) that this image belongs to the class unknown; recall that our model hasn't seen the no-truck-entry signs whatsoever, so this is pretty amazing. Next I show you some predictions from this model: unknown, unknown, give way, stop, unknown; this looks very good. In fact, I get the predictions from this model and print the classification report, and you can see that our model is again doing very, very well; the confusion matrix this time looks pretty perfect to me.

Let me just sanity-check the numbers: we have 38 unknown classes, plus the 4 classes we picked, plus the 1 no-truck class, which adds up to 43, so the no-truck index shouldn't be among the unknown indices. Let me make sure: index 16 is indeed missing from the unknown indices, so I just wanted to double-check that we're not including any data for the no-truck sign in the training.

Okay, so this is pretty much a great way to build a traffic sign recognition model, and I'm going to put a tutorial and a link to the notebook into the description. Thanks for watching, guys; please like this video and subscribe, and I'm going to continue with these PyTorch tutorials. I would also like to thank you very much for the huge amount of subscribers that the video on the coronavirus gathered. I know that, well, the WHO has now officially declared it a pandemic, so things are really tight, I guess, but thank you for watching. I'll see you in the next one. Bye!
Info
Channel: Venelin Valkov
Views: 6,366
Keywords: Machine Learning, Artificial Intelligence, Data Science, Python, PyTorch, Image Classification, ResNet, Traffic Sign, Self-driving car, Tutorial, Jupyter Notebook
Id: yYlVOrbV_KY
Length: 65min 36sec (3936 seconds)
Published: Wed Mar 11 2020