Pytorch Tutorial 6- How To Run Pytorch Code In GPU Using CUDA Library

Captions
Hello all, my name is Krish Naik, welcome to my YouTube channel. Today in this video we are going to see how we can use a GPU to execute our PyTorch programs — for example, if we are creating an artificial neural network, how do we run that PyTorch code on the GPU? This is the sixth tutorial; if you remember, I have completed five tutorials, and we have also solved some Kaggle competitions, namely Advanced House Price Prediction and the Kaggle Pima Diabetes problem using an ANN. We will continue from there, and this is probably the last video on ANNs; after this, the seventh tutorial will be on CNNs, and I have already prepared the material for that, so the PyTorch videos will come more frequently now.

Before going ahead, a quick detour into TensorFlow. I will activate my TensorFlow environment, which I created earlier. If I start python and run import tensorflow, you will see it picks up the GPU, provided you have a GPU and have installed the CUDA libraries, that is, the CUDA toolkit and cuDNN. You can refer to my video in the deep learning playlist, "How to install CUDA toolkit and cuDNN for deep learning" — this is required for both TensorFlow and PyTorch and is very important; without it you will not be able to use the GPU at all. Here you can see I have already installed tensorflow-gpu on my machine, and if I show you my C drive, inside Program Files I have CUDA 10.1 installed. Apart from that, if I run nvidia-smi I can see the 10.1 CUDA version as well, and since my workstation has a Titan RTX, I can see that listed too.

Now, in TensorFlow, once you install tensorflow-gpu with pip, if you don't have a GPU it silently switches to the CPU. In PyTorch you get that choice explicitly: you can run your programs on the CPU or on the GPU, and that is exactly what we are going to see. Apart from this, if you have not seen my other deep learning videos, I have already covered TensorFlow in a lot of detail there; in this video we will focus only on PyTorch.

Here is what I am going to do. If you remember, I had created the notebook "Creating ANN with PyTorch on Pima Diabetes Dataset". I have made a copy of it and named it "Training on GPU". This was the whole program — diabetes.csv, df.head() and so on; I executed the whole thing with PyTorch and we looked at the confusion matrix and everything, and that video, "Solving Kaggle Pima Diabetes using ANN", is already up. So this Tutorial 6 notebook is a copy of that same file. Let's start. After you have installed the CUDA toolkit and cuDNN, the next step is to go to the PyTorch installation page and install PyTorch with CUDA support.
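Looping back to that TensorFlow check for a moment: in the video it is done simply by watching the log lines printed when importing TensorFlow inside a GPU environment. A more explicit minimal sketch, assuming TensorFlow 2.x where tf.config.list_physical_devices is available:

```python
import tensorflow as tf

# Lists the GPUs TensorFlow can see; an empty list means the
# CUDA toolkit / cuDNN setup is not being picked up.
print(tf.config.list_physical_devices("GPU"))
```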
On my local machine I have CUDA 10.1, so on the PyTorch site I select the stable version, Windows, Conda, Python, CUDA 10.1, and copy the generated command. Then I go to my Anaconda prompt — you can create a new environment for this; I have already created one — and write activate env_pytorch (if you have been following this PyTorch series, I created that environment earlier). Then I just paste the command, and the CUDA toolkit gets downloaded automatically along with PyTorch. For TensorFlow I had manually downloaded and installed the CUDA toolkit, but if you have not set up TensorFlow separately, you can simply run this command and the CUDA toolkit will be pulled in for PyTorch automatically. Once that is done, I start python, import torch, and check that it is working fine — and you can see that it is.

As I told you, one important thing about PyTorch is that you can run it on both CPU and GPU. Let's see how to check whether a GPU is available. First I import torch, then I use torch.cuda.is_available() — this tells you whether CUDA, i.e. the GPU, is available. When I execute it, it shows True, because I have a GPU here and all the configuration is installed properly. Next, torch.cuda.current_device() gives me the device id — the device id of the GPU is 0. If I want the device name, I write torch.cuda.get_device_name(0), and you can see I have a Titan RTX, because I am running this on my workstation. If I don't pass the id, it will by default use whichever device is current and show its information, but I will keep the id, because if you have two GPUs and want to execute on the second one, you can specify its id.

One more thing — let me restart the kernel first, because I had already created some variables. I run those cells again and they work fine. I can also check how much memory is currently allocated on the GPU, on my Titan RTX: since I have not executed any program yet, torch.cuda.memory_allocated() returns 0. There is one more function, torch.cuda.memory_cached(), and that is also 0, again because nothing has been executed yet.
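Putting those device queries together, a minimal sketch (note that memory_cached is the older name used here; more recent PyTorch releases call it memory_reserved):

```python
import torch

print(torch.cuda.is_available())      # True if a usable CUDA GPU is present
print(torch.cuda.current_device())    # device id, e.g. 0
print(torch.cuda.get_device_name(0))  # e.g. "TITAN RTX"
print(torch.cuda.memory_allocated())  # bytes currently allocated on the GPU (0 so far)
print(torch.cuda.memory_cached())     # cached bytes (memory_reserved in newer versions)
```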
Now this part is very important. I create a float tensor with three numbers, 1.0, 2.0 and 3.0 — I hope you remember FloatTensor, I covered it in the first or second tutorial of this series. So I write torch.FloatTensor([1.0, 2.0, 3.0]) and assign it to var1; var1 is a floating-point tensor. If I check var1.device, it shows the device type as cpu, because the way we created it, the float tensor is by default assigned to the CPU — any execution involving it will be handled by the CPU. Suppose I want it on the GPU instead: I just append .cuda() and execute. Now when I print var1 it shows a CUDA tensor, and var1.device shows device cuda. .cuda() is the call that says: I am creating this float tensor and assigning it to the GPU for any further execution. So var1.device now reports cuda, whereas before it reported cpu. That is how it works for individual tensors.

Now you may be asking: fine Krish, but for an ANN, do we need to provide all our X_train and y_train as CUDA tensors? Yes, we need to assign those as well. Let's see. This is my previous tutorial, which will execute on the CPU. I import diabetes.csv — I am not going to use seaborn, so I skip that cell — and create my independent and dependent features. Here you can see that for X_train, X_test, y_train and y_test I convert the inputs into FloatTensors and, as I explained before, the targets into LongTensors. If I go below and run X_train.device, you can see it is cpu.

The same code is present in the GPU notebook, and I execute it there too, so I have my X_train and y_train. The change I am making is this: in the CPU program I did not call .cuda(), but in this Pima Diabetes GPU notebook I append .cuda() to each of those tensors — .cuda(), .cuda(), .cuda(), .cuda(). Once I execute that and check X_train.device, the device type is now cuda, whereas initially it was cpu because I had not used .cuda(). That is the first step. When I talk about CNN models, probably in the next session, I will be using images — just as Keras has ImageDataGenerator, PyTorch has dataset loaders and image loaders — and there is an option for running those on GPUs too, which I will show you then, because I really want to run an image classification problem and show you the speed on this workstation; it is amazingly fast. So now you can see I have done this.
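A minimal sketch of that .cuda() conversion, assuming X_train, X_test, y_train, y_test are the NumPy arrays produced by the train/test split on diabetes.csv:

```python
import torch

# Features as float tensors, targets as long tensors (as in the earlier
# Pima tutorial); .cuda() assigns each tensor to the GPU.
X_train = torch.FloatTensor(X_train).cuda()
X_test  = torch.FloatTensor(X_test).cuda()
y_train = torch.LongTensor(y_train).cuda()
y_test  = torch.LongTensor(y_test).cuda()

print(X_train.device)   # cuda:0
```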
The next step: fine, I have all my independent and dependent features on the GPU, but what about my feed-forward neural network? Can I make that run on the GPU as well? We need to do that too. For that, look at this ANN model I have taken as an example. It is a simple feed-forward neural network with two hidden layers and one output layer — here is the output layer, here are the two hidden layers — and I apply the ReLU activation function to get the output. If I execute it and look at model.parameters(), it gives me all the parameters. I am also executing the other notebook in parallel — I am just comparing, so don't think I am skipping anything; this is one program and that is the other — and that is its model.parameters().

Now, this is very important. Suppose I write a for loop, for i in model.parameters(): print(i) — that prints the parameter tensors. Instead of printing them, let me use another technique: is_cuda. I just want to check whether each parameter is on CUDA; is_cuda is a boolean. Remember the layers we created — this is one layer, this is another — and I am checking whether those layers have been assigned to the GPU or not. In the CPU notebook I never assigned anything to the GPU, and here too I created and initialised my ANN model but did not assign it anywhere. So to check whether it sits on the CPU or the GPU, I traverse every parameter in the model and look at its is_cuda flag. Here it reports False, because I am not using the GPU yet.

Now, if I run the same check in the GPU notebook, it also says False. But I told you this is the code I want to run on the GPU — so how do I do it? I only have to change one line: on the model object I created, I call .cuda(), just as we did for the tensors. Once I execute that and rerun the check above, it shows True, which means the whole model has been assigned to the GPU for execution. If you now check the allocated memory you will see that some memory has been allocated, and likewise some cache memory. So you can see how simple it is — that is what I love about PyTorch: the syntax is almost like an OOP concept, and if you are good at OOP you can use it in an amazing way.
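A minimal sketch of that step. The class name and layer sizes (8 input features for Pima, 20 units per hidden layer, 2 outputs) follow the earlier Pima Diabetes tutorial and are assumptions here, not something this video fixes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ANN_Model(nn.Module):
    """Simple feed-forward network: two hidden layers plus one output layer."""
    def __init__(self, input_features=8, hidden1=20, hidden2=20, out_features=2):
        super().__init__()
        self.f_connected1 = nn.Linear(input_features, hidden1)
        self.f_connected2 = nn.Linear(hidden1, hidden2)
        self.out = nn.Linear(hidden2, out_features)

    def forward(self, x):
        x = F.relu(self.f_connected1(x))
        x = F.relu(self.f_connected2(x))
        return self.out(x)

model = ANN_Model().cuda()     # the one-line change: assign the model to the GPU

# Every parameter tensor should now report is_cuda == True
for param in model.parameters():
    print(param.is_cuda)
```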
Coming to running the model: the model is assigned to CUDA, that is, to the GPU, for further execution. Then I set up the loss function and the optimizer — this is pretty simple — with a learning rate of 0.01, and now I run the training loop for 10,000 epochs. This whole loop I have already explained earlier in the PyTorch series — I actually covered it in the fourth tutorial, so do go and check that. Once I execute it, see how quickly the execution happens on the GPU. If you want to confirm the GPU is being utilised, you can see here that it is, while the CPU is not — the work is done entirely by the GPU. The total time comes to around 11.94 seconds, and this is just tabular data with a small number of records; Pima Diabetes is a very small dataset, but if you run something huge it will take much longer. Here I am running 10,000 epochs, so we can compare.

Now the same thing in the other notebook, where is_cuda is False, so it runs on the CPU. For tabular data with so few records the gap will be modest, but this program will definitely take more time than the GPU one. Let's see — it's running, already past 11 seconds, and you can see the CPU usage has gone up, which shows it is executing on the CPU. Now the usage has dropped, which means the run is over: about 13 seconds here versus 11 seconds on the GPU. I know that is not a big difference, but that is because the number of records is very small.

So do try this out yourself — everything else stays the same. In the next video I will show you how to do this with a CNN. I hope you liked this video. I will put the materials on GitHub if you want to try them. If you have a GPU, make sure you follow the earlier tutorial to install the CUDA toolkit and cuDNN — you can also use the conda command shown above to pull in the CUDA toolkit, but make sure CUDA and cuDNN are installed properly. I will see you all in the next video. Have a great day ahead, and thank you for all the support. Please do subscribe and share with all your friends. Bye!
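Continuing from the sketches above (model, X_train and y_train already on the GPU), a minimal sketch of the timed training loop being compared. CrossEntropyLoss and Adam are assumptions carried over from the earlier Pima tutorial; the video only mentions "the loss function, the optimizer" and the 0.01 learning rate:

```python
import time
import torch
import torch.nn as nn

loss_function = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

epochs = 10000
start_time = time.time()

for epoch in range(epochs):
    y_pred = model(X_train)                 # forward pass (model and data both on the GPU)
    loss = loss_function(y_pred, y_train)
    optimizer.zero_grad()                   # clear gradients from the previous step
    loss.backward()                         # backpropagation
    optimizer.step()                        # update the weights

print("Total time:", time.time() - start_time)  # ~11.9 s on the GPU here vs ~13 s on the CPU
```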
Info
Channel: Krish Naik
Views: 60,230
Keywords: pytorch tutorial, machine learning, deep learning, data science
Id: K8qs9GlE4UQ
Length: 18min 7sec (1087 seconds)
Published: Mon Aug 24 2020