PyTorch Tutorial 02 - Tensor Basics

Captions
hi everybody and welcome to a new PyTorch tutorial. In this video we are going to learn how to work with tensors: how we can create tensors and some basic operations that we need. We will also learn how to convert from NumPy arrays to PyTorch tensors and vice versa. So let's start. In PyTorch everything is based on tensor operations. From NumPy you probably know arrays and vectors, and in PyTorch everything is a tensor. A tensor can have different dimensions, so it can be 1D, 2D, 3D, or have even more dimensions. Let's create an empty tensor. First of all we import torch, of course, and then we say x = torch.empty() and give it a size. For example, if we just say 1, then this is like a scalar value. Let's print our tensor: this prints an empty tensor, so the value is not initialized yet. Now we can change the size. If we say 3, then this is like a 1D vector with three elements, and if we run this we see three items in our tensor. We can also make it 2D, for example with size 2 by 3, which is like a 2D matrix, and of course we can add even more dimensions: with three sizes it would be 3D, with four it would be 4D, but then I don't print it anymore because it's hard to see the four dimensions. So this is how we can create an empty tensor. We can also create a tensor with random values by saying torch.rand() and giving it a size, say 2 by 2, and printing our tensor again. The same as in NumPy, we can say torch.zeros(), which puts zeros in all the items, or torch.ones(), which puts ones in all the items. We can also give it a specific data type. First let's have a look at the data type by saying x.dtype: if we run this, we see that by default it's float32. But we can also pass the dtype parameter and say, for example, torch.int, so that now all the elements are
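The creation functions mentioned so far can be sketched as follows (a minimal example, with sizes chosen for illustration):

```python
import torch

# uninitialized tensors of various dimensions
x = torch.empty(1)        # scalar-like, 1 uninitialized value
x = torch.empty(3)        # 1D vector with 3 elements
x = torch.empty(2, 3)     # 2D matrix
x = torch.empty(2, 2, 3)  # 3D tensor

# tensors filled with random values, zeros, and ones
x = torch.rand(2, 2)
x = torch.zeros(2, 2)
x = torch.ones(2, 2)

print(x.dtype)  # torch.float32 by default

# specify the data type explicitly
x = torch.ones(2, 2, dtype=torch.int)
print(x.dtype)  # torch.int32
```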
integers. Or we can say torch.double, so now the elements are doubles, or we can also use, for example, torch.float16. If we want to have a look at the size, we can do this by saying x.size(); this is a function, so we have to use parentheses, and it prints the size. We can also construct a tensor from data, for example from a Python list: we can say x = torch.tensor([2.5, 0.1]) and print our tensor. This is also how we can create a tensor. Now let's talk about some basic operations that we can do. Let's create two tensors with random values of size 2 by 2, x = torch.rand(2, 2) and y = torch.rand(2, 2), and print x and y. Now we can do a simple addition, for example z = x + y, and if we print z we see element-wise addition: it adds up each of the entries. We could also use z = torch.add(x, y), which does the same thing. We can also do an in-place addition: if we say y.add_(x) and then print y, this modifies our y by adding all of the elements of x to it. By the way, in PyTorch every function with a trailing underscore does an in-place operation, so it modifies the variable it is applied on. Next to addition, of course, we can also use subtraction: z = x - y, which is the same as z = torch.sub(x, y), and if we print z we see the element-wise subtraction. Then we can also do element-wise multiplication with torch.mul(), and again we can do everything in place by saying y.mul_(x), which modifies our y. And we can do element-wise division with torch.div(). So these are some basic operations that we can do with tensors. Then we can also do slicing
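The basic operations above can be sketched like this (the random values differ on every run, so only the shapes and the in-place behavior are fixed):

```python
import torch

x = torch.rand(2, 2)
y = torch.rand(2, 2)

z = x + y            # element-wise addition
z = torch.add(x, y)  # same thing

y.add_(x)            # in-place: the trailing underscore modifies y

z = x - y            # or torch.sub(x, y)
z = x * y            # or torch.mul(x, y)
z = x / y            # or torch.div(x, y)

# tensor from a Python list, and inspecting the size
t = torch.tensor([2.5, 0.1])
print(t.size())  # torch.Size([2])
```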
operations, like we are used to from NumPy arrays. Let's say we have a tensor of size 5 by 3 and print it first. Now, for example, we can get all rows but only one column: we use slicing with a colon for all the rows and then column 0, so x[:, 0]. If we print the whole tensor and then only this, we see we have only the first column but all the rows. Or we can say x[1, :], using row number 1 but all columns, which prints the second row. We can also get just one element, for example the element at position 1, 1, with x[1, 1]. By the way, right now this prints a tensor, and if we have a tensor with only one element we can call the .item() method, which returns the actual value. But be careful: you can only use this if you have exactly one element in your tensor. Now let's talk about reshaping tensors. Let's say we have a tensor of size 4 by 4 and print it. If we want to reshape it, we can do this by calling the view() method: we say y = x.view() and give it a size. Let's say we want only one dimension now: if we print y, it is now a 1D vector, and of course the number of elements must still be the same; here we have 4 by 4, so in total it is also 16 values. If we don't want to specify one of the dimensions, we can simply say -1 and specify the other dimension, and PyTorch will automatically determine the right size for it. So x.view(-1, 8) must be a 2 by 8 tensor; we can print the size again to have a look, and we see it is size 2 by 8, so it correctly determined the size when we put -1 there. So this is how we can resize tensors. Now let's talk about converting from NumPy to a torch tensor and vice versa, which is very easy.
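The slicing and reshaping steps above, sketched in one place:

```python
import torch

x = torch.rand(5, 3)

print(x[:, 0])   # all rows, column 0
print(x[1, :])   # row 1, all columns
print(x[1, 1])   # single element, still a tensor

# .item() returns the plain Python number, only for one-element tensors
value = x[1, 1].item()

# reshaping with view(): the element count must stay the same
x = torch.rand(4, 4)
y = x.view(16)      # 1D vector with 16 values
y = x.view(-1, 8)   # -1 lets PyTorch infer the first dimension
print(y.size())     # torch.Size([2, 8])
```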
First of all, let's import numpy as np again; it's already installed here. Let's create a tensor, a = torch.ones(5), and print it. Now, if we want a NumPy array, we can simply say b = a.numpy() and print b, and now we have a NumPy array: if we print the type of b, it will print that we have a numpy.ndarray. So this is how we can go from a tensor to a NumPy array. But now we have to be careful, because if the tensor is on the CPU, and not the GPU, then both objects share the same memory location. This means that if we change one, we also change the other. For example, if we modify a in place by saying a.add_(1) (remember, all the underscore functions modify the variable in place), we add 1 to each element; if we first have a look at our tensor a and then at our NumPy array b, we see that it also added 1 to each of the elements of b, because they both point to the same memory location. So be careful here. If we want to do it the other way around, starting with a NumPy array, let's say a = np.ones(5) and print a. If we now want a torch tensor from the NumPy array, we can say b = torch.from_numpy(a). Now we have a tensor, and by default it gets the data type float64; of course, you could also specify the dtype here if you want a different data type. And again we have to be careful if we modify one: if we modify the NumPy array by incrementing each element and print it, we see that it incremented each value, and if we print b we see that our tensor got modified too. So again, be careful here; this memory sharing happens only if the
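Both conversion directions and the shared-memory caveat can be sketched as:

```python
import numpy as np
import torch

# tensor -> NumPy array: on the CPU, both share the same memory
a = torch.ones(5)
b = a.numpy()
a.add_(1)            # in-place add on the tensor also changes b
print(b)             # [2. 2. 2. 2. 2.]

# NumPy array -> tensor: same sharing behavior
a = np.ones(5)
b = torch.from_numpy(a)  # dtype is float64 by default
a += 1                   # modifies the tensor b as well
print(b)                 # tensor([2., 2., 2., 2., 2.], dtype=torch.float64)
```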
tensor is on the CPU. And that brings up one thing we haven't talked about yet: you can also do the operations on the GPU, but only if it is available, so if you have also installed the CUDA toolkit. You can check that by saying torch.cuda.is_available(); in my case, on the Mac, this returns False, but if, for example, you are on Windows and you have CUDA available, then you can specify your CUDA device by saying device = torch.device("cuda"). Then, if you want to create a tensor on the GPU, you can do this by saying x = torch.ones(), giving it the size, and then saying device=device; this creates the tensor and puts it on the GPU. Or you can first create it, simply y = torch.ones(5), and then move it to your device, your GPU, by saying y = y.to(device). Now if you do an operation, for example c = x + y, it will be performed on the GPU and might be much faster. But now you have to be careful, because if you would call c.numpy(), this would return an error: NumPy can only handle CPU tensors, so you cannot convert a GPU tensor back to NumPy. We would have to move it back to the CPU first, which we can do by saying c = c.to("cpu"), passing the string "cpu"; now it is on the CPU again. So these are all the basic operations that I wanted to show you. One more thing: a lot of times when a tensor is created, for example torch.ones(5), you see the argument requires_grad=True. By default this is False, and if we print the tensor we will also see requires_grad=True in the output. You will see this a lot in code, and it tells PyTorch that it will need to calculate the gradients for this tensor later, in your optimization steps. This means
that whenever you have a variable in your model that you want to optimize, you need the gradient, so you need to specify requires_grad=True. We will talk about this more in the next tutorial. I hope you enjoyed this tutorial; if you liked it, please subscribe to the channel, and see you next time. Bye!
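The device handling and the requires_grad flag from the last part, sketched together; the code falls back to the CPU when CUDA is not available, so it runs either way:

```python
import torch

# pick the GPU if CUDA is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.ones(5, device=device)  # create directly on the device
y = torch.ones(5)
y = y.to(device)                  # or move an existing tensor

c = x + y                         # performed on the GPU when device is "cuda"

# numpy() only works on CPU tensors, so move back first
c = c.to("cpu")
print(c.numpy())                  # [2. 2. 2. 2. 2.]

# requires_grad=True tells PyTorch to track gradients for this tensor
x = torch.ones(5, requires_grad=True)
print(x)  # tensor([1., 1., 1., 1., 1.], requires_grad=True)
```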
Info
Channel: Python Engineer
Views: 78,822
Keywords: Python, Machine Learning, ML, PyTorch, Deep Learning, DL, Python DL Tutorial, PyTorch Tutorial, PyTorch Course, Neural Net, PyTorch Training, PyTorch Tensor
Id: exaWOE8jvy8
Length: 18min 28sec (1108 seconds)
Published: Mon Dec 16 2019