10x Faster Than NumPy (GPU Magnet Simulation)

Video Statistics and Information

Captions
If you've coded in Python before, you've probably come across the main Python libraries, two of the biggest examples of course being NumPy and SciPy. These libraries are used extensively in the scientific community, and for good reason: they provide many important functions that are used all the time in scientific research. But compared to what we're about to explore, they're nothing. With the right tools you can take a simulation of 400 particles and turn it into a simulation of ten thousand particles at no extra time cost. The secret, of course, is proper utilization of computer hardware, and more specifically the GPU, which most people have access to these days. Welcome to the third episode of the Python GPU series: the Ising model. If you want to support the channel and learn how to code just like me, I highly recommend checking out my Udemy course, Python STEM Essentials; with the coupon code in the description you can get the course for ten dollars for the next five days.

So the first thing to state is that the code on the left-hand side here is the GPU code and the code on the right-hand side is the CPU code. Everything on the left we're going to run with the graphics card, and we'll show it's much faster; everything on the right is run with the CPU, NumPy, and the like, and it'll be considerably slower. A good place to start is to review what exactly the Ising model tells you about magnets. Essentially it's a physical model that studies the magnetization of a material as a function of the temperature that material is in. For example, if you have a bar magnet and you heat it up really hot, it's going to lose its magnetization, so you need a physical theory that describes how the magnetization of something relates to its temperature. The idea behind the Ising model is that a magnet consists of a bunch of different atoms, each with a spin that is either up or down. When the spins are aligned with each other, you have a net magnetic field moving through the object and a magnetized material. On the other hand, if the spins are all misaligned, you don't have a consistent magnetic field from each of the atoms pointing in the same direction, so there's a net zero magnetic field. The spins can be aligned together, and as you heat up the material you can provide enough energy to the system for the spins to become misaligned, and the item loses its magnetization. The question then becomes: at what temperature does it actually lose its magnetization? For that we use statistical mechanics.

Let's do a little review of statistical mechanics. The Ising model, as far as theories go, isn't actually that difficult, but there are some required facts. Everything here is statistical because you're dealing with a large number of atoms. A system is in thermal equilibrium with a temperature bath; that means it's exposed to a temperature and has come to that temperature, so you assume the magnet is at the temperature of its surroundings. The probability of being in a state (a state being a particular orientation of all the spins) is related to the energy of that state: a state with higher energy has a lower probability of being occupied, and it turns out the probability of being in a state mu is proportional to e to the minus beta times the energy of that state, with a normalization factor 1/Z out front so that summing across all the different states gives a total probability of one.
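Written out, this is the standard Boltzmann distribution the video is describing (my notation, not an equation shown on screen):

    P_\mu = \frac{1}{Z}\, e^{-\beta E_\mu}, \qquad Z = \sum_\mu e^{-\beta E_\mu}, \qquad \beta = \frac{1}{k_B T}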
Z, the sum over all possible states, is known as the partition function. It's an extremely important concept in statistical mechanics, but it's not super important here because it turns out it will cancel out. There's an important notion you have to think about with what's happening here. If the system is at the temperature of its surroundings, you assume the system is at equilibrium, so what does it mean to be at equilibrium? That's actually a difficult question, because you can't simply say the energy or the state is constant. The way you define equilibrium in statistical mechanics is related to the following equation, which we'll get into with this diagram, but basically what you say holds at equilibrium is that the system is still able to change states. The spins aren't always going to stay in the same configuration; maybe a lot of them are pointing in the same direction, but every now and then some switch, and you can still call that equilibrium, provided you enforce the following condition: summing over all possible states nu that the system can go to, the probability of being in your state times the probability of transitioning to another state equals the probability of being in those other states times the probability of transitioning back into your state. As often as the system comes out of state mu, it's going back into that state. You can think of that in terms of the diagram: if mu is the state and the arrows are the probabilities of leaving and returning, it says you're going out of that state as much as you're going back into it. The probability p here is the probability of a transition; a spin flip, for example, is a transition. This condition over a sum is hard to enforce, but you can make the statement true if you make it true for every term in the sum, and that's called detailed balance. You can rearrange the detailed balance equation and show it holds when the ratio of the transition probabilities from one state to the other equals the ratio of the state probabilities. Because the state probabilities are the Boltzmann factors we set above, the partition function Z cancels out when you divide them, and you're left with e to the minus beta times (E_nu minus E_mu). Everything so far is simple: the ratio of going from one state to another is just an exponential of the difference of the energies of those states.

So how does the Ising model come into this? The Ising model tells us exactly what the energy of a given state is. In the Ising model you have a bunch of spins, each pointing up or down, and the energy is related to how many of the spins are pointing in directions different from their neighbors. An atom on its own has no preferred energy for being up or down; what matters is what surrounds it.
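In symbols, the equilibrium condition and its detailed-balance form, as described above (again my notation, matching the transcript's mu and nu):

    \sum_\nu P_\mu\, p(\mu \to \nu) = \sum_\nu P_\nu\, p(\nu \to \mu)

    P_\mu\, p(\mu \to \nu) = P_\nu\, p(\nu \to \mu) \;\;\Rightarrow\;\; \frac{p(\mu \to \nu)}{p(\nu \to \mu)} = \frac{P_\nu}{P_\mu} = e^{-\beta\,(E_\nu - E_\mu)}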
If an atom is spin down while everything around it is spin up, that configuration has higher energy, because it's opposing the direction of everything else. So you can't just look at individual spins on their own; you have to look at their neighbors. For each point on the grid, to quantify the energy of that point you look at its four nearest neighbors: up, left, down, right. Ignoring the outer sum over i for now, if I pick a point i on the grid, the energy there is a sum over the set of nearest neighbors N_i of minus J times sigma_i times sigma_j, where J is a proportionality factor related to the material. What this means is that if sigma_i (the center point) and sigma_j (a neighbor) are opposed to each other, one negative and one positive, you get a positive contribution to the energy, whereas if they have the same sign, both pointing up or both pointing down, you get a negative contribution. The higher contributions to the energy come from neighbors that are misaligned with the point in the center. You then sum over all the points in the lattice, and that gives you the total energy of the lattice.

Let's remember the goal here: we want to take a system of spins and ask how the spins are aligned, and what the energy is, at a given temperature. The idea behind the simulation is as follows. You start with a system of random spins but say it's at a given temperature, and then you run an iterative simulation where you adjust the spins so that they eventually reach equilibrium. You iterate over and over, somehow adjusting the spins, and eventually they reach the detailed balance condition from the equation above. The question is: if you start with random spins, how do you actually reach that condition? It turns out you can mathematically prove that if you enforce the condition at every iteration, the system eventually moves into detailed balance. By the way, the temperature is specified by beta, so for different values of beta this is going to look different and you're going to get different results in your lattice. Basically, we're going to perturb the system from a state mu to a state nu by switching some of the spins, and at each iteration we either move into that new state or stay in the old state, based on the transition probabilities above.
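Putting the energy the transcript describes into an equation (the standard nearest-neighbour Ising form, with the per-site energies summed over the lattice exactly as the video does it):

    E_i = -J \sum_{j \in N_i} \sigma_i \sigma_j, \qquad E = \sum_i E_i, \qquad \sigma \in \{+1, -1\}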
The Metropolis algorithm works as follows. Call your initial state mu, the state you're currently in at the beginning of the iteration. Then, typically (and we'll adjust this a little in the video), you pick a random particle or point on the lattice, flip it, and decide whether to keep that flip or go back to the original. The way you decide is this: if E_nu is greater than E_mu, so the energy in the new state is higher, you're moving into a state of higher energy, which is not preferred, but it doesn't necessarily mean we reject it. You set the probabilities like this: you say p(nu to mu) is one, so one of the two is fixed and the other is the variable you solve for. If p(nu to mu) equals one, then from detailed balance the probability of going from mu to nu is e to the minus beta times (E_nu minus E_mu); the one just cancels and you can solve for it directly in terms of the energies. And that enforces the condition going backwards as well: if E_mu is greater than E_nu, the rule above implies that p(mu to nu) is equal to one, because we stated that the probability of going from higher energy to lower energy is one, and in that case we are going from higher to lower energy. So we have our probabilities of changing states: we start with initial spins, we adjust things, and we accept the changes with these probabilities, forming a basic Monte Carlo simulation. You move to a new state, keep running the algorithm over and over, and the theory (or rather a complicated mathematical proof) tells you that eventually you reach detailed balance, the equilibrium condition. Eventually your spins settle into some state where they're aligned, misaligned, or somewhere in between, and it turns out there's a critical temperature in the Ising model that says at what point the spins switch from being aligned to misaligned.

Okay, so let's start comparing these two codes. I'm going to set both grid sizes equal to a thousand for now, and we'll time how long it takes to run this algorithm for a certain number of iterations, starting with a thousand-by-thousand grid. I import all the necessary libraries in each case, and here's n equals a thousand on both sides. The first thing we need to do is generate some initial random grids of spins, because we want to start from some initial state and then let the algorithm advance into the equilibrium state at that given temperature. The way I do that is to actually create two random grids: one where 75 percent of the spins point up to begin with, and one where 75 percent point down. The reason, by the way, is that there's no preferred direction, up or down, so if you start with a 50-50 split it's random whether the spins end up all moving toward spin up (+1) or spin down (-1). By starting at 75 percent positive, if the spins are going to align they will align consistently in that same direction, whereas with a 50-50 start that initial random perturbation could push the system either way.
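In one line, the acceptance rule being described is the usual Metropolis probability (my compact restatement of the two cases above):

    p(\mu \to \nu) = \begin{cases} e^{-\beta\,(E_\nu - E_\mu)} & \text{if } E_\nu > E_\mu \\ 1 & \text{otherwise} \end{cases} \;=\; \min\!\left(1,\; e^{-\beta\,\Delta E}\right)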
To do that with NumPy, you can just use np.random.random, which gives an array of uniform random numbers between zero and one, and you use it to generate the lattice. My lattice is created as an array of zeros; anywhere the random numbers are greater than 0.75 the spin is set to +1 (only 25 percent of the points on the 2D grid will be greater than 0.75, since it's a uniform random number), and anywhere less than 0.75 the spin is set to -1. I do basically the same code with torch: this creates an n-by-n grid of random numbers, and from it I create my lattices, one that starts mostly negative and one that starts mostly positive. Once I've done that I can plot each one, so this is basically my grid of a thousand by a thousand atoms in 2D. They're slightly different for NumPy and torch, obviously, since they're different random realizations, but this one is where most of the spins start off positive and this one is where most start off negative.

So I have my initial grid. We're going to run the Metropolis algorithm, and all these spins are going to arrange themselves in a way that's related to the temperature of the system. If the temperature is high, they're going to end up misaligned, because the high temperature permits that higher-energy state of the system, whereas if the temperature is lower there's no extra energy available, so the spins move into a low-energy state and become aligned with each other. Given this grid of spins, I need some functions to compute a few important quantities. The first is the energy of a given element in the lattice: for each spin, what's the energy of that point? From that I can get a 2D grid of the energy corresponding to each point in the lattice, matching the 2D grid of spins. This array is E_i divided by J, by the way; I normalize by the factor J (which depends on the material) so everything is dimensionless. It's equal to the nearest-neighbor sum from before: you pick a point i on the grid and sum minus sigma_i times sigma_j over the four nearest neighbors, where each sigma is plus or minus one depending on the spin. To get the total energy, once I have that 2D grid of individual energies, I just sum over all the points. It also turns out you can compute the change in energy: for my 2D array of points, I can ask what the change in energy of each individual point would be if I reversed its spin from up to down or down to up, which is just multiplying it by negative one. So for each point I get the energy difference that flipping that spin would cause, and that's another corresponding 2D grid. The proof is very easy: if you flip sigma_i, the final energy minus the initial energy is minus two times the original E_i, and that's what the change in energy for every point on the grid would be if I switched its spin.
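As a rough sketch of that initialization (variable names like lattice_n and lattice_p are my own placeholders, not necessarily the names used in the notebook):

    import numpy as np
    import torch

    N = 1000

    # NumPy: a lattice that starts roughly 75% spin-down
    init_random = np.random.random((N, N))
    lattice_n = np.zeros((N, N))
    lattice_n[init_random >= 0.75] = 1    # ~25% of points get spin +1
    lattice_n[init_random < 0.75] = -1    # the rest get spin -1

    # PyTorch: same idea, one mostly-up and one mostly-down lattice
    init_random_t = torch.rand((N, N))
    lattice_p = torch.ones((N, N))
    lattice_p[init_random_t < 0.25] = -1     # ~75% of spins start up
    lattice_n_t = -torch.ones((N, N))
    lattice_n_t[init_random_t < 0.25] = 1    # ~75% of spins start down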
These are basically just functions to evaluate quantities of interest. The first function gets the energy of each point in the grid, and you'll notice that I pass multiple lattices on the GPU side whereas I pass a single lattice on the CPU side. One of the advantages of PyTorch and the GPU is that if I want to evaluate this for many different grids, each at its own temperature, I can batch all those entries together: rather than a single 2D array, I have a stack of 2D initial spin configurations (so a three-dimensional object), and if I want to run each of those at a different temperature I can run them all at the same time. We'll get more into that later. The NumPy version just takes in a single lattice, so it's easier to go through that one first. The possibly confusing part of this function is the kernel. Looking at the NumPy version, it's a 2D convolutional kernel; you can think of the Falses as zeros and the Trues as ones, and it gives you access to the nearest neighbors of a point. If I convolve an array with a kernel of zeros and ones like this, it sums the neighboring points marked True. One thing you can do with the energy formula is factor sigma_i outside the sum, so you have minus sigma_i times the sum over the nearest-neighbor sigma_j's, and convolving my grid of spins with this kernel gives exactly that sum over nearest neighbors. So when I do the lattice times the convolve result, I'm getting minus sigma_i times the sum over its nearest neighbors, which is the per-point energy.

I do the same thing in PyTorch: I generate my kernel (it's all NumPy at first) and convert it to a tensor, but you have to be very careful about the dimensionality here, which is why a bunch of weird-looking things are going on. I can generate the kernel the same way, and at that point it's just a NumPy array with the same shape as on the CPU side. But after I run the conversion and look at the kernel, I get a very differently shaped object. It looks confusing, but the shape is what's really important: it's one by one by three by three. The kernel itself is three by three, and there are two additional dimensions out front, which has to do with how we have to package things for the PyTorch function. On the CPU side we're just using a regular convolve function, but we have to be careful with PyTorch's convolutions, which are really built for neural networks. How does the conv2d function in PyTorch (the one we're importing) work? In the simple convolve case you give a 2D lattice and a 2D kernel and it convolves them; with PyTorch the arguments are slightly different and you have to pay attention to what the shapes need to be.
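A minimal sketch of both energy functions, assuming the nearest-neighbour kernel described above. I'm using scipy.signal.convolve2d with periodic boundaries and torch.nn.functional.conv2d with zero padding here; the notebook's exact convolve call and boundary handling may differ:

    import numpy as np
    import torch
    import torch.nn.functional as F
    from scipy.signal import convolve2d

    # nearest-neighbour kernel: ones at up/down/left/right, zero in the centre
    kern = np.array([[0, 1, 0],
                     [1, 0, 1],
                     [0, 1, 0]], dtype=float)

    def get_energy_np(lattice):
        # E_i / J = -sigma_i * (sum of the four neighbouring spins)
        neighbour_sum = convolve2d(lattice, kern, mode='same', boundary='wrap')
        return -lattice * neighbour_sum

    kern_t = torch.tensor(kern, dtype=torch.float32).unsqueeze(0).unsqueeze(0)  # (1, 1, 3, 3)

    def get_energy_torch(lattices):
        # lattices: (batch, 1, H, W); returns the per-site energy / J with the same shape
        neighbour_sum = F.conv2d(lattices, kern_t, padding=1)
        return -lattices * neighbour_sum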
In PyTorch, the first argument, in other words the array you want to take the convolution of, needs to have a shape of batch size by input channels by height by width. Batch size can be multiple grids stacked on top of each other. Input channels is really crucial for machine learning; in our case each point in space is associated with a single number, the spin. If you associated each point with spin and density and other quantities you would have multiple channels, but because each point in this simulation only has a single number, that's equivalent to the number of channels being one. It still has to have those dimensions, though: batch size by input channels by height by width. So if I had multiple layers of spins stacked on top of each other, I couldn't just pass a two-by-a-thousand-by-a-thousand array, where two is the batch; I also have to stretch it so it has the input-channel dimension, which is just a dimension with one element in it, but the PyTorch function needs it. The second argument, the weight, has to have the shape output channels by input channels by height by width. In this case we take in the tensor of spins and output the energy of that array, which is also a single number per point, so the output channels is one, the input channels is one, and the height and width are those of the kernel. That's the shape the actual kernel you convolve with needs to be, and that's why I generate those two additional dimensions: unsqueezing once puts a dimension at the beginning, and unsqueezing again puts another dimension at the beginning, and those two ones correspond to the output channels and input channels, followed by the height and width of the kernel. That also means that when I pass my lattices array, it needs to have the same kind of dimensions specified here: batch size, input channels, height, and width. Then I can use PyTorch's 2D convolution function: I feed in the lattices (a tensor of that shape) and the kernel I've constructed above, and it returns an array of shape batch size by output channels by height by width.

Before I describe the other functions, it's useful to look at what this actually does. For example, here I take my two lattices, the 75-percent-spin-down one and the 75-percent-spin-up one. Let's build up the shape: a single lattice is a thousand by a thousand. If I stack the two lattices together, I have both in the same array, so I have batch size, height, and width, but I still need the channel dimension I specified above. To get it, I unsqueeze the array at dimension one, which inserts another dimension of length one, and now I have my batched array of lattices whose energies I can compute at the same time. I can build the lattices like this, look at the shape, and it looks good.
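Something like this, using the placeholder names from the earlier sketch (lattice_p, lattice_n_t, get_energy_torch):

    lattices = torch.stack([lattice_p, lattice_n_t])   # (2, 1000, 1000): batch, height, width
    lattices = lattices.unsqueeze(1)                   # (2, 1, 1000, 1000): insert the channel dim
    energies = get_energy_torch(lattices)              # same shape back: per-site energy / J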
I can then compute the energy array, which gives the energy of every point in the lattice; it takes a moment to run, and it comes back with the same shape, which is what we want. Once I have the energy of every individual point, I can get the total energy of each lattice. If I have two lattices, the 75-percent-spin-down one and the 75-percent-spin-up one, I want an individual energy measurement for each of them. On the CPU side, with only a single lattice, I can just sum the entire lattice array. Here I have to be a little more careful (and I can actually rearrange it to make a little more sense): I sum over the channel dimension (there's only one channel, so it doesn't matter much) and over the height and width dimensions, so this returns an array of length two, corresponding to the energy of each element in the batch. The get-delta-energy array has the same shape as the per-point energy array, and it's the same sort of equation where I just multiply by negative two. I can run all these functions, and when I look at the shape of the total energies, I get two different energies, one for each lattice. On the CPU side it's very simple NumPy: the energy array is a 2D array, the total energy is a single number for that lattice, and the delta-energy array is another 2D array giving what the change in energy of each point would be if I flipped its spin.

Now we get into the Metropolis algorithm itself. We have all the functions built up to compute the energy of each point, the total energy of the lattice, and the change in energy if we flip individual points, so we can start implementing the algorithm. Let's start with the CPU case, because it's a little easier, and then move to the GPU case; I'll switch back to the description of the Metropolis algorithm on the left as we go. We start with our initial array of spins, which in our case is 75 percent up and 25 percent down, and we run the Metropolis algorithm; those spins will rearrange themselves so that as the iterative algorithm runs we eventually get the true spin configuration at the given temperature. The variable I called "times" in my older video really means the number of iterations of the algorithm. I don't want to give the impression that this iterative algorithm is like placing a magnet somewhere and watching the spins flip over time; that's not what the model tells you. It has nothing to do with time dependence; it's only about reaching the final state of the spins. As I iterate through the algorithm, I'm going to use a little trick. Remember I said that normally you pick one random particle on the lattice and flip it; if I only did one individual spin at a time, the algorithm would take a very long time, because I'd have to flip many, many spins one by one. That's actually how I did it in video 14, and this is the updated, much faster way. The first thing I do is generate two random numbers, i and j, each of which is zero or one with a 50 percent chance.
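Those two batched helpers might look like this (a sketch building on the get_energy_torch function from the earlier block; the notebook's real functions may be organized differently):

    def get_total_energy_torch(lattices):
        # one total energy per lattice in the batch: sum over channel, height, width
        return get_energy_torch(lattices).sum(dim=(1, 2, 3))

    def get_dE_torch(lattices):
        # per-site energy change if that one spin were flipped: dE = -2 * E_i
        return -2 * get_energy_torch(lattices)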
So what are i and j for? When I compute my delta energies, I take my spin array, the 2D grid, and I index it starting from row i taking every second element of the rows, and starting from column j taking every second element of the columns. Here's my beautiful Paint illustration: imagine the individual spins of the lattice on, say, a four-by-four grid. If i equals zero and j equals zero, I'm taking this point, this point, this point, and so on. If i equals one (i is the row offset), I'm instead taking the shifted rows, with the same spacing, and if j equals one I'm taking the shifted columns. So on every iteration I'm picking one of four interleaved sets of points, the blues, the purples, the yellows, or the greens in the drawing, and the spins I'm considering are all spaced two apart from each other. The reason I choose them spaced apart like this is that if I picked every point in the array and decided to flip this one and its neighbor at the same time, based on the detailed-balance condition, I'd run into issues, because neighboring spins interact with each other through the energy. By picking points split apart like this, you never touch the neighbors of a point when you decide whether to flip that point in a given iteration. I encourage you to think a little more about the motivation behind that, but basically it means I take my grid and only select certain points for a given iteration.

Let's actually run through these lines. I have my i, my j, and my get-delta-energy function, and I feed in my initially positive lattice. Looking at the shape of dE, it's half as big as the original lattice in each direction, so it should be 500 by 500; it's only those yellow points, for example. Given this dE array, where flipping the spin at one point would increase the energy and flipping it at another would decrease it, I need to decide whether or not to keep each flip, for example whether to keep the flip of that yellow point. For that I look back at the acceptance rule above: if the new energy is higher, the probability of the flip is e to the minus beta times delta E. I've evaluated delta E for each point of this array, and I need to accept each flip with that probability. The way to do that is to generate a bunch of uniform random numbers, all between zero and one.
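In code, that selection step looks roughly like this (spin_arr and get_dE_np are my placeholder names for the CPU-side spin lattice and its delta-energy function; the offsets are as described above):

    i = np.random.randint(0, 2)   # random row offset: 0 or 1
    j = np.random.randint(0, 2)   # random column offset: 0 or 1

    dE = get_dE_np(spin_arr)[i::2, j::2]   # (500, 500) sub-grid of a 1000x1000 lattice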
If that uniform value is less than the exponential, then "change" is True, which means you keep the spin flip and move to that new state. That handles the case where dE is greater than or equal to zero; for cases where dE is less than or equal to zero you necessarily go to the new state, which is why I add the extra condition here. It's all Boolean logic, so "change" will be an array saying either yes, keep the change, or no, don't keep it. By the way, if I evaluate this for a beta J of 0.5 and look at the form of the change array, it says: this one had positive dE and is False, we're not going to keep it; this one positive, False; this one positive, False. If I increase the temperature, remember there's more energy in the system, which allows the spins to be in whatever configuration they want, so I decrease beta to 0.1 (beta and temperature are inversely related), and then some of the points with a positive energy change end up True: this one had a positive change in energy when I flipped the spin and it's True (remember, we're dealing with the yellow points on that grid). Some of the positive-dE points you're still not going to flip, but whenever the change in energy is negative, like the negative eight here, you necessarily flip that spin, based on the acceptance rule. So this array tells me whether or not to flip each spin, True or False; I then index my spin array at all those yellow points on the grid and, wherever change is True, multiply by negative one to flip the spin. Having modified my system, I get the energy for that grid using basically the get-energy function, and the average spin by summing all the spins together and dividing by the number of atoms.
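Pulling that whole CPU-side update step together in one sketch (same placeholder names as before; beta_J is the single inverse temperature used on the CPU side):

    beta_J = 0.5
    rand = np.random.random(dE.shape)
    # keep a flip if it lowers the energy, or with probability e^{-beta*dE} if it raises it
    change = (rand < np.exp(-beta_J * dE)) | (dE <= 0)

    sub = spin_arr[i::2, j::2]   # view onto the selected sub-lattice
    sub[change] *= -1            # flip the accepted spins in place

    avg_spin = spin_arr.sum() / spin_arr.size
    total_energy = get_energy_np(spin_arr).sum()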
In the CPU code I only had a single value of beta J; you're doing this for a single temperature. The reason I started there is that now we can move to the GPU code, which is a bit more optimized. In this case we have a batch of spin tensors: for example, I can give six random spin initializations and a different temperature corresponding to each one, and the advantage of the conv2d-based energy function from earlier is that it can take in a batch of tensors and do multiple things at the same time. That's the moral of the story, the lesson of using PyTorch: you try to do many things for a batch at the same time, you move away from for loops, and you think in terms of batch operations on data. So I've taken this batch of spin tensors, and the way the algorithm works is that I have a different value of beta J for each spin tensor. That gives me a lot of options: maybe I have the 75-percent-positive and 75-percent-negative lattices and give them the same beta J to compare them, or I have a batch of many grids that all start 75 percent positive, with a different beta J for each, and then I can see how each temperature approaches equilibrium and get the average energy or average spin as a function of temperature.

The code is mostly the same as the CPU version; the main thing to note now is that I reshape beta J so it has three extra axes, and you might wonder why. Remember where beta J showed up: in the exponential, where it was a single number. You have to think about the shape of dE, which is the same as the spin tensor batch: batch size by one by height by width. So my beta Js also need a batch dimension; that's what the negative one in the reshape does, it automatically inserts the correct dimension. Beta J is a tensor with two elements in it, and this turns it into two by one by one by one, which lets me multiply this beta J tensor by a tensor of that same layout. That's why I include the reshape and why this operation works. So while the CPU version gives you the average spin as a function of iteration number for the single spin array you're considering, this one gives you the average spin as a function of iteration number (two thousand iterations, two thousand numbers) for every spin tensor in the batch. For example, if I run this code and look at spins.shape, there are two things in the batch, the 75-percent-positive and 75-percent-negative lattices, and I get the full iteration history for each of them. You'll note the CPU version took quite a bit longer to run, about 24.4 seconds as opposed to roughly three, and the other thing you should notice is that the CPU was running only a single lattice while the GPU was running both lattices at the same time. So the GPU is easily more than ten times faster, more like fifteen times faster, than the CPU for this particular algorithm, and I promise you that will only increase with the number of points on your grid; in this case we used a thousand-by-thousand grid, and as that number goes up the GPU only becomes more and more efficient. The thing to notice is that spins.shape on the CPU side corresponds to only a single lattice; with the GPU code we're trying to do many things at the same time.

Now we can plot the spins as a function of iteration number and show that they're the same for the CPU and GPU code. Here I'm looking at the spins corresponding to the 75-percent-up lattice, and I can do the same on the other side, and sure enough the two curves look exactly the same. (I don't know why the subtitle says beta J is 0.7; I'm just going to remove it. This is for beta J equal to 0.5; the tensor with two elements from before is just 0.5 and 0.5.) Of course this lattice had 75 percent of the spins pointing up, and the temperature was not hot enough for them to sit in a random configuration, so as the iterations went on the spins became aligned with each other. That means at equilibrium the spins aren't totally aligned (the average spin wasn't equal to one), but most of them were aligned with each other. So far I've done this for two different lattices, one starting 75 percent up and one 75 percent down, but with the same beta J value.
Now I want to give it a single lattice, for example a 2D tensor where 75 percent of the spins point up, and evaluate what this curve looks like for a bunch of different beta J values, which is essentially a bunch of different temperatures. The way to do that is to take my initial lattice and repeat it a number of times. If I look at the shape of the repeated positive lattice, it's been converted so that I now have a copy of that same lattice for each different beta J value; with two beta J values that's two copies, and if instead I had ten different temperatures I'd be creating a much larger tensor, but it still has the dimensionality needed to run through that conv2d function. With this much bigger batch of lattices I can compute the average spin, the average energy, and the standard deviation of the energies (how much it fluctuates at equilibrium) as a function of these different temperature values, and return them. For example, I can run this code; it's a little confusing, but remember that temperature is inversely related to beta J, so I take one divided by this linspace array. These are temperatures from one to three, twenty different points, and we'll find the dependence of the average spin on temperature for both the lattice that starts 75 percent down (the first array here) and the one that starts 75 percent up (the second array), evaluated at twenty different temperatures in each case. You'll note, by the way, that the earlier run took about 24 seconds, so you can imagine how long evaluating twenty different temperatures one at a time would take; this runs in 43.5 seconds. Having evaluated all the temperatures, we can look at the average spin as a function of temperature, and we get this beautiful curve. For example, when 75 percent of the spins started negative, at low temperatures they all align with each other, because with no thermal energy added they have to sit in the energetically favorable state. As you start increasing the temperature of the system, there's all this excess energy moving in, so the spins no longer have to be aligned; the heat energy lets things go into energetically unfavorable configurations. And you get this concept of a critical temperature, where you move from the spins being completely aligned with each other to being completely misaligned. That's like a phase change; in fact it is a phase change, very analogous to what happens when water freezes or melts or turns from liquid to gas, except here it's a magnet going from magnetized to completely unmagnetized, which is what you see here.

I'm going to move to the 3D Ising model now. Remember that as you move from 2D to 3D you're going to have a much bigger grid of values, so rather than a thousand by a thousand we'll do 150 by 150 by 150 for 3D. You could increase the size, but that depends on the memory of your GPU and so on. Basically all the code here is the exact same as above, only in three dimensions.
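A sketch of how that temperature sweep might be set up, including the beta J reshape described earlier (the repeat pattern and names are my reconstruction, not the notebook's exact code):

    temps = torch.linspace(1.0, 3.0, 20)
    beta_Js = (1.0 / temps).reshape(-1, 1, 1, 1)    # beta J is the inverse temperature; (20, 1, 1, 1)
    # one copy of the initial lattice per temperature: (20, 1, N, N)
    lattices = lattice_p.unsqueeze(0).unsqueeze(0).repeat(20, 1, 1, 1)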
My initial grid is now 3D, and like before I have to be a little careful with the kernel: it's now a 3D kernel, with neighbors up, down, left, right, front, and back. I then add the batch and channel dimensions to the kernel and use the conv3d function to compute the energy, exactly analogous to the 2D case. I can stack lattices together, and in this case lattices.shape looks quite different: there's the 3D grid itself, the channel dimension (each point in the 3D grid still has a single number associated with it), and the two different lattices in the batch. With the Metropolis algorithm defined, I have the same sort of function as before, only in three dimensions: it takes a single lattice, duplicates it, runs the Metropolis algorithm on each copy at a different temperature, and reports what the average spin converges to for each temperature. Now I'm going to look at a bunch of temperatures between 3 and 5.5 (again, beta J is the inverse of the temperature), put everything on my device, the GPU, and get the average spin, mean energy, and standard deviation of the energy of this 3D grid for each temperature. We solved this in a minute and twenty seconds, and I want to talk about how impressive that is, because I remember doing the Ising model in my undergrad, just in 2D, as a new programmer, and it took me and other people in my class leaving a laptop running for a day, twelve hours, to get all the measurements we needed. When it finished, the plot we got, and we were only doing it in 2D, didn't even look this good, because we weren't able to simulate that many points; it was a very ugly-looking plot. That was 2D, and in the code in this video, with your GPU, we're upping it another dimension and able to solve it within a minute and twenty seconds. By the way, for the 2D Ising model it turns out you can analytically solve for the critical temperature, the temperature where this behavior switches, but in 3D (for reasons I'm not smart enough to understand) you can't solve it analytically, so you have to simulate it numerically like we did here, and the accepted value for the critical coupling according to this paper is beta J equal to 0.221654626.
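The 3D version of the energy function might look like this (a sketch following the same pattern as the 2D one above, with six nearest neighbours and zero-padded boundaries assumed):

    kern3d = np.zeros((3, 3, 3))
    kern3d[0, 1, 1] = kern3d[2, 1, 1] = 1   # neighbours along the first axis
    kern3d[1, 0, 1] = kern3d[1, 2, 1] = 1   # neighbours along the second axis
    kern3d[1, 1, 0] = kern3d[1, 1, 2] = 1   # neighbours along the third axis
    kern3d_t = torch.tensor(kern3d, dtype=torch.float32).unsqueeze(0).unsqueeze(0)  # (1, 1, 3, 3, 3)

    def get_energy_3d(lattices):
        # lattices: (batch, 1, D, H, W); returns per-site energy / J with the same shape
        return -lattices * F.conv3d(lattices, kern3d_t, padding=1)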
They do a bunch of simulations with different lattices in that paper, and you get an accepted numerical answer for the temperature where the transition happens. Here I've put a vertical line corresponding to that temperature, and you can see we get this beautiful-looking plot and you can see the transition: as you move from about three to five and a half units of temperature the average spin starts to decrease, and the accepted critical temperature sits somewhere in the middle of that decrease. If you made it to the end of this video, congratulations, I'm impressed you were able to stand me rambling on about this stuff for fifty minutes. I hope the code is useful for your own projects in the future. As in previous videos (and people have done this, and I really appreciate it), if you have advice to speed up the code or make it better, please be sure to let me know; join the GitHub or just leave a comment on the video, and if you suggest something that significantly speeds it up I'll be sure to give you credit when I update the code. Anyways, thanks for watching, please like and subscribe, join the Discord server as well (there's a link in the description), and I'll see you next time.
Info
Channel: Mr. P Solver
Views: 14,527
Keywords: python, physics, speed up python, speed up python code, how to speed up python, python speed up code, how to speed up python code, gpu, python speed, python speed test, ai physics, blender 2.8 gpu python, accelerate python with gpu, gpu intensive physics simulations in blender, cloth physics - python/pygame devlog #4, cloth physics, physics ai, physics podcast, gamedev physics, quantum physics, machine learning python, ising model, ising model python, metropolis
Id: ykoEwJ7PvNw
Length: 43min 15sec (2595 seconds)
Published: Sat Apr 29 2023