Random Matrix Theory and Its Applications by Satya Majumdar (Lecture 1)

So, thank you Sanjeev and Abhishek, and thanks for the invitation. This is my third time at this school and I really enjoy it very much. I'm going to talk about random matrix theory and a certain number of its applications. Exactly like what Joachim did in the first lecture, I just want to give an overview of the subject, without going into too many technical details: why this subject is interesting, why one should study it, what the different applications are, and a little bit of the historical background of random matrix theory. Then, from the second half of today, or maybe from tomorrow, I will work only on the blackboard.

Before I start these slides, I want to tell you what random matrix theory is. So, what is a matrix? (If you don't see well from the back, please let me know and I'll try to write bigger.) Imagine you have an N x N matrix; call it X. All of you know about matrices: if I give you a matrix, you can diagonalize it, and you get N eigenvalues lambda_1, ..., lambda_N and the corresponding eigenvectors. For any given matrix you can always do this. Of course, in general these eigenvalues are complex numbers, if your matrix has no special symmetry; in certain cases, when the matrix does have some symmetry, the eigenvalues are real. Can anybody tell me when you get real eigenvalues? Hermitian, yes; everybody knows quantum mechanics, great. You said Hermitian, but there is an even simpler one. Complex Hermitian is one example; real symmetric, very good. Real symmetric means the elements are real and X transpose equals X; Hermitian means X dagger, the transpose followed by complex conjugation, equals X. If you have such symmetries, the eigenvalues are all real. Is there any third example? Quaternion, yes: quaternionic self-dual matrices ("complex quaternion" is redundant, since quaternions are already a generalization of complex numbers; I'm not going to talk about quaternions in any detail). These are the three classes for which the eigenvalues are known to be real, and they are called Dyson's threefold way; Dyson studied such matrices quite a lot.

So far I'm not talking about random matrices at all, just about basic properties of matrices. These are sufficient conditions: if your matrix is real symmetric, complex Hermitian, or quaternionic self-dual, real eigenvalues are guaranteed. They are not necessary conditions, of course; there could be matrices with other symmetries that also have real eigenvalues, but these three are the most studied cases. And, as you already pointed out, in quantum mechanics the Hamiltonian is always a Hermitian matrix, so the energy eigenvalues are always real; physicists are therefore typically interested in complex Hermitian matrices. Quite often we deal with matrices with real eigenvalues, although in principle you can have complex eigenvalues as well. OK, so this is a matrix.
Now imagine that the elements of this matrix are random variables. For example, take a real symmetric matrix, say 2 x 2 for simplicity, with two eigenvalues lambda_1 and lambda_2. If you change the matrix entries and diagonalize again, you get two different eigenvalues: every time you diagonalize, you get a different set of numbers, so each realization of such a matrix gives rise to its own set of eigenvalues. Now suppose the entries themselves are random variables, drawn from some distribution. How you draw them will be defined more precisely later, but let's take a simple example. Because the matrix is real symmetric there are only three independent entries, three degrees of freedom, and suppose I choose them independently, so that the joint distribution of the entries factorizes; this is the i.i.d. situation Sanjeev talked about this morning. Take them Gaussian, say

P(x_11, x_12, x_22) proportional to exp( -x_11^2/2 - x_12^2 - x_22^2/2 ),

with some normalization constant (I'll explain later why I choose the factor to be one in front of the off-diagonal term and one half in the other two places). It is very easy to generate such a matrix on your computer: you choose each entry from its Gaussian distribution, and for each realization of the matrix you diagonalize and get two eigenvalues. These eigenvalues change from one realization of the matrix to another, so they are also random variables. That is what a random matrix means: the entries are random variables drawn, in general, from some joint distribution that need not factorize like this. It factorizes if they are independent, but in general you can choose any joint distribution of the entries. Once you have chosen your matrix, you diagonalize it and, say for a real symmetric matrix, you get N eigenvalues, which are then random variables too. The whole game of random matrix theory is to understand the spectral properties, the statistics of these eigenvalues: how are the eigenvalues distributed? Spectral properties of the eigenvalues, given the joint distribution of the entries: that is the central problem of random matrix theory (RMT). Then you can ask many finer questions: what is the distribution of the largest eigenvalue, what is the distribution of the gap between eigenvalues, how many eigenvalues are there in a given interval; many different observables. So the generic problem is: you give me the joint distribution of the entries, and I want to find properties of the eigenvalues, which are themselves random variables.
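(A minimal numerical sketch, not from the lecture itself: it generates the 2 x 2 real symmetric Gaussian matrix just described, diagonal entries with variance 1 and off-diagonal with variance 1/2, and diagonalizes one realization. All names here are my own illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_symmetric(n, rng):
    """n x n real symmetric matrix with independent Gaussian entries:
    variance 1 on the diagonal, 1/2 off the diagonal, matching the
    weight exp(-Tr X^2 / 2) described in the lecture."""
    a = rng.normal(size=(n, n))
    x = (a + a.T) / 2.0                          # off-diagonal variance (1+1)/4 = 1/2
    x[np.diag_indices(n)] = rng.normal(size=n)   # diagonal variance 1
    return x

# each realization gives a fresh pair of real eigenvalues
print(np.linalg.eigvalsh(gaussian_symmetric(2, rng)))
```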
The amazing thing is that this problem is very easy to state but not always easy to solve, as we will see, and it has an amazing number of applications. It goes back almost a hundred years, but even today we keep finding new applications, and I'll try to explain why this basic problem has so many. So let's start with a little history: when did random matrix theory first appear in the scientific literature? You remember that this morning Joachim talked about Patrick Matthew, who was in Edinburgh in 1831; I want to show you another Edinburgh connection, about a hundred years later. The first appearance of random matrices was due to this gentleman, John Wishart, a statistician at Edinburgh University. His paper appeared in 1928, about a hundred years after Patrick Matthew and about a hundred years before now, in a statistics journal with biological roots called Biometrika. He was interested in a very simple question; this is well before the physics applications. The question can be formulated as follows. Imagine a class with, say, two subjects, physics and maths, and three students. Then you have a 3 x 2 table of exam scores: student one got x_11 in physics and x_12 in maths, student two got x_21 and x_22, student three got x_31 and x_32. In general this is some M x N rectangular matrix X. The question Wishart asked: looking at the scores in a given exam, is there any correlation between the physics and the maths scores? If somebody has done well in physics, does it necessarily mean that she or he will do well in maths, and vice versa? How do you test this? You take the transpose of the matrix, which is 2 x 3 (in general N x M), and form the product X^T X, a simple exercise to compute. This is a 2 x 2 matrix (in general N x N), and it is real symmetric, because the scores are real. What are its elements? The first diagonal element is x_11^2 + x_21^2 + x_31^2. What is this object? In your sample, a very small sample of three students, assuming the mean has been subtracted off (so the mean score is zero), this quantity, divided by the overall factor 3, is just the variance of the physics scores. Sanjeev already defined the variance this morning. Likewise the other diagonal element, x_12^2 + x_22^2 + x_32^2, is the variance of the maths scores. And the off-diagonal element is x_11 x_12 + x_21 x_22 + x_31 x_32, which is the covariance, the correlation between the physics and maths scores (again up to the factor 3, which sits outside the matrix and is not important); the other off-diagonal element is its symmetric counterpart. So X^T X is called the covariance matrix, because it encodes the fluctuations of the scores in your sample: the diagonal elements are the variances of the different subjects, and the off-diagonal elements give the correlations between, say, physics and maths scores. If you diagonalize this covariance matrix, in this case you get two eigenvalues.
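(To make Wishart's construction concrete, a small sketch with made-up scores; the numbers are hypothetical, purely for illustration.)

```python
import numpy as np

# hypothetical scores: rows = 3 students, columns = (physics, maths)
X = np.array([[ 1.2,  1.0],
              [-0.5, -0.3],
              [-0.7, -0.7]])

Xc = X - X.mean(axis=0)        # subtract each subject's mean score
C = Xc.T @ Xc / Xc.shape[0]    # 2 x 2 covariance matrix: variances on the
                               # diagonal, the physics-maths covariance off it
lam = np.linalg.eigvalsh(C)    # real eigenvalues (C is real symmetric)
print(C, lam)                  # lam[1] >> lam[0] signals strong correlation
```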
So you get two eigenvalues, and you want to know what they tell you about the correlation between physics and maths scores. Let me give you an example. Put the physics score on one axis and the maths score on the other, and take your particular sample of three students. The first student's scores (x_11, x_12) form a point in this plane; this is called a scatter plot, where the x-coordinate is the physics score and the y-coordinate the maths score, and similarly for students two and three. The scatter plot gives you three points. Now, if you stare at it and I ask whether the maths and physics scores are strongly correlated: suppose the points fall almost on a line. You will say yes, because if the physics score is big, the maths score is also big; in this situation you see immediately from the picture that they are strongly correlated. If, on the other hand, the points are totally scattered, you can't see anything. You can make this quantitative: take the sample and compute the eigenvalues lambda_1 and lambda_2. What does diagonalizing mean? It means you rotate your axes, so that one direction is the lambda_1 eigenvector and the other the lambda_2 eigenvector. Once you diagonalize, you get a diagonal matrix, diag(lambda_1, lambda_2), and since this is the covariance matrix, lambda_1 and lambda_2 measure the fluctuations along direction 1 and direction 2 respectively. If the points lie close to a line, there is a lot of fluctuation along direction 1 and hardly any along direction 2, so in such a sample you must find lambda_1 much bigger than lambda_2. In the fully scattered case there is a lot of scatter along both principal directions, so lambda_1 is comparable to lambda_2 and you cannot say much. So the eigenvalues tell you something about the fluctuations. This, for example, is at the heart of what is called principal component analysis (PCA), one of the main tools for analyzing big data, which these days is a very hot subject. PCA says the following: here you have two eigenvalues in this simple example, but in general, for complex data, you have N eigenvalues; if one eigenvalue is much bigger than all the others, the data are essentially lined up along that direction, and you can ignore the fluctuations in the perpendicular directions. You want to reduce your big data, so you throw away all the eigenvalues and eigenvectors in the perpendicular directions and keep only the principal component; that is the meaning of principal component analysis. So what people do in big data analysis: from the data they construct the covariance matrix, look at the eigenvalues, keep the principal component, and throw away all the others.
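(A minimal sketch of that reduction step, assuming data with rows as samples; the variable names are mine.)

```python
import numpy as np

def pca_reduce(X, k):
    """Project the data onto its top-k principal components:
    build the covariance matrix, diagonalize, keep the eigenvectors
    of the k largest eigenvalues, drop the rest."""
    Xc = X - X.mean(axis=0)
    lam, V = np.linalg.eigh(Xc.T @ Xc / Xc.shape[0])  # ascending order
    return Xc @ V[:, -k:]      # coordinates along the principal directions

rng = np.random.default_rng(1)
phys = rng.normal(size=100)
maths = phys + 0.1 * rng.normal(size=100)    # strongly correlated toy scores
reduced = pca_reduce(np.column_stack([phys, maths]), 1)
print(reduced.shape)           # (100, 1): one component kept
```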
Of course, this is a good approximation in the lined-up situation and not in the fully scattered one, but this is essentially the main way to reduce big data. So this very important application goes back to Wishart in 1928. Anyway, coming back to our problem: I take my sample, I get the two eigenvalues, and by looking at these pictures I can decide whether the scores are correlated or not. But now think about it. You are in one given sample, and you can look at its eigenvalues and eigenvectors; imagine, though, that there were really no correlation, the scores being completely independent i.i.d. variables. In that case the elements of the covariance matrix are still random variables (correlated among themselves, even though the underlying scores are independent), and even for completely uncorrelated scores the eigenvalues and eigenvectors have some structure. Therefore, if in my given experiment, my given sample, I want to decide whether there is correlation, I should compare with the case where there is none. This is what statisticians call a null model, something to contrast against. So first you work out the eigenvalue and eigenvector structure for completely uncorrelated scores; then, when you do a new experiment in your class, you compare the new eigenvalues with this completely random model, and use it as a null model to decide whether the scores in your particular sample are correlated or not. So what Wishart did was to introduce the random covariance matrix as a null model for principal component analysis. That is the original motivation for random matrices. He didn't solve anything; he just said that, as a null model, it would be interesting to study the case where the matrix entries are completely uncorrelated: what can you then say about the eigenvalues and eigenvectors of such matrices? That was Wishart in 1928.

If we move the clock forward: in physics, random matrix theory was essentially introduced by Wigner; among his many contributions (he won a Nobel prize), this was one of the principal ones. Wigner was interested in nuclear physics, and he wrote his famous papers on random matrix theory around the 1950s; I don't remember exactly which year. I just wanted to show you this picture: if you go to a nuclear spectroscopy experimentalist, they give you the eigenvalues, the spectrum, and what is plotted here is just the energy levels of a nucleus.
Say the levels are E_1, E_2, E_3, maybe E_4, E_5, and so on, along the energy axis: this level at, say, 1000 in some energy units, that one at 2000, and so on. Nuclear experimentalists can give you these energy levels, and they can measure the distribution of the spacings between successive energy levels, scaled by the mean spacing. If you plot that distribution, you see that it goes to zero as the spacing goes to zero, and it has a typical curve; I will show you pictures in a minute. Wigner was interested in finding a theory for this curve. Historically, he was giving a talk at a conference on neutron spectroscopy (I don't know where the conference was), and the chairman of his session was a man named Havens. Wigner mentioned in his talk that "this was known a long time ago by Wishart", and then talked about the eigenvalues of a random matrix. At the end of the discussion session Havens asked him, "Where does one find out about the Wishart distribution?" Wigner replied: "The Wishart distribution is given in S. S. Wilks' book on statistics, and I found it just by accident." Accidents happen in science, and such accidents are very important, because this already demonstrates the power of interdisciplinarity: people work in very different subjects, yet science keeps showing us something common between different developments. It was thanks to this accident that Wigner could essentially solve the nuclear physics problem, and in fact the subject is still alive because of Wigner; I'll come to that in a minute.

So what exactly was the problem Wigner was interested in? Imagine a heavy nucleus like uranium-238 or thorium-232; here is a schematic picture of the nucleus, with neutrons, protons and so on inside. As I said, experimentalists can measure the spectrum: whatever the complicated Hamiltonian is, if you could solve the Schrödinger equation you would find its eigenvalues, and the measured levels are those eigenvalues; one spectrum for uranium, one for thorium. They look different from each other, so who cares? But Wigner, talking to his experimentalist friends, realized something: even though the details of the spectra look very different, if you look at the statistical fluctuations of the levels there could be some similarity between them. What he suggested was to look at the spacings between nearest-neighbour energy levels, scaled by the mean distance: look at E_5 - E_4, then E_4 - E_3, all these spacings, Delta_1 = E_2 - E_1, Delta_2 = E_3 - E_2, et cetera.
You collect all these Deltas and draw their histogram, and you should scale it: with Delta_avg the average spacing, you expect a scaling form

P(Delta) = (1/Delta_avg) p(Delta/Delta_avg),

where p is some scaling function. We divide by the average spacing so that the mean spacing is of order one: if I want to compare uranium with thorium, the mean spacing of uranium is quite different from that of thorium, so to compare them I must divide each by its mean spacing. If you do that, you get a function p(x), and if you plot it for uranium and for thorium, both fall onto the same curve, p(x) versus x. I didn't show you the experimental data here, but you can find in any standard book that uranium, thorium, they all fall onto the same curve. So there is a universal scaling function that describes the spacing distribution for different nuclei, even though the details are completely different. Then Wigner told his friends: look, if you try to compute this function directly, it's a nightmare. You don't even know the Hamiltonian of uranium: neutrons, protons, all kinds of interactions, God knows what is in there. And then you would have to solve the Schrödinger equation for this many-body Hamiltonian, find the many-body wave function, and get the energy levels. A formidable task: I don't know the Hamiltonian, and I don't know how to solve the Schrödinger equation for it in general, so getting the exact spectrum is hopeless. But, he said, you don't need the exact spectrum: if you are interested only in this universal scaling function, there must be some simpler theory. His thought was that the Hamiltonian is so complicated, and any Hamiltonian written in any basis is an N x N matrix, that you may as well take this complex N x N matrix you know nothing about, except that it has to be Hermitian, and replace it by a completely random Hermitian matrix. Look at the eigenvalues of that matrix; maybe for that matrix you can compute the spacing distribution, and then check the experimental results against that prediction. And amazingly, it worked, beautifully. These days one can compute this function exactly for N x N complex Hermitian matrices, but at the time nobody knew how; Wigner did it just for a 2 x 2 matrix, which you can do by hand, and it turns out the 2 x 2 and N x N answers differ very little. It really worked very well. So this was the first success of random matrix theory.
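(A numerical sketch of Wigner's procedure, under the crude assumption that dividing the bulk spacings by their mean is a good enough unfolding; the comparison curve is Wigner's 2 x 2 surmise, p(s) = (pi s / 2) exp(-pi s^2 / 4). This illustration is mine, not the lecturer's.)

```python
import numpy as np

rng = np.random.default_rng(2)

def scaled_spacings(n=200, trials=50):
    """Nearest-neighbour spacings from the bulk of the spectrum of
    random real symmetric Gaussian matrices, scaled by the mean spacing."""
    out = []
    for _ in range(trials):
        a = rng.normal(size=(n, n))
        lam = np.linalg.eigvalsh((a + a.T) / 2.0)
        d = np.diff(lam[n // 4: 3 * n // 4])   # stay away from the edges
        out.extend(d / d.mean())
    return np.asarray(out)

s = scaled_spacings()
hist, edges = np.histogram(s, bins=30, range=(0, 3), density=True)
mid = (edges[:-1] + edges[1:]) / 2
surmise = (np.pi * mid / 2) * np.exp(-np.pi * mid**2 / 4)
print(np.round(np.abs(hist - surmise).max(), 2))   # rough agreement
```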
The main thing you learn from this is that if your system has a matrix description, as a nucleus does, and it is too complex, then maybe something simpler happens precisely because it is so complex: you close your eyes, replace the system by a completely random matrix, and see what that gives. It is not guaranteed to work all the time (we don't know whether it always works), but chances are it might, and in the absence of any other theory this is the best theory you can have. This is Wigner's dogma: no matter how complicated your system is, if it is large and sufficiently complex, somehow a randomization occurs which leads to a much simpler description in terms of a completely random matrix. That is the central dogma of random matrix theory. After that many people worked on it, Dyson, Gaudin, Mehta and many others, and it is still very much going on. And because of this central idea the subject goes far beyond nuclear physics; random matrix theory now has many, many applications. Let me show you a list. As I said, it started with nuclear physics; then quantum chaos, disordered systems and localization, QCD, string theory, gauge theory and matrix models, cold atoms (which I'll talk about in a minute); these are in physics. In mathematics there is the Riemann zeta function in number theory, determinantal point processes (about which I'll talk quite a lot in these lectures), and integrable systems. In statistics I already mentioned principal component analysis and data compression; data reduction is very important these days, because when you have complicated data and want to send it from your computer, you have to compress it, and principal component analysis is again important there: when you zip a file, this is exactly what you do, you compress your data. Then information theory, signal processing, wireless communications: in your cell phone network there is again a kind of matrix description, all these people trying to call and the calls being rerouted, essentially an input vector, an output vector, and a matrix connecting them; if you don't know anything about that matrix, you can again close your eyes and replace it by a random matrix. And then biology, economics and finance; I think Joachim is going to lecture, though not about random matrices, but in biology too there are quite a lot of applications. You can find many of these modern applications in this book, which appeared in 2011. I'm not going into the details of all these things. Later I'll mostly focus on cold atoms, and I'll develop the theory of random matrices on the cold-atom problem, because that's something we have been working on quite a lot these days; but let me first tell you a little about a few of them, like quantum chaos.

[In answer to a question:] Yes, you don't even know the Hamiltonian, and you don't know whether it works; that's exactly what I'm trying to tell you. You have to try. You don't have any other theory; if you come up with a better theory, I'm willing to buy it,
but you don't have any theory, so what Wigner said is: in that case, just replace it by a random matrix and see what happens. And amazingly it works, because, as I said, the basic rationale is that when something is very, very complex, some kind of randomization takes over in the end. You will see random matrix theory appear in completely disparate subjects. So let me show you the example of quantum chaos. Again, today I'm not going into any details; this is just an overview talk, so if you have a detailed question you'll have to wait for the next lectures. I have told you what the problem is; now I'm telling you about its applications and a bit of historical perspective.

Quantum chaos, then, without going into details. How many people here know about classical chaos? You've read something about it at least, right? OK, some of you. Let me summarize what chaos means classically with a simple example. The simplest way to characterize classical chaos: imagine a particle moving according to Newton's laws. Take a rectangle, a single particle, and start the particle off with some initial velocity. It moves with constant velocity, gets reflected at one wall, goes this way, gets reflected, goes that way, and so on; that is a classical trajectory of the particle. Now take a second particle whose initial condition differs only very slightly from the first: start it a small distance epsilon away, with the same or a very slightly different velocity, and again follow the trajectory. The equation of motion is exactly the same for the two. You will find that the second trajectory remains very close to the first at all times: as long as you are in the rectangle, the trajectories stay close together. In other words, |x_1(t) - x_2(t)|, the distance between the two trajectories, stays of order one; it does not increase with time. You make a slight change in the initial condition and ask whether the trajectories differ much at a later time t; the answer is no, in the rectangle. Now change the problem a little: round off the corners. Such an object is called a billiard, like a billiard table. Do the same experiment, one trajectory and a second one starting very close to it, and now you will see, after many, many reflections, after a long time t, that their separation no longer stays bounded but grows exponentially in time, with some exponent lambda_L, usually called the Lyapunov exponent. That is the defining signature of chaos: a positive Lyapunov exponent, meaning that a slight change in the initial condition makes the trajectories diverge from one another. That is classical chaos, essentially.
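(To illustrate the definition of the Lyapunov exponent numerically, using the chaotic logistic map as a simple stand-in for the billiard, since the principle is the same: nearby initial conditions separate exponentially. This example is mine, not the lecturer's.)

```python
import numpy as np

def lyapunov_estimate(x0=0.2, eps=1e-9, steps=25):
    """Follow two trajectories of the chaotic logistic map
    x -> 4 x (1 - x) starting a distance eps apart; the separation
    grows like eps * exp(lambda * t) until it saturates, so 'steps'
    is kept small enough that it stays below O(1)."""
    x, y = x0, x0 + eps
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        y = 4.0 * y * (1.0 - y)
    return np.log(abs(y - x) / eps) / steps

print(lyapunov_estimate())   # close to ln 2 ~ 0.693, the known value
```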
The mechanism is basically the rounding of the corners: when the two trajectories come close to a rounded corner, they start diverging, going away from each other, and this separation expands in time. Now you can ask: what is quantum chaos? Classically you have a prescription to detect chaos: start two trajectories close together and see whether they diverge. If they stay together, it is a regular, non-chaotic system; if they diverge, it is chaotic. Quantum mechanically, there is no concept of a trajectory. Quantum mechanics only tells me: I have a single free particle inside this geometry, rectangle or billiard, and I solve the Schrödinger equation inside the domain with appropriate boundary conditions, which you can take to be Dirichlet, psi = 0 on the boundary. Classically, a slight change of geometry, from the rectangle to the billiard, takes the system from regular to chaotic. If I do the corresponding quantum problem, solving the Schrödinger equation in the rectangle and then in the billiard, what difference do I see? How do I know whether the system is quantum-mechanically chaotic? What people did is look at the spectrum again: solve the Schrödinger equation, take the energy eigenvalues, and look at the separation between nearest-neighbour eigenvalues, precisely the Delta I defined before. What happens is this. For the rectangle, where the classical dynamics is regular, quantum mechanically you find that the spacings follow a Poisson law: take any regular geometry, where the classical system is non-chaotic, and the signature in the spectrum of the corresponding quantum problem is an exponentially distributed spacing; plot P(Delta) versus Delta and it is just an exponential. (The rectangle is called an integrable system, because there you can solve the Schrödinger equation exactly.) But solve the same problem in the billiard, which you cannot do exactly, only numerically, and you find that the spacing distribution rises from zero, exactly like what we saw for the spacing distribution of nuclear spectra; in fact it turns out to be exactly the eigenvalue spacing distribution of a random Gaussian matrix. Again, you don't see where the matrix is or where the randomness is. The rationale is that for the billiard the Schrödinger problem is very complicated, and somehow the boundary makes the effective Hamiltonian a kind of random Hamiltonian; if you look at the spacing distribution, it looks like that of a random matrix. This was first seen numerically, has since been verified in many, many different systems, and is taken as the signature of quantum chaos. This is called the Bohigas-Giannoni-Schmit
conjecture, or BGS conjecture. What it says is this: if your system is classically regular, then quantum mechanically you find a Poissonian spectrum, with exponentially distributed spacings (this first statement is sometimes called the Berry-Tabor conjecture), whereas the BGS conjecture proper says that if your system is classically chaotic, as in the billiard, then quantum mechanically you always get the spectrum of a random Gaussian matrix. So the whole subject of quantum chaos essentially developed out of random matrix theory; I just wanted to show you this example.

[Questions from the audience.] Yes, you solve the Schrödinger equation, go to very high energy levels, and take a raw histogram: choose a big window of size L, look at the spacings inside it, then slide the window and accumulate the distribution. You have these many points, the eigenvalues on a line; you take a big window and count how many spacings have length one, how many length two, how many length three, et cetera. Yes, the eigenvalues: I'm just sampling the distances between the eigenvalues, and I build the distribution by taking a big window and then sliding it across different sub-samples, basically. Yes, chaos means essentially that the motion is not regular: because of the billiard structure the particle still satisfies Newton's equations, but there is so much multiple scattering that you can't really follow the deterministic trajectory. Quantum mechanically, nobody knows how to define it directly; we don't have a notion of trajectory, so how do we detect quantum chaos? What these people are telling you is to look at the spectrum: if the spectrum looks like a random matrix spectrum, the system is chaotic. Yes, there are other measures too, but I'm not going into those details; all I'm saying is that the theory of quantum chaos tells you that to detect the signature of chaos you should look at the spectrum, and if it looks like that of a random matrix, you know the system is chaotic. That's the BGS conjecture.
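(For reference, standard results not written out on the slides: with the mean spacing scaled to one, the two limiting laws being contrasted are

$$p_{\mathrm{Poisson}}(s) = e^{-s}, \qquad p_{\mathrm{Wigner}}(s) = \frac{\pi s}{2}\, e^{-\pi s^2/4},$$

the second being Wigner's 2 x 2 surmise for real symmetric Gaussian matrices: it vanishes linearly at s = 0, which is level repulsion, whereas the Poisson law is largest there.)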
So that was one example. Let me continue with a second one; and let me insist again that I'm not yet giving all the definitions in detail, this is just an overview, just to tell you why random matrix theory appears in so many different places. All of you must have heard about the Riemann zeta function, defined by the famous series.

[In answer to audience questions about the previous topic:] Well, that depends on the dimension of your Hamiltonian. If you take, say, an Anderson model, a lattice model, you have to determine what your state space is and what the Hamiltonian is in a certain basis; that gives you the matrix, and for a single particle it will be an N x N matrix, essentially. I haven't yet stated Wigner's result: basically he conjectured that the spacing distribution of a completely random Gaussian matrix describes pretty well the scaled spacing distribution of all these nuclear physics problems. Higher dimensions? Nobody knows. There is no criterion for when random matrix theory fails to work; nobody can give you one. The main point is that in the absence of any theory this is the first approximation you try, and the chances are it might actually work, because when the system is sufficiently complex some kind of randomization goes on. That is the idea to keep in mind. And these random matrix problems are not always easy, as we will see during the lectures; they are actually quite tough problems.

So, the zeta function: the series defines it for Re(z) > 1, and for Re(z) < 1 you perform an analytic continuation. Now look at the zeros of the zeta function in the complex z-plane. There are trivial zeros at the negative even integers, -2, -4, -6, et cetera; it is very easy to prove that zeta vanishes there. And the Riemann conjecture, the Riemann hypothesis, says that all the non-trivial zeros lie on the vertical line with real part one half: the n-th zero sits at z_n = 1/2 + i t_n, where t_n is its height on this critical line, and there are infinitely many of them on this line. This is one of the most coveted problems in mathematics: you get a million dollars immediately if you prove the Riemann hypothesis, the Clay Foundation prize. Many people have tried, and it is still an open problem. Many nice properties of these zeros are known, though. Andrew Odlyzko, who used to be at Bell Labs, developed special computer programs to find the zeros, and holds the world record: he has gone up to billions of zeros, and they all still lie on the critical line, no exception so far. So everybody believes the Riemann hypothesis is true, but proving it is very hard. While you cannot prove it, what people try to do is, again, understand the statistics of the zeros. For example, go very high up along the critical line, look at the zeros, and look at the spacings between successive zeros, call them Delta_1, Delta_2, et cetera; once again, take a sample in a big window and draw a histogram of these Deltas.
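(The objects just described, reconstructed in standard notation:

$$\zeta(z) = \sum_{n=1}^{\infty} \frac{1}{n^{z}}, \qquad \operatorname{Re} z > 1,$$

extended to the rest of the complex plane by analytic continuation; the trivial zeros sit at $z = -2, -4, -6, \dots$, and the Riemann hypothesis asserts that every non-trivial zero has the form $z_n = \tfrac{1}{2} + i\, t_n$ with $t_n$ real.)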
There has been a lot of work on this by Keating, Snaith, Bogomolny and many others, and what people now conjecture is the following. Going quite high up and looking at the spacings between successive zeros along the vertical axis (this does not prove the Riemann hypothesis; it just says: assuming the Riemann hypothesis is true, what can we say about the locations of these zeros, these t_n?), it turns out that, once appropriately scaled and centered, the spacing distribution is given by the random matrix answer, the Gaussian unitary ensemble (GUE). I haven't defined it yet: you take a Gaussian random matrix, independent Gaussian entries, complex Hermitian, find its eigenvalues, and look at the spacing distribution between successive eigenvalues. The spacing distribution of the zeros of the zeta function seems to be given exactly by this RMT result. Once again, you don't see any matrix here; there is no matrix appearing anywhere in the problem, and yet, indirectly, the random matrix shows up. Don't ask me why random matrices appear here: I don't know, nobody knows. It is just the observation that random matrix theory seems to appear. Basically you have a point process, these random points, and for some magical reason RMT appears again. This is all just to show that RMT appears in many, many different problems: in some of them we understand why, in others we don't. The range goes from gauge theory to polymer problems to string theory to information theory, a huge, vast collection of subjects which somehow have connections to random matrix theory. That is the great thing about RMT.

[Question: is the distribution of the zeros along the line uniform?] The point is the local properties. People have now computed not just the spacing distribution but the correlation functions, the n-point correlation functions, and they seem to be identical: this point process behaves like the eigenvalues of a random Gaussian unitary matrix, though nobody can prove it. I told you about the spacing distribution because that is how it was originally found, but by now the joint distribution of these points has been measured, essentially numerically, and it is exactly identical to that of the GUE.

So what I'll do next (this is still the overview talk) is tell you about one recent application, cold atom physics, which I find quite fascinating; it is a new addition to this list. Let me spend the next half hour telling you a little about it. So, what are cold atoms? All of you have heard about cold atom systems; this is a very hot area of physics these days, driven by experiment. There has been great experimental progress in cold atom systems: these are systems where an optical laser trap confines the particles in a finite region of space, and the experimentalists,
who are mostly atomic and optical physicists, can do a lot of manipulation. Essentially the system consists of atoms in a confining laser trap; I'll show you a picture in a minute. They can fabricate the system in different spatial dimensions (1, 2, 3); they can tune the interactions between the atoms; they can tune the temperature and also the number of particles. So there is a lot of controlled variability in these experiments. This subject has earned essentially three Nobel prizes in the last twenty years, and the reason is that they can fabricate essentially any Hamiltonian for you. As a theorist, if you work on some Hamiltonian, there is no guarantee you will find it realized in any condensed-matter system; often you don't. But in cold atom systems, if you tell the experimentalists, "Look, I have a Hamiltonian like this", the chances are they can actually build a system that realizes it. They can manipulate these atoms in very many different ways; it is essentially quantum simulation, if you like.

Now, the important point is that the interaction between the atoms is also tunable; in particular, they can probe the system in the non-interacting limit. This non-interacting limit is already quite interesting, because even though there is no direct interaction between the atoms, these are quantum systems, and quantum statistics plays an important role. For bosons you can have Bose-Einstein condensation, where a macroscopic number of bosons condense into the ground state; for fermions the Pauli exclusion principle says that two fermions cannot sit at the same place, or in the same state. These constraints give rise to non-trivial spatial correlations between the atoms even in the absence of interactions, purely quantum-statistical effects. This has been demonstrated experimentally: Bose-Einstein condensation was predicted a very long time ago but observed only in the mid-1990s, and for fermions, too, there is a lot one can do, as Joachim was saying this morning.

Here is the typical common feature of these experiments: a confining trap with some atoms in it, governed by some Hamiltonian. What does the trap do? It breaks translational invariance: the system is confined to a region and cannot go very far, because the external potential pushes it towards the center. So you have a confined region of space, and you can ask about the correlations and other properties of this many-body system. And, as Joachim mentioned this morning in the context of scanning tunneling microscopy, you can actually see the atoms: what you see here are images from the quantum gas microscope, developed by the Greiner group at Harvard quite recently. These dots, blue, white,
whatever, are just two-dimensional atoms, fermions, lithium-6 atoms, sitting in a trap, in a confined region. You can take images of the atoms like this; you can, for example, pick a particular region and count how many atoms are in it, just by counting pixels, and look at their fluctuations. So the central question is: for this quantum many-body system, what can one say about the spatial fluctuations, even in the non-interacting limit? Non-interacting fermions: most people will say this is a trivial problem. It is trivial if you want to calculate the spectrum, but if you want to calculate the spatial correlations it is highly non-trivial. In other words, the second-quantized problem is trivial, but the first-quantized problem is not.

The bulk properties of these non-interacting fermions are under control: deep inside the trap you don't see the curvature of the potential, translational invariance approximately holds, and you can solve the system with standard many-body tools like the Thomas-Fermi and local density approximations. Everything is known deep in the bulk. But near the edge (and the particles are confined to a finite region, so there is an edge) there are very few particles, fluctuations become very important, both quantum and thermal, and you cannot apply the bulk, translation-invariant theory. So how do you describe the correlations near the edges of this confined cold-atom system? This problem was pointed out twenty or thirty years ago by Walter Kohn, the famous Nobel laureate in chemistry: the uniform electron gas, the traditional starting point for density-based many-body theories of inhomogeneous systems, is inappropriate near electronic edges. Traditional many-body theory for translation-invariant systems does not apply there: for free fermions in a trap, non-trivial correlations and fluctuations appear at the edges. How do you describe them? What I want to show you, and this will come during the lectures, is that for these fermions in a confining potential, say harmonic, you can map the positions of the atoms in the ground state exactly onto the eigenvalues of a Gaussian random matrix: a GUE matrix, complex Hermitian with Gaussian entries. The positions of the atoms behave like the eigenvalues of a random matrix, and once again random matrix theory has been extremely useful in understanding the properties of such cold atom systems. This is a theme of research we have been working on for the last five years with several of my collaborators; you can find a recent review on the subject, and I'll try to develop it as we go along.
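(A sketch of the identity behind this mapping, quoted for orientation in natural units where the oscillator length is one; this is the standard result the lectures will derive. The ground-state probability of N non-interacting fermions in a one-dimensional harmonic trap, built from the Slater determinant of the lowest N oscillator states, is

$$\big|\Psi_0(x_1,\dots,x_N)\big|^2 \;=\; \frac{1}{Z_N}\, \prod_{i<j} (x_i - x_j)^2 \; e^{-\sum_{i=1}^N x_i^2},$$

which is exactly the joint eigenvalue density of the N x N GUE, so the fermion positions are statistically identical to GUE eigenvalues.)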
So this is the main plan. What I'll do in the next lecture, since I won't have enough time to start today, is go into more detail; this was just the overview. First I'll tell you what the possible ensembles of random matrices are. Then I want to show you how to compute the joint distribution of the eigenvalues starting from the joint distribution of the entries; it is not always easy, but in some cases we can do it, and for Gaussian entries you certainly can. Then, once we know the joint distribution of the eigenvalues, we want to calculate certain observables: the spacing distribution, the distribution of the largest eigenvalue, the average density of states, and so on; there are many interesting spectral observables to compute. That is the plan. Along the way, when I develop how to calculate correlation functions, spacing distributions and so on, I'll just use the language of very basic quantum mechanics: there is a lot of jargon in the random matrix literature, but you will see it is exactly like solving a simple quantum problem. For instance, the method of orthogonal polynomials is nothing but single-particle quantum mechanics. Hopefully it will all be very clear when we do the details.

Let me summarize the overall introduction of these two days. The basic problem is clear: you give me an N x N matrix, say with some symmetry so that the eigenvalues are real (one can also consider complex eigenvalues, and there are physical examples corresponding to complex eigenvalues), and you give me the joint distribution of the entries. The generic problem is to find, first, the joint distribution of the eigenvalues. There are two steps. Suppose you give me the entry distribution; I gave you the Gaussian example, but you can also choose the entries from a uniform distribution, an exponential distribution, whatever you like. I'll tell you more about the different ensembles of random matrices in the next lecture. Given the joint distribution of the entries, the first task is to find the joint distribution of the eigenvalues. This is a formidable problem, a very difficult one, and in many cases the answer is not known. And we will see that even when the entries are independent, as in the Gaussian i.i.d. case, the eigenvalues are still correlated: the eigenvalue distribution does not factorize. For most cases the joint distribution is simply not known.

Let me give you one simple example, a random network problem. Take a graph; as a simple example, three nodes 1, 2, 3, with a connection between 1 and 2 and between 1 and 3. Such a graph can represent whatever you like: companies with connections between them, buyers and sellers in finance, different cities, many other things. Any such graph, given by a set of vertices and a
For instance, let me give you one simple example, a random network problem. As a simple case, take a graph of three nodes, 1, 2 and 3, with a connection between 1 and 2 and between 1 and 3. Such a graph can represent whatever you like: companies with some connections between them, buyers and sellers in finance, different cities, and many other things. For any such graph, given by a set of vertices and a set of edges connecting them, you can construct what is called the adjacency matrix: A_ij is 1 if i and j share an edge and 0 otherwise. For this graph it will be a 3 by 3 matrix. The diagonal elements you can put to 0 (some people put them to 1, but let's put 0). Between 1 and 2 there is a connection, so I put a 1; between 1 and 3 there is a connection, so again a 1; 2 and 1 is a 1; between 2 and 3 there is no connection, so 0; 3 and 1 is a 1; and 3 and 2 is 0. So the 3 by 3 adjacency matrix is

    A = | 0 1 1 |
        | 1 0 0 |
        | 1 0 0 |

and what you see is that the matrix is sparse (it is called a sparse matrix) because many entries are 0: there is no direct link between most of the nodes. Now suppose I give you a random graph, meaning that each link is there with some probability and not there otherwise. The entries are now random variables; a link is either there or not there, so they are Bernoulli variables with some probability. This is called the Erdős–Rényi graph. And I ask you: what can you say about the eigenvalues of the adjacency matrix, given the Bernoulli distribution of the entries? Nobody knows their joint distribution. The matrix is again symmetric, so there will be n real eigenvalues, but if I ask for their joint distribution, this is a totally open problem. There are very few cases where, given the joint distribution of the entries, you can actually determine the joint distribution of the eigenvalues, and the Gaussian is one of those rare examples; I'll mention later for which classes of distributions you can do it. So this is the first step of the problem.
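Analytically open does not mean numerically inaccessible: sampling the eigenvalues of an Erdős–Rényi adjacency matrix is straightforward. A minimal sketch in the same spirit (Python with numpy; the values of n and p are purely illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 300, 0.05  # n nodes; each possible edge present with probability p

    # Bernoulli(p) variables on the strict upper triangle, then symmetrize;
    # the diagonal is kept at 0, as in the 3-node example above.
    upper = np.triu(rng.random((n, n)) < p, k=1)
    adjacency = (upper | upper.T).astype(float)

    # The matrix is real symmetric, so its n eigenvalues are real, but their
    # joint distribution, unlike in the Gaussian case, is not known exactly.
    lam = np.linalg.eigvalsh(adjacency)
    print("largest adjacency eigenvalue in this sample:", lam[-1])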
In some cases, if you are lucky, you might get an exact form for the joint distribution of the eigenvalues. Then the next step, step two, is this: suppose you know the joint distribution (and we'll see examples of that); from it I want to calculate several observables. This is more like a statistical mechanics problem. In statistical mechanics, for the Ising model, you give me the probability of a spin configuration of the different spins; if I then ask you to calculate, say, the two-point correlation function, in one and two dimensions you can do it, but in three dimensions nobody knows how, even though we know the probability of the different spin configurations. We cannot calculate the correlation functions. So knowing the joint distribution is not a guarantee that you can calculate everything explicitly.

So: calculate the observables, for example the largest eigenvalue. The largest eigenvalue is exactly the extreme value problem that Sanjeev was talking about, in the case when the joint distribution factorizes into individual distributions, the iid case. For independent variables the calculation of the extremes (the largest of them, the maximum) is very easy, as I showed you directly: you can do it with just one integration and you are done. But imagine that you don't have this factorization. This is what I mean by correlations: if you take the Gaussian ensemble, with the entries forming a complex Hermitian matrix, we'll see later that the joint distribution of the eigenvalues is given by a product of Gaussians times something called the Vandermonde factor squared, the product of (λ_i − λ_j)² over all pairs i < j. Every pair of eigenvalues repels each other because of the presence of this factor, and you see that because of it they are strongly correlated. If you did not have this factor they would be totally uncorrelated; the distribution would factorize, and in that case the distribution of the largest eigenvalue would be Gumbel, Fréchet or Weibull, which Sanjeev is going to talk about tomorrow (he already showed you the Gumbel example, and here it would be the Gumbel distribution). But because of this factor it is not going to be Gumbel; it will be something else. The strong correlations between the random variables make the extreme value problem very hard. In this particular case the answer is known, and we'll see that it is something called the Tracy–Widom distribution, another very famous distribution which has appeared in many, many different problems.
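The difference is easy to see in a simulation: compare the fluctuations of the maximum of n iid Gaussians with those of the largest eigenvalue of an n by n complex Hermitian Gaussian matrix. A rough Monte Carlo sketch (Python with numpy; the normalization of the Hermitian matrix is just one convention, and the sample sizes are illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    n, samples = 100, 400

    iid_max = np.empty(samples)
    top_eig = np.empty(samples)
    for s in range(samples):
        # Uncorrelated case: maximum of n iid Gaussians
        # (Gumbel class after centering and scaling).
        iid_max[s] = rng.standard_normal(n).max()

        # Correlated case: largest eigenvalue of a complex Hermitian
        # Gaussian matrix, whose eigenvalues repel each other.
        a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        h = (a + a.conj().T) / 2.0
        top_eig[s] = np.linalg.eigvalsh(h).max()

    print("iid maximum:        mean %.3f, std %.3f" % (iid_max.mean(), iid_max.std()))
    print("largest eigenvalue: mean %.3f, std %.3f" % (top_eig.mean(), top_eig.std()))

Histogramming the two collections of maxima shows two very different shapes: the first approaches the Gumbel form, while the second has the Tracy–Widom fluctuations described above.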
Then you can ask about the number variance, or full counting statistics: how many eigenvalues are there in a given region? Number statistics means that I take an interval, say from −L to +L, or any interval [a, b] if you like (if the eigenvalues are real, they lie on a line), and I ask how many eigenvalues are in it. That number fluctuates from one sample to another, and I can ask what we can say about the mean number of eigenvalues, the variance of the number, higher cumulants, and so on. This is called full counting statistics. I already mentioned that there is also the spacing distribution, the spacing between successive eigenvalues, and there are many other such observables. So this is a more technical problem: you may know the joint distribution exactly (we'll see how to derive it later), but knowing it, computing these things is not always easy. This is a whole subject by itself, more akin to statistical mechanics problems. Of the two steps, sometimes step one is easier and step two is more difficult, and sometimes step one is much more difficult; and if you don't know the joint distribution, you cannot even compute many of these things. Then there are also the correlations between eigenvalues, local and globally scaled correlation functions. These are the different observables that we want to compute starting from the joint distribution.

So basically these are the two steps. The flow chart is: first I want to introduce you to the different ensembles; then we'll see under what conditions you can determine the joint distribution, that is, in which cases we are able to calculate it; and then, after computing the joint distribution, what we can do with it to compute the different observables. That is the main program. And we will see that it is also very closely related to the quantum problem of non-interacting fermions in a harmonic trap, so that using just very basic methods of quantum mechanics we can do many of these computations. That is the goal of the next few lectures. Thank you very much.

[Applause]
Info
Channel: International Centre for Theoretical Sciences
Views: 2,786
Rating: 4.92 out of 5
Id: 5Xm3B8teyOo
Length: 83min 19sec (4999 seconds)
Published: Tue Jul 30 2019