TMS18.L26. Frank Pollmann. Tensor networks and matrix product states (II)

Captions
...compress it, and moreover, starting from this compressed matrix product state form, I showed some ideas for how we can bring it into the so-called canonical form, which then allows us to conveniently evaluate local expectation values, but also correlation functions and the like. But this still leaves out the following problem: what we usually face is that we are interested in the ground state of a specific Hamiltonian, and we do not necessarily have a friend with a huge computer who can do this calculation for us. So what we actually need are tools that find, in this search space, this manifold of matrix product states, the ground state we are interested in, or an approximation of it. I hope that with the discussion we had, I convinced you that for the class of Hamiltonians we are interested in, we would find a solution in this subspace, this manifold of matrix product states; but we do not yet know how to find it, and this is what I want to discuss now.

For now I want to assume that we have a Hamiltonian that is strictly short-range, meaning it only couples nearest-neighboring sites; examples of such Hamiltonians would be the Hubbard model, the Heisenberg model, the Ising model, and so on. What I want to show you now is an algorithm that can do two things. First, it allows us to perform time evolution in real time: say we have a Hamiltonian and a state given at a certain time, and we simply evolve the state in real time to some time t. Moreover, once we know how to simulate the time evolution of a state, we can switch to imaginary time and from this find the ground state wave function: if we start from some arbitrary wave function that has some overlap with the ground state, then applying the imaginary-time evolution operator projects out all excited states and leaves behind just the lowest-energy state. So this is what we want to do.

There are a few steps involved. The first is that, since our Hamiltonian has this specific form, we can Trotterize it, which means we can split its terms into two parts: we decompose the Hamiltonian H into a sum F + G, where the F terms act only on even bonds and the G terms act only on odd bonds. Clearly, all F terms mutually commute, because they have no sites in common, and likewise all G terms commute among themselves. However, F and G do not commute: a G term and an F term can overlap on a site, and if we have, say, an X operator from one and a Z operator from the other acting on that site, they do not commute. This gives rise to the so-called Suzuki-Trotter decomposition: we can approximate the time evolution operator for the full Hamiltonian F + G order by order. At first order we simply write exp(-i δt (F + G)) ≈ exp(-i δt F) exp(-i δt G), and we can go to higher orders and thereby reduce the error we make in δt. So we assume we perform the time evolution in small steps δt; then each step carries an error of order δt² at first order, δt³ at second order, and so on.
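As a minimal sketch of this step (my own illustration, not the lecture's tutorial code), here is how one might build such a two-site Trotter gate in Python, assuming the transverse-field Ising chain as the example Hamiltonian; the function name and conventions are assumptions:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and the 2x2 identity
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
id2 = np.eye(2)

def trotter_gate(dt, g=1.0, real_time=True):
    """Two-site gate exp(-i dt h) for one bond of the transverse-field
    Ising chain, h = -sz.sz - (g/2)(sx.1 + 1.sx); the field term is split
    evenly between the two bonds touching each site.
    For imaginary-time evolution, replace -i dt by -dt."""
    h_bond = -np.kron(sz, sz) - 0.5 * g * (np.kron(sx, id2) + np.kron(id2, sx))
    prefactor = -1j * dt if real_time else -dt
    # reshape the 4x4 matrix into a rank-4 tensor with legs (i', j', i, j)
    return expm(prefactor * h_bond).reshape(2, 2, 2, 2)
```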
Once we have done this decomposition, we have something quite easy to write down, because, as I pointed out, all the terms in F commute; this means the time evolution operator for F can simply be written as a product, over all even bonds, of the two-site time evolution operator for each bond, and the same holds for G. Each of those gates acts only locally on two adjacent sites. So we have boiled the problem of time-evolving states down to something rather simple.

Using the convenient graphical notation again: we have a matrix product state, drawn up here, at a given time t, and we want to perform the time evolution in this Trotterized form. We first apply the product of all the gates coming from F, acting only on the even bonds; in the graphical notation this just amounts to multiplying each unitary U into the two adjacent matrices, here and here and so on. In the next step, the second half of the Trotter step, we have to do the same on the odd bonds. From this picture we see that all we need is an algorithm that starts from a matrix product state at a given time, applies these matrices U, and compresses, or converts, the result back to the form we had before. The moment we have such an algorithm, we know exactly how to do the time evolution: once we have updated the even bonds, we go back and do the update on the odd bonds, and by alternating updates between even and odd bonds within this Trotterization, we can perform both real-time and imaginary-time evolution.

So the question is: how do we get from this form back to our canonical MPS representation? This is done by the so-called TEBD algorithm, introduced by Guifré Vidal a couple of years ago, and the idea is relatively simple. We first go to the two sites where we want to perform an update and contract all the matrices shown here into a new object Θ, as given here. Let me elaborate a little on what we have done, because this is a quite nice representation of the wave function. Recall that our matrix product state in this form carries an index α here, an index β here, and physical indices i and j here; the contracted object Θ is then a representation of the wave function in terms of the Schmidt states |α⟩ to the left of site i, the Schmidt states |β⟩ to the right, and the local states |i⟩ and |j⟩:

|ψ⟩ = Σ_{α,i,j,β} Θ^{ij}_{αβ} |α⟩|i⟩|j⟩|β⟩.

So by contracting these tensors together, we have a representation of the wave function in a mixed basis, using both Schmidt states and local states. Once the wave function is in this basis, we can simply apply our unitary gate to this object and contract, summing over the shared indices, which leaves us with the transformed wave function: the wave function with this one Trotter gate applied to it, having locally changed the entanglement on this bond.
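In code, forming Θ and applying the gate is a handful of tensor contractions. The sketch below assumes Vidal's Γ-λ form with Γ tensors carrying legs (physical, left bond, right bond), and the gate convention from the previous snippet; names are my own:

```python
import numpy as np

def form_theta_and_apply(gate, lam_l, gamma_a, lam_m, gamma_b, lam_r):
    """Contract lam_l Gamma_A lam_m Gamma_B lam_r into the two-site wave
    function Theta_{alpha i j beta} and apply a two-site gate to the
    physical legs. Gamma legs: (physical, left bond, right bond)."""
    theta = np.tensordot(np.diag(lam_l), gamma_a, axes=(1, 1))    # alpha, i, m
    theta = np.tensordot(theta, np.diag(lam_m), axes=(2, 0))      # alpha, i, m
    theta = np.tensordot(theta, gamma_b, axes=(2, 1))             # alpha, i, j, beta
    theta = np.tensordot(theta, np.diag(lam_r), axes=(3, 0))      # alpha, i, j, beta
    # contract the gate's input legs (i, j) with the physical legs of Theta
    theta = np.tensordot(gate, theta, axes=([2, 3], [1, 2]))      # i', j', alpha, beta
    return theta.transpose(2, 0, 1, 3)                            # alpha, i', j', beta
```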
What we want now is to get back from this form to the canonical form we had before, and we use exactly the same trick as earlier: we perform a Schmidt decomposition, that is, a singular value decomposition, of this object at the middle bond. This gives us the new Schmidt values (the Schmidt values have generically changed under the unitary) and new isometries to the left and to the right, from which we can directly extract the transformed Γ tensors for the two sites. If we just keep doing this, we can perform the exact time evolution of our state, but it comes at a cost, which we see in the following way. Assume we initialized our state as a simple matrix product state where each bond has bond dimension χ, i.e., each bond index runs from 1 to χ. Then this Θ object has dimensions (χ·d) × (d·χ), with d the physical dimension, and when we perform a singular value decomposition of it, as done here, the bond dimension has increased from χ to d·χ: this is a (d·χ) × (d·χ) matrix, so the SVD yields d·χ singular values. That is bad news, because each time we apply a layer of these Trotter gates, the bond dimension increases by a factor of d, which means it grows exponentially. The way to overcome this is to perform a truncation at this point: we have exactly the same picture as before, just with more Schmidt values, and we now say that all Schmidt values smaller than, say, 10⁻¹⁰ are simply neglected, hoping that we have made the truncation error small enough that the physical observables we are interested in are not affected.

[Audience question about which bonds grow.] The ones on the left and the right, exactly. But as I showed earlier: if this is our spin chain, we first perform updates on these bonds here, which means the bond dimension increases on those particular bonds from χ to d·χ; at the next level, when we apply the Trotter gates on, say, the odd bonds, we increase the bond dimension there. So we increase the bond dimension alternatingly on the even and the odd bonds. Does this answer your question? Good.
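This truncation step is a plain SVD with a cut. A minimal sketch, again with my own conventions and without truncation-error bookkeeping:

```python
import numpy as np

def svd_and_truncate(theta, chi_max, eps=1e-10):
    """SVD the two-site wave function Theta_{alpha i j beta} at the middle
    bond and keep at most chi_max Schmidt values larger than eps."""
    chi_l, d1, d2, chi_r = theta.shape
    x, s, y = np.linalg.svd(theta.reshape(chi_l * d1, d2 * chi_r),
                            full_matrices=False)
    keep = min(chi_max, int(np.sum(s > eps)))      # s is sorted descending
    s = s[:keep] / np.linalg.norm(s[:keep])        # renormalize the state
    x = x[:, :keep].reshape(chi_l, d1, keep)       # legs: alpha, i, new bond
    y = y[:keep, :].reshape(keep, d2, chi_r)       # legs: new bond, j, beta
    return x, s, y
```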
With this fairly simple algorithm you will also play in the afternoon tutorial session: if you use Python, you can implement these few steps in a few lines of code, and you then have a code that allows you to perform both real-time and imaginary-time evolution with matrix product states. Good.

So far we assumed the system is finite, say with L sites, but there is a quite simple and neat way to apply this method directly in the thermodynamic limit. For this, assume we start from a state that is translation invariant with a particular unit cell, say A B A B A B, and we run exactly this kind of algorithm: we would first do an update here, then an update here, then an update here, and so on. With this translation-invariant state, after doing this update five, ten, or a hundred times, you will realize that each time you are doing exactly the same thing: each time you start from some two-site MPS, contract it, apply the same unitary, and perform the same SVD. But then, instead of redoing this SVD over and over again, you could just say: let us assume we have done all the A-B updates in parallel. You basically do the update once on an A-B bond, in the next time step you do an update on a B-A bond, and you can forget about everything else. You just store two of these rank-3 tensors and alternately do an update on the A-B bond and then on the B-A bond, and by doing it this way you can directly simulate the system in the thermodynamic limit; a loop of this kind is sketched below.

[Audience question about the boundary.] Well, in this case you do not have a boundary. It works like this: if I take this form here, at the first update you have, say, Γ_A λ_A Γ_B λ_B; you contract these and obtain the new Γ̃_A, the new λ̃_A, and the new Γ̃_B, while λ_B has not changed. Then you do the update the other way around, on the B-A bond, so you really just swap the roles of A and B at each step. The idea behind this is that you are, in effect, doing the update in parallel on a million A bonds and B bonds, so you do not have to worry about boundary conditions.

[Audience question: is this the same as periodic boundary conditions?] No, it is not; there is actually a subtle difference. Suppose I have a system with a unit cell of, say, four sites. When I do a singular value decomposition, at each step I have an orthogonal basis of states α to the left and an orthogonal basis of states β to the right. If, however, I think of the system with periodic boundary conditions and perform a Schmidt decomposition, the system is not actually cut into two halves: the Schmidt states α and β would have some overlap. So this setup does not correspond to periodic boundary conditions but to infinite boundary conditions. If I wanted to run the same algorithm with periodic boundary conditions, I could do that too, but I would have to update the last site differently; there is a way using swap gates, but that gets a bit trickier. And yes, applying these gates at a given step, the multiplication by the unitaries is as if we had periodic boundary conditions; it is the way we do the Schmidt decomposition, assuming orthogonal Schmidt states, that makes the difference between infinite and periodic boundary conditions.
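Collecting the previous sketches, a minimal imaginary-time iTEBD loop on a two-site unit cell might look as follows; this builds on the hypothetical helpers form_theta_and_apply and svd_and_truncate from above and omits numerical safeguards such as regularizing the inverse of very small Schmidt values:

```python
import numpy as np

def itebd(gammas, lambdas, gate, n_steps, chi_max):
    """iTEBD on an infinite chain with a two-site (A, B) unit cell:
    alternate gate applications on the A-B and B-A bonds, storing only
    two Gamma tensors and two Schmidt-value vectors."""
    for _ in range(n_steps):
        for bond in (0, 1):                 # 0: A-B bond, 1: B-A bond
            a, b = bond, 1 - bond
            theta = form_theta_and_apply(gate, lambdas[b], gammas[a],
                                         lambdas[a], gammas[b], lambdas[b])
            x, s, y = svd_and_truncate(theta, chi_max)
            inv = np.diag(1.0 / lambdas[b])
            # divide out the outer Schmidt values to recover Gamma tensors
            gammas[a] = np.tensordot(inv, x, axes=(1, 0)).transpose(1, 0, 2)
            gammas[b] = np.tensordot(y, inv, axes=(2, 0)).transpose(1, 0, 2)
            lambdas[a] = s
    return gammas, lambdas
```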
[Audience question: can one avoid breaking the translation symmetry?] Not that I know of, because you are using this Trotter decomposition into A and B bonds, and even though the physical Hamiltonian does not break the translation symmetry, this Trotterization does. I am not aware of any easy way to repair the TEBD algorithm so that it does not artificially break the translation symmetry. There are other algorithms, like the one I will show in a minute, that do not require this, but for the TEBD algorithm you have to live with it. Good.

Let me now come to the second algorithm I briefly want to introduce, the so-called density matrix renormalization group (DMRG). This algorithm has a number of advantages: first of all, it is significantly faster than the TEBD algorithm at finding ground states, and secondly, it is not limited to Hamiltonians with only nearest-neighbor interactions. The TEBD algorithm, through its Trotterization, explicitly assumes that the Hamiltonian only couples nearest neighbors; the DMRG method does not. It is conceptually slightly more complicated, although closely related. For understanding this formulation of the DMRG method, I need to introduce one more concept, namely matrix product operators. Having heard about them several times now, you are already very familiar with expanding states in terms of matrix product states, where we get these kinds of expressions. Now suppose you have an operator acting on that Hilbert space: you can expand it in terms of a matrix product operator, which means each tensor simply has one more leg sticking out. Conceptually, any operator acting on your Hilbert space can be expanded as such a product of matrix product operator tensors. Once you have an operator in this form, you can apply it to a given state by contracting all these lines in the way we discussed early on: this could be |ψ⟩, this could be the Hamiltonian, and H|ψ⟩ is obtained by just contracting all those matrices together. Similarly, we can calculate expectation values of operators. Previously we discussed in detail how to calculate expectation values of local operators, but we can also have operators that are not so local, operators coupling various sites, and again we can write the expectation value as this kind of sandwich: this is |ψ⟩, we apply H to it, and then we contract with the conjugate, ⟨ψ|. Good.

The question is how we can find these matrix product operators, and I want to demonstrate it for a concrete model. Say we have some generic operator coupling nearest-neighbor sites, like the one here, a sum of terms A on site i times B on site i+1, but it could be whatever you like. In compact notation, such an operator means: operator A acting on the first site, tensored with operator B acting on the second site, tensored with the identity everywhere else, plus the identity on the first site, then A, then B, then identities on everything else, and so on and so forth. This is what we want to express now.
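To make the structure of such a sum explicit, here is a small dense-matrix check of the expansion H = Σ_i A_i B_{i+1}; this is my own illustration and is only feasible for short chains:

```python
import numpy as np
from functools import reduce

def dense_sum(A, B, L):
    """Build H = sum_i A_i B_{i+1} explicitly for a short chain of L sites:
    each term is A on site i, B on site i+1, and identities elsewhere."""
    d = A.shape[0]
    H = np.zeros((d**L, d**L), dtype=complex)
    for i in range(L - 1):
        ops = [np.eye(d)] * L
        ops[i], ops[i + 1] = A, B
        H += reduce(np.kron, ops)   # 1 ... 1 A B 1 ... 1 at position i
    return H
```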
Without dwelling too much on this (for those of you who are interested, I can tell you more during the lunch break), there is a very compact way of expressing this in terms of so-called matrix product operators, and the idea is that we can construct finite-state machines that generate all the terms in a given Hamiltonian. Finite-state machines are known from engineering: if you have some setup for operating machines, they sit in certain states, some event happens, they do something else, and they go through a certain cycle. In the same way you can construct Hamiltonians: you start in a 'ready' state and keep placing identities, until you move to an intermediate state by placing an A operator; once you have placed the A operator, all you can do is place a B operator and go to the final state; and in the final state you keep placing identities. This is basically a finite-state machine for constructing these Hamiltonians, and, without much further explanation, I will state that every finite-state machine can be written in this kind of matrix form (a concrete tensor for this machine is sketched after this passage). We then have a very compact way of writing down many of the Hamiltonians we might be interested in.

That is good, because now we can write down an algorithm that variationally minimizes the energy. We are very familiar with matrix product states, so we make some guess for the ground state in terms of the matrices A; we now know how to express our Hamiltonian in terms of this matrix product operator, and we sandwich it between the wave function and its conjugate: this is the energy expectation value. Now we can minimize it: we take out one of the matrices, which, assuming the canonical form, gives us a representation of the Hamiltonian projected into the basis of our orthogonal Schmidt states, and then we use some standard method like Lanczos or Jacobi-Davidson to find the ground state of this effective Hamiltonian in that space. This sort of variational minimization of the energy is much faster than the TEBD algorithm, it allows for long-range interactions, and, in this particular form, it also allows for true translation invariance (coming back to the question from earlier), because here we do not have to explicitly break the symmetry.

The algorithm we derive from this is conceptually quite similar to what we had before: we contract all the matrices around, say, two sites or one site, depending on how we want to do the update; we then have our wave function in a mixed representation, exactly the form we had in the TEBD algorithm; we minimize the energy of the effective Hamiltonian using some standard tools; and we obtain an updated wave function, which we again bring back to canonical form, and we start over, moving to the next site or bond. As before, if the bond dimension has increased, we have to do some truncation at this step. This gives us the DMRG algorithm.
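Read off from that finite-state machine, the MPO tensor for H = Σ_i A_i B_{i+1} is a small matrix of operators. The sketch below uses my own leg convention (wL, wR, physical out, physical in), with boundary vectors (1, 0, 0) on the left end of the chain and (0, 0, 1) on the right:

```python
import numpy as np

def mpo_tensor(A, B):
    """MPO tensor W for H = sum_i A_i B_{i+1}, translated from the
    finite-state machine: state 0 ('ready') places identities, placing A
    moves to state 1, placing B moves to state 2 ('done'), which keeps
    placing identities. Legs: (wL, wR, phys_out, phys_in)."""
    d = A.shape[0]
    W = np.zeros((3, 3, d, d), dtype=complex)
    W[0, 0] = np.eye(d)   # ready -> ready: identity
    W[0, 1] = A           # ready -> intermediate: place A
    W[1, 2] = B           # intermediate -> done: place B
    W[2, 2] = np.eye(d)   # done -> done: identity
    return W
```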
Let me now make some comments on these algorithms. First of all, the algorithms I sketched here can, when implemented, be made much faster by using conserved quantities: for example, if you have a Hamiltonian that conserves the particle number or the total magnetization, this can be explicitly built into the construction, and this makes the algorithms significantly faster.

Another trick: if you are using this algorithm directly in the thermodynamic limit, we can use the transfer matrix that we encountered before, when we were constructing the canonical form. We know that the dominant eigenvector of this transfer matrix is the identity, because of the canonical form, i.e., because the Schmidt states are orthonormal; but the second-largest eigenvalue is directly related to the longest correlation length in the system. This is a quite handy tool if, for example, we want to look at critical points, where we know the correlation length diverges.

Let me take a little time to explain this. Say we have some Hamiltonian, in this case the transverse-field Ising model. We know that for a small transverse field we are in a gapped phase, a ferromagnet, and for large g we are again in a gapped phase, a paramagnet. Recall that I pointed out that the algorithm and these ideas work in gapped phases, but in between there is a critical point where the system is gapless, so there we cannot write down the ground state exactly, or even efficiently, in terms of a matrix product state. This is sketched here: if we plot the correlation length of the system as a function of the transverse field, the correlation length diverges at the critical point; we have an infinite correlation length. Now let us obtain the correlation length from the matrix product state produced by these algorithms. We find the following: if we stay far away from the critical point, we recover practically the exact state, because the truncation we perform is very small, and we get exactly the correct correlation length; the same is true deep in the other phase. But as we approach the critical point, the correlation length grows, and we need a larger and larger bond dimension, more and more states, to describe the state accurately. This is shown here: with a small bond dimension, already around here the correlation length of the optimized MPS differs from the actual physical correlation length, and if we increase the bond dimension, we get a better approximation. In fact, by looking at how this scales, we can learn something about the universality class of the critical point.
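The correlation-length diagnostic mentioned above can be sketched in a few lines; this assumes a one-site translation-invariant MPS tensor in canonical form with legs (physical, left bond, right bond), and the names are my own:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

def correlation_length(B):
    """Correlation length of an infinite, translation-invariant MPS from
    the two largest transfer-matrix eigenvalues; in canonical form the
    dominant eigenvalue is 1 and xi = -1 / log|eta_2 / eta_1|."""
    d, chi, _ = B.shape
    def transfer(v):
        # act with T = sum_i B^i (x) conj(B^i) on a (chi x chi) matrix
        m = v.reshape(chi, chi)
        m = np.tensordot(B, m, axes=(2, 0))                    # i, vL, vR*
        m = np.tensordot(m, B.conj(), axes=([0, 2], [0, 2]))   # vL, vL*
        return m.reshape(chi * chi)
    T = LinearOperator((chi**2, chi**2), matvec=transfer, dtype=complex)
    eta = np.sort(np.abs(eigs(T, k=2, which='LM',
                              return_eigenvectors=False)))[::-1]
    return -1.0 / np.log(eta[1] / eta[0])
```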
Good. This concludes the first part of what I wanted to talk about, which was an overview of how the specific entanglement properties of these one-dimensional, gapped, local Hamiltonians can be used to show that we can efficiently simulate these systems using matrix product states. I discussed two algorithms, namely the time-evolving block decimation and the density matrix renormalization group, as two algorithms that allow us to work in this manifold of matrix product states to find good approximations of ground states. I will get started on the second part after the coffee break, but maybe that leaves us some time for questions.

[Audience question: how large a system can you simulate?] How large in terms of the bond dimension, you mean; the physical size, if it is infinite, is infinite. If you use this translation-invariant setup, as here, you only need to store two of these rank-3 tensors, because you know the state is translation invariant; that is all you need to describe the infinite system. The only question that remains is how large a bond dimension you can simulate on a given machine.

[Audience question: does this also work for DMRG?] Yes: the same idea I described before, using a translation-invariant state, can also be used in the DMRG setup, so it works for both TEBD and DMRG; it is exactly the same idea. When you do these updates, you just reuse the unit-cell tensors instead of storing a separate tensor for every site.

One more remark: I realize this was too fast to understand all the details; I just wanted to give you the main idea, which is that instead of doing an imaginary-time evolution, as in TEBD, DMRG variationally minimizes the energy. You know how to calculate the energy for a given matrix product state, and you have some way to minimize this energy; that is the main idea. I then sketched very quickly how to actually do this minimization, but before implementing it you might want to read some lecture notes; I will come to references later on.

[Audience question about which states can be represented efficiently.] There is a proof that if you have states obeying an area law for the so-called Rényi entropies with index α smaller than one, then those states can be represented efficiently in terms of MPS.

[Audience question: how do you implement these charge degrees of freedom?] Right, the idea is the following, and it is perhaps easiest explained via the Schmidt decomposition. Suppose you have a state written with coefficients ψ over the local states i_1 through i_L, and you bipartition the system into two parts, and say the total number of charges is conserved. Then you can assign a quantum number n_left to the left part and n_right to the right part, and you know at every given time that n_left + n_right equals the total particle number, which is conserved. This tells you that in this representation of the wave function, with some basis i here and basis j there, the matrix ψ_{ij} is block diagonal: you have nonzero contributions only when n_left(i) + n_right(j) equals the total particle number. And this block structure is something you can then use to accelerate these
codes by exploiting symmetries. The way you improve things is basically by reducing the space in which you have to do the optimization. As I just said in answer to the previous question, if you have certain symmetries, for example particle number conservation, you know that a wave function in this form will be block diagonal, so when you perform any contractions of these tensors, you know in advance where the zeros are, and you simply do not iterate over those; that is what makes it faster.
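As a small illustration of that block structure (my own example, hard-core particles on a short chain of even length), one can group the wave-function matrix into its charge blocks and see that only blocks with n_left + n_right equal to the total particle number can be nonzero:

```python
import numpy as np
from itertools import product

def charge_blocks(psi, L, n_total):
    """Split a chain of L two-state (empty/occupied) sites in the middle
    (L assumed even) and group the wave-function matrix psi[left, right]
    into charge blocks; contractions need only run over these blocks."""
    half = L // 2
    # particle number of every basis state of one half-chain
    n_half = np.array([sum(bits) for bits in product((0, 1), repeat=half)])
    psi_mat = psi.reshape(2**half, 2**half)
    blocks = {}
    for n_left in range(n_total + 1):
        rows = np.where(n_half == n_left)[0]
        cols = np.where(n_half == n_total - n_left)[0]
        blocks[(n_left, n_total - n_left)] = psi_mat[np.ix_(rows, cols)]
    return blocks
```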
Info
Channel: Topological Matter School
Views: 838
Rating: 4.2 out of 5
Keywords: DMRG, Matrix Product States, topological matter
Id: MphxufOslgw
Length: 42min 9sec (2529 seconds)
Published: Wed Oct 03 2018