Norbert Schuch: Matrix product states and tensor networks (I)

Captions
Good morning everyone, and welcome back. Our tutorial speaker this morning is Norbert Schuch from the Max Planck Institute in Garching, and he is going to tell us about matrix product states and tensor networks.

All right, thanks a lot. Well, it's a great pleasure to be here; thanks to the organizers for inviting me. I will try to give an overview of the field of matrix product states and tensor network states, which is something that came out of the field of quantum information and is being used to study complex quantum systems found in various physical scenarios. Overall, I would encourage you to ask questions. Well, it's a big audience, so use a bit of caution; if everyone asks one question during the lecture we're probably in trouble, but generally feel free to ask something if you feel it's a good point to ask.

All right, so what is the motivation for looking at these kinds of things? The origin is studying complex quantum systems: quantum systems consisting of many constituents. Of course, in the most general sense these kinds of systems are all around us; all matter is built from lots of particles, and they are all quantum, but many of them show very classical behavior, like this desk here. However, it turns out that if we go to more extreme conditions, to low temperatures for instance, or to very well isolated systems, these systems can exhibit quantum behavior. We all know that: we can build quantum bits which exhibit quantum behavior. But it happens that even complex quantum systems consisting of many, many particles can still exhibit behavior which is very quantum in some way, and this happens in various areas of physics. In condensed matter we can go to low temperatures, and there are, for instance, topologically ordered materials; I will talk about them later. If we study molecules, it turns out that for most molecules the electrons are in a very complicated quantum state, and it's very hard to understand these systems. High-energy physics has some of these problems in an even more intricate form, because particle number is not preserved, so we can have more and more particles in a complicated quantum state. On the one hand these systems are extremely exciting, because they exhibit new physics which we don't see in everyday life in normal materials, and which could be interesting for lots of new tasks like quantum computing or high-precision measurements. On the other hand they're very hard to study, because they have this complex entanglement. The idea is to look at these systems from the point of view of quantum information, because there we have thought about entanglement for a long time: we have developed all kinds of tools to characterize entanglement, to think about entanglement as a resource which we can use to do things, to teleport, to communicate, or to do cryptography. So we have a very developed perspective on entanglement, and the idea is to bring these things together: to study these quantum systems, which exhibit strong, non-trivial quantum correlations, from the angle of quantum information and entanglement theory, and to use the tools we have developed there to gain better insight into these systems.

All right, so the overview of the talk is that I start by introducing quantum many-body systems, talking about their entanglement and trying to explain why their entanglement is special, and then I will use this perspective
to motivate the ansatz of matrix product states and tensor network states, which can be used to describe these systems, and then show in a number of scenarios how this can be useful, how it can give us a new perspective on these kinds of systems. So let me start by talking about the entanglement structure of quantum many-body states, and first of all we should say what we mean by quantum many-body states. There is really a very wide range of quantum many-body systems, and it really matters from which perspective you look: a condensed matter physicist will probably say that a many-body system consists of nuclei, plus probably the inner electrons, and some outer electrons which bind these things together; if you go to higher energy scales you will say the nucleus consists of protons and neutrons; at even higher energy scales there will be quarks and gluons. So it really depends on which scale you look at; maybe for a quantum information scientist the world consists of qubits. It really depends on your perspective. What we will look at are spin systems, or, if you wish, from a quantum information point of view, qubits, because they capture the essence, or many of the interesting features, of these systems, while avoiding some of the complications of, say, continuous space or fermionic particles. These spins, these qubits, sit on some kind of lattice, and I will usually draw a square lattice. That's mostly because it's convenient: square lattices are easy to draw, and it's pretty tricky to draw some other lattices. It could be some other lattice, but what is really important is that it should be some kind of regular structure, and it should have some notion of a dimension: it could be a one-dimensional chain (we'll start with one-dimensional chains), it could be a two-dimensional lattice, and we could go to higher dimensions. Then the question is why we need this lattice, what the relevance of this lattice is. The relevance comes from the fact that the system is described by some way in which the particles interact, which is given by some Hamiltonian: we have some operator which is a sum of terms, and each of these terms only couples, say, two spins, or maybe a small number of spins, in some local fashion. There is some notion of locality in these systems. And if we study large systems, typically we will demand that they have some translational symmetry, because we can't just specify a different Hamiltonian term everywhere; well, we could, but it probably wouldn't make so much sense. We could also break the translational invariance by putting, say, different Hamiltonian terms in different places in space; they could vary in space, or things like that. So it doesn't have to be translation invariant; I will switch occasionally between translation-invariant and non-translation-invariant systems, and if it matters in a specific case, ask me whether it's translation invariant or not. Now, these spin systems, you might say, are a bit artificial, because materials usually consist of electrons and things like that, but it turns out there are actually many different ways to obtain systems like that, both in nature and in fact also in the lab. For instance, there are materials which basically form some crystal where there are very localized electrons that don't move, so we really have spins sitting at fixed positions in space, which nevertheless interact via some quantum-mechanical mechanism, some exchange interaction or something like that.
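To make the notion of "a sum of local terms" concrete, here is a minimal sketch, not from the lecture itself; the Heisenberg coupling and the chain length N = 6 are just stand-in choices used for illustration.

```python
# Minimal sketch (illustration, not from the lecture): a local Hamiltonian
# H = sum_k h_{k,k+1} on a short open chain of spin-1/2 particles.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def two_site_term():
    """Heisenberg coupling h = S.S on two neighbouring spins (a fixed 4x4 matrix)."""
    return 0.25 * (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z))

def chain_hamiltonian(N):
    """H = sum_k 1 x ... x h_{k,k+1} x ... x 1 as a dense 2^N x 2^N matrix.
    Note the description only needs N-1 copies of one small term: linear in N."""
    h = two_site_term()
    H = np.zeros((2**N, 2**N), dtype=complex)
    for k in range(N - 1):
        H += np.kron(np.eye(2**k), np.kron(h, np.eye(2**(N - k - 2))))
    return H

H = chain_hamiltonian(6)
print("ground-state energy of the 6-site open Heisenberg chain:", np.linalg.eigvalsh(H)[0])
```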
But we could, for instance, also make optical lattices with counter-propagating laser beams, trap atoms there, and then use two isolated levels of each of these atoms to simulate a system of spin-1/2 particles, of qubits, on a square lattice, and then tune them and look at their physics. And usually, in reality, like in the case of electrons, these spins carry magnetic moments, so this is really also interesting for describing the magnetic nature of materials. Now, what I should also say is what we are interested in, from a physical point of view, in these systems: basically, we're interested in the physics at low temperatures. One reason is, of course, that we're looking for systems which show unconventional, very quantum, very entangled behavior. We know that as we ramp up the temperature, the state of our system gets more and more mixed, and we know that if the state gets more and more mixed it usually gets less and less entangled. So if we expect the system to behave in a very non-classical way, in a way which relates to its entanglement, we will most likely expect to see this behavior at low temperatures. So we look at low temperatures, and the lowest-temperature state is the ground state: the eigenstate with the smallest eigenvalue of this Hamiltonian. It also turns out that lots of the interesting physical properties, even for states above the ground state at low temperature, are already encoded in features of the ground state. So a lot of features are not necessarily connected with the Hamiltonian directly; it's sufficient to know things about the ground state, and in this talk I will really focus on the ground state. In fact, I will not talk about Hamiltonians so much, so if, say, the computer scientists find Hamiltonians scary, just thinking of quantum states will be fine for most of the talk. OK. So we have this quantum system sitting on some lattice, some 1D or 2D lattice, and we would like to describe its physics. If you look at what is done in most areas of condensed matter physics, traditional condensed matter physics, it is to neglect the entanglement in the quantum state, and the point is, this works amazingly well. Really, most of condensed matter works like that, and it's maybe not so surprising, because at normal, say room, temperature, systems will not exhibit much entanglement, exactly because in a highly mixed state there is not much entanglement present. So it turns out that a lot of the physics of these kinds of systems, in many, many cases, can be understood by neglecting the entanglement, basically by starting from an ansatz which just assumes the spins are in a product state, that they don't talk to each other at all, and this is known as mean-field theory. And if you look at that, what happens is that you can fully understand what's going on by looking at the interaction between two spins: saying each spin is in the product state, you just have to take the expectation value of a local term of the Hamiltonian in that product state, and this will tell you everything. It's really sufficient to know one of these spins, and then you get very simple expressions for, say, the energy of the system, and so on. One way to understand why a mean-field approach is good in many cases would be to talk about thermal states, but it turns out that even at low temperatures in, say, three-dimensional systems, which most systems are, this works very well.
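A minimal sketch of the mean-field idea just described: assume every spin is in the same single-spin state and minimize the expectation value of one local term in that product state. This is an illustration, not the lecture's example; the transverse-field Ising term and the field values g are stand-in assumptions.

```python
# Minimal sketch of mean-field theory (illustration, not from the lecture):
# the energy per bond is just <phi,phi| h |phi,phi> for a single local term h,
# minimised over one single-spin state |phi>.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def ising_term(g):
    """Two-site term h = -Z.Z - (g/2)(X.1 + 1.X); the field is split over the two bonds."""
    return -np.kron(Z, Z) - 0.5 * g * (np.kron(X, I2) + np.kron(I2, X))

def mean_field_energy(g, n_theta=2001):
    """Minimise the energy per bond over product states (one Bloch angle suffices here)."""
    h = ising_term(g)
    best = np.inf
    for theta in np.linspace(0, np.pi, n_theta):
        phi = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
        pair = np.kron(phi, phi)
        best = min(best, np.real(pair.conj() @ h @ pair))
    return best

for g in [0.5, 1.0, 2.0, 3.0]:
    print(f"g = {g}:  mean-field energy per bond = {mean_field_energy(g):.4f}")
```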
One reason, from a quantum information point of view, is to say that if we're in three dimensions, on a cubic lattice, each spin has six neighbors, two in each direction, and if it's a symmetric system it should be equally entangled with all six of them. Now, we know that if we share entanglement equally among many parties, there cannot be much entanglement; that's the statement of monogamy of entanglement. So monogamy indeed tells us that the higher the spatial dimension is, the less entanglement we expect, and indeed it turns out that these exotic phases, phases which don't fit in this picture, tend to show up mostly in lower dimensions, say in two dimensions or even in one dimension. So this is really how most matter can be described: basically, all we need to know is the behavior of a single spin, and then, from this local property, we can completely describe the behavior of the global system. This also tells us, because by looking at a single spin we can say everything about the system, that we don't expect weird behavior if we put the system on a sphere, if we close the boundaries, if we put it on a torus: nothing funny should go on, because everything that happens is determined by a local property. Global properties should not matter, because the Hamiltonian only sees local properties, and the state is also only characterized by local properties; there is nothing global going on here. Now, that's not what we're interested in. What we are interested in are systems which don't fall into this framework, and these are usually called topological phases, or topologically ordered systems. They have a number of distinct features, and the one we are particularly interested in is, first of all, that if we take such a Hamiltonian and put the system, say, on a torus, the ground state will exhibit some degeneracy. Other systems do that as well, but the funny thing is that if we take the same Hamiltonian and wrap it on a sphere, it will not exhibit a degeneracy. So the system exhibits a different number of possible states depending on global properties, on the topology of the system; it might also depend on the number of punctures we put in a plane, if we don't put it on a closed surface but on a surface with holes, or things like that. And if you think about what I said before, that is completely incompatible with such a description, because this description cannot care about how you close your boundary: the behavior is fully characterized in a local way, so this cannot fit in this framework. The other exotic thing is that these systems exhibit very strange excitations. Usually, if all spins are unentangled, I can just flip a spin, and this will be a state which is not the ground state anymore, an excited state, a completely local object. These kinds of systems can exhibit excitations which come in pairs, and once we move them around each other they behave very strangely: they can exhibit a phase, like fermions would, although it's a spin system and there are no such phases in spin systems; they could even behave in a non-abelian way, where moving them around each other corresponds to what we would describe as some kind of matrix multiplication. That's very unconventional. One example you might know is Kitaev's toric code, which has been proposed as a quantum memory. So these kinds of systems are interesting for several reasons: one is of course that they have very exotic physics, so from a condensed matter physics point of view they're very
exciting, but actually also from a quantum information point of view they're very exciting, because, for instance, this degeneracy of the ground space has been proposed as a way of storing information in the degenerate ground space. And you can again see that because these ground states are not distinguished by a local property (otherwise they would fall into this mean-field description), they should also be protected against any kind of local noise. As for these anyons: if moving them around each other corresponds to some matrix multiplication, we could use this matrix multiplication as a universal gate set for quantum computation, and there are indeed schemes for doing quantum computation by braiding these excitations around each other. The important thing to see is that this is not possible with a mean-field ansatz, so really what is going on is that the system exhibits some non-trivial global entanglement pattern; that's the only way we can have such phenomena. So if we want to understand and describe these systems, we need to think about the structure of their entanglement; we need to think about how the entanglement in these states is built up so that it can form these global patterns, and that's where quantum information enters the stage. So, what can we say about the entanglement of these kinds of systems, or more generally about quantum many-body systems? One point in favor of mean-field theory is that it gives a simple answer: you can just write down one single-spin state on paper, and you have described a very big system very easily. We could of course, naively, try to describe the state of one of these quantum many-body systems in the same generality: we can just expand the state in some basis, some local basis of the N spins in the system. But the problem is that this coefficient tensor has very many parameters. You should keep in mind that when we talk about condensed matter systems, these are macroscopic or mesoscopic systems, so N should be like millions or even more, 10 to the 23, or at least 10 to the 7 or so would be a reasonable scale; these systems are generally very big, with thousands of particles at the very least. So of course the number of coefficients completely explodes: we know that the dimension of a many-body Hilbert space grows exponentially, just as in quantum computing. We have this exponentially large Hilbert space (this might not look exponentially large, but it is), and we have no way of succinctly describing this state. If we want to understand what the system is doing, this will be very complicated: unlike the mean-field ansatz, which is a simple description, we have no such description whatsoever here. So it seems like the situation is hopeless once we want to start describing entangled systems in such a language. But if we think about it for a while from the point of view of information, of complexity if you wish, we know that we're looking for ground states of some local interaction, some local Hamiltonian. We have this Hamiltonian which only couples, say, nearest-neighbor sites, so for each spin there's one term in the horizontal and one in the vertical direction, and each has a constant number of parameters. So actually all of these states we are interested in are specified by a very small number of parameters, something which only scales
linearly with the system size. So this tells us that there should be some kind of small corner in our Hilbert space, which some people have termed the physical corner of Hilbert space, in which the states we're actually interested in live. Now, it's not really a corner; that's a cartoon picture, and it's probably some very strange submanifold, but anyway, the main message is that this huge Hilbert space is mostly full of states we don't care about; from an information point of view we can't even reach them with a quantum computer in finite time. So we're really only interested in a very small subset. The problem is that we could of course describe the system by the Hamiltonian, but computing properties from the Hamiltonian is a challenging task. We would like to do it in a way which gives us more direct access to the properties of the system, which are usually very hard to extract from the Hamiltonian itself. That's what we're looking for: a nice way of succinctly describing the states in this physical corner, maybe in a similar way as mean-field theory does, from a local description. We know mean-field theory won't do, because we have these entangled phases, but maybe it is a starting point: we would like to keep some of the locality structure, because locality is also present in the interactions, and that's what makes mean field so powerful. So in order to get there, we should look at how the entanglement in these systems is structured. Well, how should we measure entanglement? I guess all of you know this: if we take some big quantum system, we can cut it into two pieces, and I will call the left half A and the right half B; across this cut we can do a Schmidt decomposition. We decompose the state into orthonormal bases for left and right with some Schmidt weights p_k, and the entanglement basically relates to how fast these coefficients decay: if the distribution is flat, we have a maximally entangled state; if they decay very rapidly, the state is not very entangled. So entanglement relates to the disorder in the Schmidt coefficients, and the canonical way to measure that is the entanglement entropy, for operational reasons, as we know from quantum information. What we do is compute the entropy of this distribution p_k, which is the same as the entropy of the reduced state of one of the two halves, and that's what we use to quantify entanglement. But in the end, at least from my point of view, it's not so important for the condensed matter applications to use the von Neumann entropy; the important point is that the entropy tells you something about the disorder in the Schmidt coefficients, and that's what matters. So let's look at a quantum many-body system and ask what we can say about the entanglement in this system. Let's cut out some region, say some square-shaped region, look at the reduced state of that region and compute the entropy. We can compute this, but this number alone is maybe not so meaningful. So what we could do is take our region, make it bigger and bigger, and look at the entanglement of that region with the rest of the system. If you do that, what people find is that all these systems have an entanglement which scales like the boundary of that region.
If you think from a quantum information point of view and take a generic, typically random quantum state, and ask how much one part of the state is entangled with the other one, it will usually be almost maximally entangled: this region, for a random state, would basically be maximally entangled with the outside, so the entanglement would be proportional to the volume. However, what happens for ground states of local interactions, for quantum many-body systems, is that this entanglement is much, much smaller: it actually scales like the boundary of that region, not like the volume. This is known as the area law: "area" because in three dimensions this would be the surface area of a three-dimensional volume; here it's rather the boundary length. And again, this is a very special thing, something which you would not expect for random states. It tells you that ground states of local Hamiltonians are very, very special; we knew that because the Hamiltonian has few parameters, but now we know they're also very special in the entanglement sense. I should say this is actually only really true for Hamiltonians which have a spectral gap; if there's no spectral gap there can be a small deviation, but it still scales roughly like the boundary and doesn't grow much faster, so it is still a very special state. I don't want to talk about the spectral gap much; it's important for the stability of systems, since if the excitations of a system have a certain minimal energy one would expect the system to be more stable, but we won't really need it here, so it's just a remark. So, intuitively, what does it tell us that the entanglement goes like the boundary? If we were to make a cartoon picture of our system, we would say: OK, we have cut out a region, and we would expect this to mean that the entanglement in our system is distributed locally, close to the boundary. We only expect the degrees of freedom close to the boundary to be entangled with each other, while everything on the inside is not entangled with anything on the outside. That's certainly the most plausible explanation for having such a scaling law, rather than entangling funny degrees of freedom here and there in just such a way as to get this law. So what we see is that the entanglement should be distributed only around the boundaries of a region, and from the point of view of the interactions this is maybe not so surprising, because the interactions in your Hamiltonian, as you lower the temperature, tend to entangle the degrees of freedom on which they act. The interaction is local, so of course we would expect the entanglement to be built up locally by the Hamiltonian: basically, the Hamiltonian forces these local degrees of freedom to be entangled, and wherever there's no interaction term coupling two sites they might still be entangled, but not for a direct reason; they might only be entangled for indirect reasons, like the overall state being more favorable. So it is indeed very plausible to expect this behavior. I should say this is proven in one dimension for gapped systems; in two dimensions and beyond it is numerically well established, but we don't have a proof, and that's certainly one of the open challenges in the field.
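To make the contrast concrete, here is a small sketch (an illustration, not from the lecture) that performs the Schmidt decomposition of a cut via the singular value decomposition and evaluates the entanglement entropy of a random state across a half-half cut; the number of qubits n = 12 is an arbitrary choice. For a random state the result comes out close to the maximal value n/2, a "volume law", unlike the boundary-law scaling of ground states.

```python
# Minimal sketch (illustration, not from the lecture): Schmidt decomposition via SVD
# and the half-chain entanglement entropy of a random n-qubit state.
import numpy as np

def entanglement_entropy(psi, dimA, dimB):
    """Von Neumann entropy (in bits) across the cut A|B of a pure state psi."""
    M = psi.reshape(dimA, dimB)                 # coefficients c_{ab} as a matrix
    s = np.linalg.svd(M, compute_uv=False)      # singular values = sqrt of Schmidt weights
    p = s**2
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
n = 12
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

S = entanglement_entropy(psi, 2**(n // 2), 2**(n // 2))
print(f"half-chain entropy of a random {n}-qubit state: {S:.3f} bits "
      f"(maximum possible: {n // 2} bits)")
```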
OK, so now let's try to write down an ansatz which captures this entanglement structure, and I will start with one dimension, because in one dimension things are simpler to draw, simpler to motivate, and we can also prove some more results. What we will do is describe a chain of spins in a way where the entanglement is built up locally, and to this end we think of each spin as being built up of two sub-spins: each of these systems has a left subsystem and a right subsystem. That's a cartoon picture; in the end we will do some more things, so bear with me. So we split the system, and in the next step we say: let's entangle these guys, let's entangle the right subsystem of this spin with the left subsystem of the next spin, and we put in the state omega_D, which is a maximally entangled state of some dimension D. We call this D the bond dimension, we call these pairs bonds, and we put these auxiliary entangled particles between the sites. Now, this state has a couple of nice properties: for instance, if we cut any region out of that state, the entanglement of that region with the outside is only given by the bonds we cut, and we only cut bonds at the boundary, so it satisfies this boundary law, this area law, by construction. On the other hand, it also has some undesirable properties: for instance, this spin here and that spin there are completely uncorrelated; the only spins which are correlated are nearest neighbors, because they share some entanglement. That's very unphysical: we want systems where we also have correlations between sites at some distance; the correlations will decay, but they won't be zero immediately. So we do a second thing: out of these degrees of freedom, we filter some effective degrees of freedom. What we do is apply some linear map at each site (k is a site label); this linear map takes the two capital-D-dimensional systems and maps out the actual physical degree of freedom, and that's the system we describe. So this is really only an auxiliary construction, using some extra, virtual or entanglement, degrees of freedom to construct an actual physical spin of a smaller dimension, small d, out of systems of a bigger dimension, capital D. This map is sometimes termed a projector, and I might use this word, but it doesn't have to be a projector; any linear map is fine, in fact. If you want to write a formula, you can say that psi, the final state, is P_1 tensor P_2 tensor ... tensor P_N applied to omega_D tensor N, where of course this tensor product refers to a different partition than that tensor product, just as depicted here. Now, a few things. As I said, these states satisfy the area law by construction, and we also know that these linear maps cannot increase the Schmidt rank; the bond states were already maximally entangled, so the linear maps cannot increase the entanglement either. We also see that this is actually an efficient class of states, in the sense that once we fix this dimension capital D, each of these P's is specified by a fixed number of parameters, which is quadratic in capital D, and we have a linear number of them, so the number of parameters only grows linearly; it's a moderately growing family of states. And we can also make D larger and larger to describe more states: for instance, if we increase D we can still get all the states we had before, just by completely neglecting the extra dimensions we added, since our map P can just project back to the original entangled state with the smaller dimension. So we actually get a growing family of states which can describe more and more states.
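A minimal sketch of this construction follows (illustrative only: the random maps P, the bond dimension D = 3, physical dimension d = 2 and the small ring of N = 4 sites are assumptions for the example, not the lecture's choices). Neighbouring sites share a maximally entangled pair omega_D, and a linear map P at each site turns the two virtual systems into one physical spin.

```python
# Minimal sketch (illustration, not from the lecture): the "pairs + maps" construction
# for a small ring of N = 4 sites.
import numpy as np

d, D, N = 2, 3, 4
rng = np.random.default_rng(1)

omega = np.eye(D) / np.sqrt(D)          # |omega_D> = sum_a |a,a>/sqrt(D), as a D x D tensor
P = rng.normal(size=(N, d, D, D))       # P[k, i, a, b] = <i| P_k |a, b>, one linear map per site

# psi[i1,i2,i3,i4]: apply the maps to the ring of pairs; each pair omega connects the
# right virtual leg of site k to the left virtual leg of site k+1 (periodic).
psi = np.einsum('iab,jcd,kef,lgh,bc,de,fg,ha->ijkl',
                P[0], P[1], P[2], P[3], omega, omega, omega, omega)
psi = psi.reshape(-1)
psi /= np.linalg.norm(psi)

print("number of amplitudes:", psi.size)                    # d^N
print("number of parameters in the description:", P.size)   # N * d * D^2, linear in N
```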
And I will convince you later that, first of all, if we keep increasing D we can describe any state we want in this way; we get the full Hilbert space, though of course then it's exponentially expensive, which is not desirable. I will also try to convince you that if our states have an area law, so if their entanglement is small, then we can actually describe those states with a moderate dimension D, so we can do this efficiently. Before I get there, I would like to play a bit with this description and try to see what it means in terms of formulas; can we find a more direct expression for it? This was the quantum information perspective: it talks about entangled states and applying some map to them. We could also try to write a formula where we expand this state in some conventional computational basis. So let's try that. If we want to expand this in a basis, we should start with two sites, see what happens, attach a third site, and inductively work our way to the full expression. So let's start with two sites; I label the virtual systems a, b, c, d here. This map P we can just expand in some basis: it takes two systems, with indices alpha and beta, and maps them to a system i, so it is described by a three-index coefficient A^s with indices alpha, beta, i, where s just labels which site we are at. So what we need to do (I apologize, but it's less scary than it looks, and it's probably the longest formula in the whole talk) is put one entangled state in the middle and apply P_1 on the left and P_2 on the right. You see, that's exactly putting these things together: that's the maximally entangled state (I omit normalization), that's the map for the first site, and that's the same kind of linear map for the second site. So I just put these three objects there, two maps and one state, and now we should see what happens. We have kets on b, c here, and bras on b, c there, so ket and bra will meet and go away. How do they meet? This ket meets the beta here, the second ket meets the gamma, and they act like a delta function: they enforce the two indices to be equal, and we're summing over them, so in the end it just means that this index has to be equal to that index. Putting a maximally entangled state there just means that this degree of freedom and that degree of freedom must be identical; that's what a maximally entangled state does. So all we have to do is take these A's together, and since the beta index and the gamma index are equal, I put the same index in both places. If you stare at this for a second, you see that if I think of these objects as matrices, with matrix indices alpha and beta, and beta and delta, this is just matrix multiplication: I am just multiplying the matrices describing what happens at the first site and at the second site, so I get a matrix product, and that's it. So what I have is that these two maps, with a maximally entangled state between them, very much resemble the object I started with: they almost do the same thing, but now they take two systems, the very left system at the boundary and the very right one, and map them to two physical systems. It's obvious if you look at the picture, but this is what the formula tells you, and it tells you that the new map is described by a matrix, or
a set of matrices, obtained just by multiplying the matrices you started with. And now you can keep iterating: the next site, and the next, and you keep multiplying matrices. So in the end it turns out that the whole state can be written as a product of matrices: to each value of each physical spin there corresponds a matrix, where the first site has a matrix labeled by the value of the first spin, the second one is labeled by the value of the second spin, and so on, and all I have to do is multiply these matrices and put a trace. The trace is missing on the slide, which is embarrassing, because I took this slide from somewhere else where "trace" was written over it, which meant I had forgotten to put the trace in the talk before, and for some reason it's gone again; anyway, there should be a trace. So we multiply these matrices, we take the trace, and this number gives us the weight of the corresponding basis state. That's why these states are called matrix product states: the weight of each configuration is given by a product of matrices, where the matrix depends on the value of the physical spin. We could also do this for open boundaries; I use periodic boundaries because it allows us to make things translation invariant later, but with open boundaries we just have to terminate the very left and the very right in some way. Now, for these things there is a graphical calculus, because we don't like writing these long formulas (well, what we had before was also a graphical calculus, so it's just a different one). In this graphical calculus we say that this A, which describes this map, is something which has three indices: one index corresponding to the left degree of freedom, one to the right, and one to the physical one, and we draw it as a box with three legs. So for a site s I have a three-index tensor A^s with indices alpha, beta and i, which via exactly this formula describes this linear map; in the end this tensor is just a linear map, if you wish, mapping the left and right index to the top index. The second part of the graphical calculus is: first, we denote tensors with, say, three indices by a box with three legs; second, if we do matrix multiplication, that is, if we identify and sum over an index, if we contract an index, then graphically we connect the corresponding legs. Connecting legs means identifying that index and summing over it. So, for instance, in a summation like the one we had earlier, we connect the two corresponding boxes, and that corresponds to multiplying these two matrices specified by i and j. For a matrix product state it means we have this trace of A^{i1}, A^{i2}, and so on; the corresponding graphical picture is the A_1 tensor here, the A_2 tensor here, A_3 here, with the open indices corresponding to the values of the physical spins, and then we just have this matrix product, and the trace simply means we connect these legs and build a loop. You can see this very much resembles the picture we had in the beginning with the entangled states and linear maps: connected lines are like entangled states, and the boxes are the linear maps. So if you look at this object, it basically describes a huge tensor with N indices, i_1 to i_N, by building it from smaller tensors, by decomposing it into a tensor network.
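As a small illustration (not from the lecture's slides), the matrix-product formula c_{i1...iN} = Tr(A^{i1} ... A^{iN}) for a periodic chain can be evaluated once as an explicit product of matrices and once as a single tensor-network contraction; the random tensors, D = 3 and N = 4 are placeholders chosen only for the example.

```python
# Minimal sketch (illustration, not from the lecture): the MPS amplitude as a trace of
# matrix products, and the same numbers from one tensor-network contraction.
import numpy as np

d, D, N = 2, 3, 4
rng = np.random.default_rng(2)
A = rng.normal(size=(N, d, D, D))   # A[k, i] is the D x D matrix at site k for spin value i

def amplitude(A, config):
    """Weight of the basis state |i1,...,iN>: multiply the matrices and take the trace."""
    M = np.eye(A.shape[2])
    for k, i in enumerate(config):
        M = M @ A[k, i]
    return np.trace(M)

# The same coefficients obtained in one shot by contracting the whole network:
psi = np.einsum('iab,jbc,kcd,lda->ijkl', A[0], A[1], A[2], A[3])

config = (0, 1, 1, 0)
print(amplitude(A, config), psi[config])   # the two ways of computing the weight agree
```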
And that's the idea: we know that the general coefficient of this matrix product state is exactly given by this trace of a product of matrices; that's what I told you on the last slide. So any quantum state can be expanded in some product basis, and this expansion coefficient c is an exponentially big tensor, or vector: it lives in this exponentially big Hilbert space. Our idea is to take this tensor and decompose it by writing it as a network of smaller tensors, that is, smaller tensors with extra indices, summing over a subset of those indices by contracting these lines, closing these lines. That's why these kinds of states are also called tensor network states. All right, questions? OK. So now I would like to convince you that this class of states is actually useful: as I said, first, that in principle it allows you to describe any state you want, which is a good thing, and second, that it allows us to describe the states we're actually interested in efficiently, using few parameters. So what happens if we want to describe a general state? We have a general state with some expansion coefficient, and we want to see how we can write it as a matrix product state, or MPS for short. This general coefficient c is just a tensor with many indices; I now draw it in this slightly funny way, but it's still a box with many indices, which I have distributed along the chain. That's a general tensor, and the question is whether I can decompose it into a tensor network with the one-dimensional structure from before. It turns out that I can, and it's actually very simple: I just declare the middle tensor to be the original c, and all the other tensors I declare to be very simple objects. This tensor, for instance, takes this index and maps it here, this index and maps it there, and so on. So all of these tensors are multi-index tensors, but they are just products of delta functions: they identify specific indices with each other, just in the way depicted. Obviously this still describes the same c (I just put some boxes on top of these lines, if you wish), but now it has a one-dimensional structure. In a second step, I can take these indices and not think of them as separate indices but as one big joint index with a bigger dimension. This index has dimension small d, the number of physical levels; here I have two indices, so a total dimension d squared, here d cubed. So I can block these indices and think of them as one big super-index, and what I get is indeed a matrix product state, where the bond dimension starts out at d, becomes d squared, d cubed, and so on: it grows towards the middle, which I try to illustrate with thicker and thicker lines. You see that this way you multiply the dimension by d as you go towards the middle, so you get to a dimension of up to d to the N over 2, if N is the length of the chain. That is of course exponentially big in the length of the chain, which is not what we ultimately want, but this was really about asking whether we can describe any state in that way.
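A closely related, standard way to carry out such an exact decomposition in practice, not the delta-tensor construction on the slide but successive reshapings and singular value decompositions, is sketched below (an illustration; open rather than periodic boundaries are assumed, and the state is random). It shows the bond dimension growing towards the middle and the reconstruction being exact.

```python
# Minimal sketch (illustration): exact MPS for an arbitrary state via successive SVDs.
import numpy as np

def state_to_mps(psi, d, N):
    """Return tensors A[k] of shape (Dl, d, Dr) whose contraction reproduces psi exactly."""
    tensors, rest, Dl = [], psi.reshape(1, -1), 1
    for k in range(N - 1):
        M = rest.reshape(Dl * d, -1)            # group (left bond, site k) against the rest
        U, s, Vh = np.linalg.svd(M, full_matrices=False)
        Dr = len(s)
        tensors.append(U.reshape(Dl, d, Dr))
        rest = np.diag(s) @ Vh                  # push the Schmidt weights to the right
        Dl = Dr
    tensors.append(rest.reshape(Dl, d, 1))
    return tensors

def mps_to_state(tensors):
    """Contract the MPS back into a dense vector (only feasible for small N)."""
    psi = tensors[0]
    for A in tensors[1:]:
        psi = np.tensordot(psi, A, axes=([-1], [0]))
    return psi.reshape(-1)

d, N = 2, 8
rng = np.random.default_rng(3)
psi = rng.normal(size=d**N); psi /= np.linalg.norm(psi)

mps = state_to_mps(psi, d, N)
print("bond dimensions:", [A.shape[2] for A in mps[:-1]])   # grow towards the middle
print("reconstruction error:", np.linalg.norm(mps_to_state(mps) - psi))
```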
And obviously, if we want to describe any state, we need an exponential number of parameters; we cannot save on parameters, there's just no way, so this dimension must indeed become exponentially big. Now, if you think in a quantum information way, in terms of these entangled states and maps, one way to interpret this is to say you have Alice, Bob, Charlie and so on, each sitting at one site, with Alice in the middle. Alice creates the initial state at her site and uses this entanglement to teleport one half to the left and one half to the right, keeping one part herself; in the next step, Bob 1 and Bob 2, on the left and on the right, do the same, and they keep teleporting. So you can interpret this, in a quantum information way, as teleportation, or actually as post-selected teleportation through these measurements. Now, it's nice that we can describe all states, but we would like to see that we can efficiently describe the states which have a low amount of entanglement, that is, whose Schmidt coefficients decay rapidly. As we've seen, we can write any state in principle in this form; it's just that these dimensions become very big towards the middle. So we would like to see if we can compress this dimension down without making a big error; that's the goal. To this end, let's look at some cut somewhere in our chain and do a Schmidt decomposition: we have alpha_k on the left and beta_k on the right, we do the Schmidt decomposition and get the Schmidt coefficients, and the summation index k is of course directly related to the index sitting on this big bond between the left and the right part. That's exactly what correlates these things: the only reason there is a sum connecting the left indices and the right indices is that there's a bond connecting them, which is a summation index. So this summation index is, up to a basis transformation, exactly the summation index of the Schmidt decomposition. Now we can look at the Schmidt coefficients and say: let's try to keep only the D largest ones. If they decay rapidly, the mistake we make by throwing away the small ones will probably be small. And because it's the same index, there must exist a basis in which this is basically diagonal, so we can write down a projection which projects out all the small Schmidt coefficients and keeps only the D largest ones. You can do the math (I don't want to do it here), but that's the idea: it's the same index, so it's pretty much only about a basis transformation. So what happens is that, first of all, at this cut you get a dimension which is only capital D, and the error you make, if you do it the right way, is basically the weight in the tail of your distribution of Schmidt coefficients. If they decay rapidly, this error will be small. And now you can keep doing this everywhere, and if you do it well, it turns out that the errors just add up nicely, and you can relate all of them to the original Schmidt coefficients, so you don't have to know how they change when you cut. So if the Schmidt coefficients decay rapidly, which pretty much means that the entropy is bounded (it's actually not exactly the von Neumann entropy; you have to do some extra work to relate it to the entropy), but the really important concept is the decay of the Schmidt coefficients, or how the weight in the tail scales.
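A small sketch of this truncation step (an illustration, not from the lecture; the exponentially decaying Schmidt spectrum is put in by hand as an assumption, to mimic a gapped ground state): keep the D largest Schmidt coefficients and compare the resulting error with the weight left in the tail.

```python
# Minimal sketch (illustration): compressing one cut by keeping the D largest Schmidt terms.
import numpy as np

rng = np.random.default_rng(4)
dimA = dimB = 2**6
U, s, Vh = np.linalg.svd(rng.normal(size=(dimA, dimB)), full_matrices=False)

s = np.exp(-0.5 * np.arange(len(s)))        # artificial, rapidly decaying Schmidt spectrum
s /= np.linalg.norm(s)
psi = (U * s) @ Vh                          # state (as a coefficient matrix) with these weights

for D in [2, 4, 8, 16]:
    trunc = (U[:, :D] * s[:D]) @ Vh[:D, :]  # keep only the D largest Schmidt terms
    err = np.linalg.norm(psi - trunc)
    tail = np.sqrt(np.sum(s[D:]**2))
    print(f"D = {D:2d}:  truncation error = {err:.3e},  tail weight = {tail:.3e}")
```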
It turns out that if you have an area law, then the total error you make this way scales nicely, which pretty much means it scales like a polynomial in the length of your chain (actually linearly), and the error goes down polynomially with your bond dimension. So if you want a small error, you only have to increase your bond dimension polynomially, and not exponentially, in both the length and the accuracy you want. This tells us that for states with an area law, or more generally for ground states, we can find efficient approximations by matrix product states, for instance by building, at least on paper, the full description and then cutting the dimension, obtaining a description with a small dimension without making a big mistake, because the entanglement is bounded, because the Schmidt coefficients decay rapidly. OK, so that's the first message: this matrix product state ansatz, which is built on the entanglement structure of these states, can efficiently approximate states which have an area law, and thus ground states of gapped, and in fact even of critical, non-gapped, one-dimensional Hamiltonians. It's the right class of states to describe these systems. So let me now talk a little bit about numerics, not too much; I figure the quantum information people are maybe not so interested in doing hard numerics with matrix product states, but let me try to give you an idea, also because it illustrates some of the concepts and nice advantages of these states. The general idea is: if we want to do numerics, we want to extract properties of the state. First of all, we want to find a ground state, which means we want to compute the energy and minimize it; we want to find the state with the smallest energy, which will be very close to the ground state, because these states approximate it well. For all of these things, also for the energy, since the Hamiltonian is local, we have to be able to compute expectation values of local operators. So we have an expectation value of that form, and graphically it looks like this. Why? Well, this is psi, and this is psi as a bra vector; the bra vector has just the conjugate elements, so I conjugate the A's here, and then I have to sandwich the operator O in between. O is a local operator, say a two-body operator, and it's also a tensor, so I just put it here, which again means summing over indices; everywhere else I put the identity. So most indices of ket and bra are just directly connected, identified, because the operator doesn't act there, and here comes the operator. Now, how can we compute this thing? One idea would of course be to compute psi, multiply it with the operator, and take the expectation value. That's not a smart idea, because the whole point of looking for a nice description was to avoid dealing with the exponentially big Hilbert space; if the first thing we do is write this vector out in that exponentially big Hilbert space, we have lost all the advantage we had, so we'd better not do that. That would have been slicing the thing horizontally, and slicing it horizontally is a bad idea. What we rather do is slice it vertically, and if we look at one vertical slice, what it is, is just something which goes from two indices on the left (which I can write as one double index) to two indices on the right: basically, it's just a big matrix which maps a D-squared-dimensional space to a D-squared-dimensional space.
I can do this everywhere, so I get all of these matrices; this one with the operator is slightly more involved, but it's still a finite object for a fixed D. So all I have to do is evaluate these matrices, which is just a simple tensor contraction whose cost doesn't scale with the system size, and then multiply them. So the whole problem of computing expectation values boils down to computing a product of matrices, and the main object involved is what is called the transfer operator, which is just one such vertical slice; I will say a little bit more about it in a moment. And what's the complexity of matrix multiplication? Multiplying two matrices of size a-by-b and b-by-c has complexity a times b times c. So that's just a different way of writing as a formula what I wrote graphically: I just have to multiply these matrices (OK, some pieces are apparently gone from the slide; there is also a trace here). So computing this expectation value boils down to multiplying matrices of size D squared by D squared, which means each of these steps costs D to the 6th, and there are N of them, so it scales nicely both in D and in N. It turns out, actually, if you think about it properly, you can do it with D to the 5th, and for open boundary conditions with D to the 4th, because then it's vector-matrix multiplication instead of matrix-matrix multiplication. Well, something is wrong with the slide; oh, sorry, I forgot the questions slide, all right, thanks. Right, so in either the periodic or the open boundary condition case it scales nicely with the system size. Let me just say one word more about the transfer operator, which was this operator here. It's actually a very useful object also for analytically studying the properties of these matrix product states. For instance, if we think of a translation-invariant system and want to compute correlations between two points in our system, what we look at is again a picture of this kind, which again we can cut into these vertical slices, and all we have to do (because it's translation invariant, all these E's are equal) is multiply all of these operators; between the two points we have to multiply very many of them, basically infinitely many, because it's a very long chain. So what this means is: if we start from this operator E, look at its spectral decomposition, and take the L-th power, the L-th power just has the eigenvalues raised to the L-th power, so if there is one largest eigenvalue, all other eigenvalues get suppressed exponentially as we iterate. This tells us a lot about the behavior of correlations: if there's a unique largest eigenvalue of this operator, the correlations in the state decay exponentially with distance, and the rate of decay is governed by the ratio of the first and the second eigenvalue. So just knowing the spectral properties of this operator tells us a lot about the behavior of correlations; if it's degenerate, it tells us there are long-range correlations which stay constant over long distances.
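The following sketch (an illustration with a random translation-invariant tensor, not the lecture's example) builds the transfer operator E = sum_i A^i tensor conj(A^i), uses it to compute a local expectation value on a small ring, checks that against the brute-force dense computation, and reads off a correlation length from the ratio of its two largest eigenvalues.

```python
# Minimal sketch (illustration): expectation values and correlations from the transfer operator.
import numpy as np

d, D, N = 2, 3, 6
rng = np.random.default_rng(5)
A = rng.normal(size=(d, D, D)) + 1j * rng.normal(size=(d, D, D))   # same tensor on every site

def transfer(op=None):
    """E (or E_O for a one-site operator O) as a D^2 x D^2 matrix."""
    O = np.eye(d) if op is None else op
    E = np.zeros((D * D, D * D), dtype=complex)
    for i in range(d):          # bra index
        for j in range(d):      # ket index
            E += O[i, j] * np.kron(A[j], A[i].conj())
    return E

Zop = np.diag([1.0, -1.0])
E, EZ = transfer(), transfer(Zop)

# <Z> on one site of the N-site ring, via transfer operators:
norm = np.trace(np.linalg.matrix_power(E, N))
expZ = np.trace(EZ @ np.linalg.matrix_power(E, N - 1)) / norm

# Brute force for comparison (only possible for small N): build the full state vector.
psi = np.einsum('iab,jbc,kcd,lde,mef,nfa->ijklmn', *([A] * N)).reshape(-1)
Zfull = np.kron(Zop, np.eye(d**(N - 1)))               # Z acting on the first site
expZ_dense = (psi.conj() @ Zfull @ psi) / (psi.conj() @ psi)
print("transfer operator:", expZ, "   dense:", expZ_dense)

# Correlation length from the spectrum of E (for a generic tensor): decay ~ (|lam2|/|lam1|)^r
lam = np.sort(np.abs(np.linalg.eigvals(E)))[::-1]
print("correlation length xi =", -1.0 / np.log(lam[1] / lam[0]))
```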
Even further: if you look at this picture, one way to look at it is to say that you have a tripartite quantum state, with this, this, and that index, described by this tensor A, and you trace out the physical system i; going back from this operator to the original state is like a purification, and we know that purifications are uniquely determined up to a unitary on the purifying system. So this tells you that it contains pretty much all the information about your state in the first place, up to local unitary transformations, up to a basis choice; this E really carries all the information about the state. And that's another nice link to other areas of quantum information: if you look at this A, then E is just the matrix describing a quantum channel with Kraus operators A_i, if you wish; the quantum channel has an input on the left, takes a density operator, and maps it to a new density operator. So understanding the behavior of correlations in these kinds of systems is pretty much equivalent to studying, say, information loss through quantum channels, or more generally properties of quantum channels, capacities and things like that. So there are lots of close links. OK, so how can we numerically optimize? I told you we can efficiently compute expectation values, and we know it's the right family of states, so we need a way to find the optimal state by minimizing the energy, and this can be done in a variety of ways. We start with some initial configuration, and we start optimizing these tensors. There are various methods to do this, but the main observation, maybe that's what I should stress, is that whenever I fix all the A's except for one and look at just one site, the expectation value of the energy which I want to minimize is a quadratic function of that tensor, because psi is linear in each of the tensors A, and quadratic optimization problems are easy to solve: it's an eigenvalue problem. So, for instance, I can do what is known as the density matrix renormalization group (DMRG), which was invented before matrix product states, pretty much, but which can be understood in terms of them: we start with some initial configuration, fix all tensors except one, minimize the energy with respect to the first one, go to the second, and so on, sweeping through the lattice. One can also do other methods, for instance continuously trying to minimize all of these tensors, or hybrid methods combining them, and so on. Now, I should say, from a more computer-science point of view, that this is not guaranteed to converge; actually, it's guaranteed that there are cases where it won't converge. But of course this doesn't hinder a numerical physicist from still using it, and using it successfully, because these typical hard cases, like the NP-hard instances, really have to be constructed by hand; if you go out in nature and try to simulate a quantum system, you will probably not find them. But it tells you that you have to be a bit careful. More recently, actually coming from the quantum information community, there has been a provably working algorithm, which provably converges and runs in polynomial time; however, it doesn't scale well enough to be practical. So it's actually an interesting open question to come up with a provably working method which practitioners might actually use, because it's competitive in its scaling. OK, so the second message: matrix product states form the basis for variational numerical methods; they are very powerful and in fact the method of choice for simulating one-dimensional quantum systems.
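Before moving on, here is a heavily simplified sketch of the alternating, one-tensor-at-a-time optimization just described. This is an illustration only, not the actual DMRG algorithm: it builds full dense vectors, which only works for tiny chains, whereas real implementations use local contractions; the Heisenberg chain, D = 2 and N = 6 are stand-in choices.

```python
# Minimal sketch (illustration): variational MPS ground-state search by optimising one
# tensor at a time.  Each local step is a small (generalised) eigenvalue problem, since
# the state is linear in each tensor.
import numpy as np

d, D, N = 2, 2, 6
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def heisenberg_chain(N):
    """Dense H = sum_k S_k . S_{k+1} for an open chain (2^N x 2^N)."""
    h = 0.25 * (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z))
    H = np.zeros((2**N, 2**N), dtype=complex)
    for k in range(N - 1):
        H += np.kron(np.eye(2**k), np.kron(h, np.eye(2**(N - k - 2))))
    return H

def full_state(tensors):
    """Contract an open-boundary MPS (tensors of shape (Dl, d, Dr)) into a dense vector."""
    psi = tensors[0]
    for A in tensors[1:]:
        psi = np.tensordot(psi, A, axes=([-1], [0]))
    return psi.reshape(-1)

rng = np.random.default_rng(6)
dims = [1] + [min(D, 2**min(k, N - k)) for k in range(1, N)] + [1]   # bond dims along the chain
tensors = [rng.normal(size=(dims[k], d, dims[k + 1])) + 0j for k in range(N)]
H = heisenberg_chain(N)

def optimise_site(k):
    """Fix all tensors except the k-th; minimise the (quadratic) energy over that tensor."""
    shape = tensors[k].shape
    n_par = int(np.prod(shape))
    M = np.zeros((2**N, n_par), dtype=complex)     # psi is linear in A_k: psi = M @ vec(A_k)
    for p in range(n_par):
        basis = np.zeros(n_par, dtype=complex); basis[p] = 1.0
        tensors[k] = basis.reshape(shape)
        M[:, p] = full_state(tensors)
    Heff, S = M.conj().T @ H @ M, M.conj().T @ M
    w, V = np.linalg.eigh(S)                       # whiten with S^(-1/2), dropping null directions
    keep = w > 1e-10 * w.max()
    Xw = V[:, keep] / np.sqrt(w[keep])
    e, v = np.linalg.eigh(Xw.conj().T @ Heff @ Xw)
    tensors[k] = (Xw @ v[:, 0]).reshape(shape)     # lowest eigenvector = best tensor at site k
    return e[0].real

for sweep in range(4):
    for k in list(range(N)) + list(range(N - 2, 0, -1)):
        energy = optimise_site(k)
    print(f"sweep {sweep + 1}:  variational energy = {energy:.6f}")

print("exact ground-state energy:", np.linalg.eigvalsh(H)[0])
```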
Now, we have about five minutes; OK, questions, or should I start? Well, let me start; I can repeat it after the break once more, but this way you have heard it and can digest it through the break. So I would like to turn towards constructing exact wave functions, and try to see what kind of wave functions we can build using matrix product states: not to variationally simulate a given Hamiltonian, but to set up a model from scratch and try to build something which has exciting physics. What I would like to introduce is the so-called AKLT model (AKLT stands for Affleck, Kennedy, Lieb and Tasaki, who invented it), and I will again use the quantum information picture of entangled states and linear maps. My entangled state will be a singlet state of two spin-1/2 systems, two-level systems, qubits. Singlets have the nice property that they are invariant under all unitary rotations performed the same way on both sides; if you think in terms of spin, it's because it has spin 0, or because it's the antisymmetric state. So I start with this invariant state, and what will be my linear map? My linear map will take these two spin-1/2 systems, which I can decompose into a spin-0 space and a spin-1 space; or, if you wish, I have two qubits, and together they have a symmetric subspace and an antisymmetric subspace, where the symmetric one is the spin-1, which is three-dimensional, and the antisymmetric one is one-dimensional. My map P will actually be a projector in this case, and it will project onto the symmetric subspace; in the spin language, it selects the spin-1 subspace. And the great thing is that if I act on the initial auxiliary systems with any spin-1/2 rotation, with any SU(2) matrix, it will translate into a rotation on the physical system by the corresponding spin-1 representation. So whatever matrix u I put here, I get something which transforms like a representation of SU(2) also on the physical degree of freedom; that's clear, because we picked the subspace which is exactly the spin-1, which transforms nicely. So we have something which transforms nicely under rotations, and a map which also transforms nicely under rotations, and now we put the two things together: we start from the singlet states, we apply this map (it's kind of the same formula as before), and that is what is known as the AKLT state. This is a state on a chain of spin-1 particles, of three-level systems, and on a three-level system there's a natural action of the rotation group, which is pretty much like rotating the spin in space. You can write a formula, but you should really think about the intuition: whenever I apply a rotation V(u) everywhere on the physical spins, this translates into rotations applied to the auxiliary systems; I can push the rotation through the P, down to a rotation u tensor u on the singlets, but those are invariant. So overall I have managed to obtain a rotation-invariant state. Forget the formulas: the point is, I have a well-behaved object, I combine it with another well-behaved object, and I get a well-behaved object; one transforms nicely, the other transforms nicely, together they transform nicely. So we get a rotationally invariant chain just by combining these objects, and this tells us we can construct states which have a nice symmetry by encoding the symmetry locally. We don't have to think about how to get a symmetric state on a global scale, which is a complicated global problem; no, you start with an initial state which has the symmetry you want, then you apply an operation which keeps that symmetry, and you get something which has the symmetry you want.
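A minimal sketch of the AKLT construction for a small ring follows (an illustration of the construction just described; N = 5 is an arbitrary choice). It checks numerically that the resulting state is annihilated by the global spin operators, i.e. that it is invariant under global rotations.

```python
# Minimal sketch (illustration): the AKLT state from singlets and the spin-1 projector.
import numpy as np

N = 5
up, dn = np.array([1., 0.]), np.array([0., 1.])

# Spin-1 basis in terms of the two virtual qubits at one site (the symmetric subspace):
# |+1> = |uu>,  |0> = (|ud> + |du>)/sqrt(2),  |-1> = |dd>
P = np.zeros((3, 2, 2))
P[0] = np.outer(up, up)
P[1] = (np.outer(up, dn) + np.outer(dn, up)) / np.sqrt(2)
P[2] = np.outer(dn, dn)

omega = np.array([[0., 1.], [-1., 0.]]) / np.sqrt(2)       # singlet (|ud> - |du>)/sqrt(2)
A = np.einsum('iab,bc->iac', P, omega)                      # absorb one singlet per site

# psi[i1..iN] = Tr(A^{i1} ... A^{iN}) on the ring
psi = np.zeros((3,) * N)
for config in np.ndindex(*([3] * N)):
    M = np.eye(2)
    for i in config:
        M = M @ A[i]
    psi[config] = np.trace(M)
psi = psi.reshape(-1)
psi /= np.linalg.norm(psi)

# Global rotation generators S_tot^a = sum_k S^a_k for spin-1 sites
Sz = np.diag([1., 0., -1.])
Sp = np.sqrt(2) * np.diag([1., 1.], 1)
Sx, Sy = (Sp + Sp.T) / 2, (Sp - Sp.T) / (2j)

def total(op):
    out = np.zeros((3**N, 3**N), dtype=complex)
    for k in range(N):
        out += np.kron(np.eye(3**k), np.kron(op, np.eye(3**(N - k - 1))))
    return out

for name, op in [("Sx", Sx), ("Sy", Sy), ("Sz", Sz)]:
    print(f"|S_tot^{name} psi| =", np.linalg.norm(total(op) @ psi))   # ~0: rotation invariant
```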
Right, let me do one more slide, and then I think it's probably a good time to stop. So what can we say about this state? For instance, could this state show up as the ground state of some interesting Hamiltonian? Well, to this end, what we can do is look at this state and look at two adjacent sites in the system, and ask: what can we say about the state on these two sites? I will talk in the spin language now; if you don't like the spin language, in the next few slides, which probably means after the break, I will explain the same idea in a language which makes no reference to spins at all, which will be more general but in some sense less physical, if you wish. So if you look at two sites on the spin-1 chain, what can we say about them? It's two spin-1 particles, and we know that two spin-1 particles together have spin 1 times spin 1, which decomposes as one spin-0, one spin-1, and one spin-2 space. So the state on these two sites lives in this space, but we can also ask what we start from. About the left half chain we cannot say much; it's entangled with the rest, but we don't look at the rest, so the best we can say is that it is a spin-1/2 particle. The right one is also a spin-1/2 particle. But these two guys in the middle, they come together, and they come in a singlet state, in a spin-0 state, so we know they have spin 0; we have extra information. So we start with one spin 1/2, another spin 1/2, and a spin 0, and then we apply a map which doesn't change the spin; remember, we just projected onto a specific spin, but we didn't change anything. So what this means is that we start with 1/2, 0, and 1/2, and the maximum spin we can get this way is spin 1: there is no way of getting spin 2 in the end. So this spin-2 subspace can never show up; the state on these two sites will always only have spin 0 or 1, never spin 2. So there is a nontrivial constraint on that state, which means that we can now write a Hamiltonian on these two sites which gives a higher energy to spin 2, for instance the projector onto the spin-2 space, because that is rotation invariant. If we do that, it means that this AKLT state, because the reduced state always has spin 0 or 1, will automatically have energy 0, and other states hopefully have a higher energy. So first of all, the AKLT state is actually a ground state of that interaction, and we put this Hamiltonian everywhere, of course, acting on each pair of neighboring sites, so we sum it over all sites. This gives a ground state with zero energy: the Hamiltonian is positive semi-definite, so zero is the smallest energy it can have, and by construction the state has energy zero, so it is a ground state. And because it is built from projectors onto spin 2, it of course inherits the spin rotation symmetry. We can write it explicitly, but that's not so important; the conceptually important thing is that it is a rotationally invariant Hamiltonian which we constructed from the ground up. It's the opposite approach from the usual one: usually you have a Hamiltonian and you look for its ground state; here we build the ground state and we get a Hamiltonian which has it as its ground state.
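To see this constraint explicitly, here is a small sketch (again, not code from the lecture) that builds the AKLT state on a short periodic chain out of exactly these ingredients, singlets on the bonds and the spin-1 projection at each site, and then checks that the projector onto total spin 2 of a pair of neighboring sites annihilates the state. The chain length N = 4, the matrix conventions, and the names are my own choices made for illustration.

```python
import numpy as np
from itertools import product

# Singlet matrix on a bond, and the isometry V onto the spin-1 (symmetric) subspace of two qubits.
eps = np.array([[0, 1], [-1, 0]]) / np.sqrt(2)
V = np.zeros((3, 2, 2))
V[0, 0, 0] = 1.0                          # |1,+1> = |00>
V[1, 0, 1] = V[1, 1, 0] = 1 / np.sqrt(2)  # |1, 0> = (|01>+|10>)/sqrt(2)
V[2, 1, 1] = 1.0                          # |1,-1> = |11>

# MPS tensors obtained by absorbing one singlet into each site: B^m = V^m @ eps.
B = np.array([V[m] @ eps for m in range(3)])

# AKLT state on N spin-1 sites with periodic boundaries: c_{m1..mN} = Tr(B^{m1} ... B^{mN}).
N = 4
psi = np.zeros(3 ** N)
for i, ms in enumerate(product(range(3), repeat=N)):
    M = np.eye(2)
    for m in ms:
        M = M @ B[m]
    psi[i] = np.trace(M)
psi /= np.linalg.norm(psi)

# Spin-1 operators (basis ordering m = +1, 0, -1) and total spin on two neighboring sites.
Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2) * np.diag([1.0, 1.0], 1)
Sx = (Sp + Sp.T) / 2
Sy = (Sp - Sp.T) / 2j
I3 = np.eye(3)
S2 = sum((np.kron(S, I3) + np.kron(I3, S)) @ (np.kron(S, I3) + np.kron(I3, S))
         for S in (Sx, Sy, Sz))           # eigenvalues 0, 2, 6 for total spin 0, 1, 2

# Projector onto total spin 2 of the pair; it should annihilate the AKLT state.
P2 = S2 @ (S2 - 2 * np.eye(9)) / 24
print(np.linalg.norm(np.kron(P2, np.eye(3 ** (N - 2))) @ psi))   # ~ 0
```

Because the two virtual spin-1/2s shared by any bond are always in a singlet, the same check succeeds on every bond, so summing this positive semi-definite projector over all bonds gives a Hamiltonian for which the AKLT state has energy exactly zero, which is the statement above.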
And the cool thing is, and I won't show this because the proof is certainly quite involved, that one can prove two things: first of all, this state is the unique ground state of that Hamiltonian, there is no other ground state; and second, this Hamiltonian has a gap above the ground state. So it is a stable, protected phase, and this is interesting for a few things which I will explain in a few slides, which means after the break. So, thanks for now; we have time for a couple of questions.

On the contractions: if we have n matrices and want to multiply them all together, and they are all the same matrix, can't we use matrix power algorithms to get logarithmic in n, instead of something like D^6 times n, D^6 times log n, for the transfer operator, if they are all the same?

For the transfer operator, if they are all the same, yes, absolutely. But this was more from a numerical point of view, where you typically don't want to choose them all equal, because that makes the problem very nonlinear. But you can keep them all equal, and then indeed you would rather use an eigensolver, for sure. More questions? There, at the bottom.

I have a question about the energy: it optimizes the matrices one by one, so it may get captured in a local minimum instead of reaching the global minimum, is that not so?

No, no, it can indeed get stuck in local minima, and I think even for actual physical systems you have to be a bit careful in some systems not to get stuck. You can also set up hard instances where DMRG actually gets stuck, but these are pretty contrived, and you won't find these in actual simulations. In actual simulations you have to be a bit careful in some instances; people have some tricks, they don't just optimize a single site, they might block two sites together, and so on. So you have to be a bit careful, but generally it is very well behaved. It is very hard to prove that it is well behaved, though; that's not something we can prove, it's more a practical statement based on the fact that you don't expect something very funny to happen if you go from 100 to 101 spins.

Thank you. Right, next questions? Here.

A few slides back you had a picture of a bipartition, and you pointed out that more disorder equals more entanglement. If I have a starting condition which is a Bose-Einstein condensate and I cut that in half, what is that going to look like in terms of the increase in entropy?

Well, probably we should discuss that offline, because I think the answer has a couple of subtleties, so I'm not sure your question specifies the problem well enough to give a clear answer. I mean, first of all, the bosons are indistinguishable, so you have to say what you mean by entanglement, right, it could mean different things. Otherwise, I think if you make a spatial cut, in principle, if you look at spatially separated modes, so not at individual bosons but at the bosonic modes, it should be very similar, but there are probably some subtleties. And if you are asking about a specific BEC state, we probably have to think about the trap potential and everything to know what the left and right halves are.

Yes, I was particularly thinking that at low temperatures you are going to have states which will frequently find themselves in a Bose-Einstein condensate, and when you do this split you are going to wind up with two condensates which then have entanglement, but at a higher level; I was just wondering what your thoughts were on that issue.

Well, I'm not sure exactly what the question is, in the sense that, I mean, you can look at the entanglement in that cut, for sure.
I guess it's kind of like splitting up a coherent state, so it's kind of an entangled state for which there is a theory, but I think you have to look more closely at a specific scenario if you want concrete statements. I think it's better to discuss it offline, probably. Let's take one here.

You mentioned the existence of an algorithm that provably works and takes polynomial time; what were the assumptions involved there?

I think, if I remember correctly, they only use a gap for the Hamiltonian, and that's it.

Okay, so the NP-hard cases have a vanishingly small gap?

Well, there are some types of NP-hard cases, some of which have an algebraically small gap. We have instances where we know that just finding a specific matrix product state, where we are promised that a matrix product state is the ground state, is NP-hard; then of course no method will find it, because the Hamiltonian doesn't have a gap. On the other hand, in DMRG we can certainly construct cases where we say, okay, for a gapped Hamiltonian, in the course of the optimization we might encounter certain configurations where the local optimization problem gets stuck, and that's not because finding the solution would solve a hard problem, but because of the way you do it: it is susceptible to getting trapped in local minima. So there are different ways, and of course we can continue asking different questions and come up with different hardness results. In that case there is a gap involved, so for gapped systems we know that finding the ground state shouldn't be NP-hard; I mean, that result indeed shows that it is not NP-hard.

Thanks. Going back to the A matrices in the early slide: if you restrict those to be diagonal, so that they commute, then you get a set of states that is very familiar to algebraic geometers; those are the secant varieties of Segre varieties, and they also have this nice physical property that the one-dimensionality has kind of dissolved, because all of these sites commute with each other, so you are not committed to 1D anymore. So my question is just: is there literature, and if so what is it, that links up this wonderful presentation you have given to the classical literature of algebraic geometry?

Not that I know of on the spot, but I'm pretty sure people have looked at that; I would have to search myself, I'm not aware of it right now, but I have the vague feeling of having seen something like that.

Yes, there is a book by Landsberg in particular that I had in mind, but it is not so easy to link his book up with this lecture.

Well, it's worth trying. All right, let's stop here, and let's give Norbert a big hand again. [Applause]
Info
Channel: Microsoft Research
Views: 2,580
Rating: 4.9148936 out of 5
Keywords: microsoft research
Id: W_IBEAzqm4U
Length: 64min 52sec (3892 seconds)
Published: Tue Jan 31 2017