John Preskill “Holographic Quantum Codes”

Captions
Welcome, this afternoon, to the Yale Quantum Institute. I'm Rob Schoelkopf, the director of the Institute, and it's my pleasure this afternoon to introduce the speaker for the third of this year's Leigh Page lectures. If you've been following along, you know that we're very honored and privileged to have John Preskill here giving this series of lectures. John is the Richard P. Feynman Professor of Theoretical Physics at Caltech and the director of their Institute for Quantum Information and Matter. John has many accomplishments: early in his career he worked in string theory and quantum gravity, for many decades now he's been working in quantum information, and, as we were there for yesterday's lecture, in the connections between these things. John has received many awards; he's noted for his pedagogy, and there's a long list of similar invited lecture series he's given in many places. John is a member of the National Academy, and he's contributed really fundamental things to quantum information theory: together with Daniel Gottesman he had some really seminal results in fault tolerance, which was in a sense the final nail establishing that, theoretically, one really can eventually build large-scale quantum computers. So welcome again, John; we're looking forward to today's lecture about spacetime and quantum error-correcting codes. Well, thanks very much, Rob. It's been exciting today to be visiting the Yale Quantum Institute, where such great science is being done. I'm going to be talking today about some work I did with some brilliant postdocs: Beni Yoshida and Fernando Pastawski, who were until recently postdocs at Caltech, and Daniel Harlow, who's at Harvard and visited Caltech, which is what instigated our collaboration. It was a very enjoyable collaboration, and fittingly, the authors' initials spell HaPPY. Now, as you've heard in my previous lectures, if we look at the field of quantum information
science, we might take the view that we are in the early stages of the exploration of a new frontier of physical science: what we could call the complexity frontier, or the entanglement frontier. We are developing and perfecting the tools we need to build up complex, highly entangled quantum states, too complex for us to simulate with our digital computers and beyond the reach of most of our theoretical tools for understanding how these states will behave or how to describe them. That's going to be important for future technologies, but it's also important in giving us new ways to explore basic questions about physics, and we can expect that the ideas we're developing about quantum information processing are going to have a broad impact on other areas of physics. One area in which we've been seeing that happen in the last few years is quantum field theory and quantum gravity, where there's been a growing realization that concepts pertaining to quantum information, like entanglement, quantum computing, and quantum error correction, are relevant to problems of interest to field theorists and quantum gravity theorists. What I want to talk about today is a connection between two important ideas in physics, two of the most amazing ideas, really, that I've encountered in my scientific lifetime. One is the holographic correspondence, the holographic duality between bulk spacetimes and the boundaries of spacetimes; it's the best handle we have on understanding quantum gravity nonperturbatively. The other is quantum error correction, which is the basis for our confidence that we will be able to realize scalable quantum technologies in the not-too-distant future. I'd like to argue that these ideas are in fact closely related, that it is fruitful to think of the holographic correspondence, and quantum gravity, as a realization of quantum error correction. This idea, which we developed in the paper, built on
an inspiring earlier paper by Almheiri, Dong, and Harlow. So let me remind you of some of the things I said in yesterday's talk about the holographic correspondence. It was particularly important in helping us understand the issue of what happens to information that's processed by a black hole. Maldacena, in formulating this correspondence in the 90s, really showed us how to put a black hole in a bottle, in effect. The interior of the bottle is what we call the bulk, or AdS space, a negatively curved spacetime; the walls of the bottle are what we call the CFT, for conformal field theory. According to the correspondence, there's an exact relationship between physics living only on the boundary, evolving according to some local Hamiltonian of a quantum field theory without gravity, and the gravitational physics in the bulk, described in the classical limit by classical general relativity but also including quantum corrections. So the spacetime in the bulk can fluctuate and evolve; in particular, black holes can form and evaporate. If we think about what happens to information that enters a black hole horizon, the black hole eventually evaporates, and if we look at it from the boundary point of view, where there's no gravitation and no black hole, it seems manifest that information is preserved, that the dynamics is microscopically reversible, and nothing is lost from the universe. So at least in this case, which is the situation where we understand quantum gravity the best, it seems that the formation and evaporation of a black hole does not destroy information. Now, that isn't to say we completely understand what's going on. The correspondence still hasn't given us a very concrete picture of how information escapes from inside a black hole, and it's not clear exactly what the boundary physics is telling us about the experience of an observer who falls through the horizon and enters the interior of the black hole. So there are certainly many open questions, but
this has encouraged us to believe that in the correct theory of quantum gravity, black hole evaporation is a microscopically reversible process. So what is this correspondence? I've tried to depict it in this picture, in a rendering in which the bulk spacetime is (2+1)-dimensional, so there's a two-dimensional slice through the bulk, and to indicate the negative curvature I've represented that two-dimensional disc as a Poincaré disc: the colored regions in the picture are geometrically the same size, but they appear smaller and smaller as you get closer and closer to the boundary. That's a way of depicting the negative curvature. According to the correspondence, there's an exact relationship between the states and the observables in the bulk and on the boundary: a one-to-one mapping of states to states and observables to observables between the two descriptions. In the correspondence, the way we usually use it and think about it, it relates weakly coupled gravity in the bulk, which is nearly classical relativity with some small quantum corrections, to a very strongly coupled field theory without gravity on the boundary. And the dictionary that relates the two descriptions is very complex. In particular, the observables that would be potentially measurable by observers in the bulk who are working locally, the local observables deep inside the bulk, appear to be very nonlocal when rendered in the boundary description. Now, one way of thinking about the extra dimension in the bulk, relative to the boundary, is that you can view it as a scale in the boundary theory. The boundary theory is a conformal field theory; it's scale invariant, and as we go deeper and deeper into the bulk, that corresponds to viewing the boundary physics at longer and longer distance scales, or at lower and lower energies. But it's still a highly nontrivial thing that the bulk physics looks essentially strictly local in the limit of semiclassical gravity, even on scales which are short compared to the curvature scale of this AdS geometry. What the recent developments seem to be teaching us about this correspondence is that we can think of the geometry in the bulk as arising from, or being describable by, entanglement in the boundary theory, and that's mostly what I want to talk about: how we can think of the geometry of spacetime as an emergent property related to quantum entanglement. But let me say a little bit about quantum error correction, because I'm going to make use of the idea of a quantum error-correcting code in this construction, and for just one slide I want to explain the basic idea of how quantum error correction is supposed to work to control errors in a quantum system. Let's imagine we have some quantum system, say n qubits, and it interacts with the environment, with the outside world, in some way. I can describe that by some unitary transformation that acts on the system and the environment, and without loss of generality I can expand the action on the system in terms of some complete set of operators that act on the system; those are the E_a's in this picture, where each one of those operators is associated with some corresponding state of the environment. I'm not assuming here that the states of the environment (it's rather bad notation, actually) are necessarily mutually orthogonal or normalized, so there's no loss of generality in what I've written down. This could describe decoherence, if the states of the environment are mutually orthogonal, so that the system becomes entangled with the environment; it could also describe unitary errors, unitary transformations acting on the system due to imperfect control, if all the states of the environment were actually the same state. Our task, then, is to reverse the damage, to undo the effect of the noise on the system, which might have caused it to become entangled with the environment. And we
can't really undo the fact that the environment became entangled with something, because we don't control the environment. But what we can do is transform the entanglement of the environment with the system into entanglement of the environment with some ancilla which we control in our laboratory, and then that ancilla can be discarded, refreshed, and used again in future rounds of error correction. So this process is a kind of refrigeration, if you like: due to the noise, entropy leaks into the system, it heats up, and we want to flush the entropy out, and that's what the error correction procedure does. If I want to eventually clear the ancilla and reuse it, I have to erase it; that's a thermodynamically irreversible process, and for that I would have to pay a power bill. But in doing so I can remove the entropy from the system, clean it up, and, performing many rounds of error correction, maintain it in some delicate superposition state for a long time. Now, this isn't going to work for arbitrary errors acting on the system; the errors have to be of a restricted type. What we usually suppose is that the noise is fairly weak and weakly correlated, so that in this expansion in error operators, the ones that occur with significant amplitude are ones of low weight, which act on a small fraction of all n qubits, and that's what would happen for weak, weakly correlated noise. So we want to be able to reverse errors of that type, but we won't be able to do that for arbitrary states of our n-qubit system. We can only get it to work for some protected subspace, which has to be cleverly chosen so that the error operators act on that subspace in a way we can undo. And that's a quantum error-correcting code: a subspace which has been chosen so that the errors are reversible if they are of sufficiently low weight. What I want to describe to you is a construction of a new class of quantum codes, we call them holographic quantum codes, which realize some
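As a minimal illustration of these ideas (not the code from the talk), here is a sketch, in Python with NumPy, of the three-qubit bit-flip repetition code: a two-dimensional protected subspace of three qubits, chosen so that any single-qubit X error can be diagnosed from deterministic stabilizer eigenvalues and reversed. The helper names are my own.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron_all(ops):
    """Tensor product of a list of single-qubit operators."""
    M = np.array([[1.0]])
    for o in ops:
        M = np.kron(M, o)
    return M

def encode(a, b):
    """Embed a|0> + b|1> into the protected subspace: a|000> + b|111>."""
    psi = np.zeros(8, dtype=complex)
    psi[0], psi[7] = a, b
    return psi

# Stabilizer generators: parity checks whose eigenvalues flag bit flips.
S1 = kron_all([Z, Z, I2])   # compares qubits 0 and 1
S2 = kron_all([I2, Z, Z])   # compares qubits 1 and 2

def syndrome(psi):
    """Stabilizer eigenvalues; deterministic after a single X error."""
    return (int(round(np.real(psi.conj() @ S1 @ psi))),
            int(round(np.real(psi.conj() @ S2 @ psi))))

# Syndrome -> which qubit to flip back (None means no error detected).
RECOVER = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}

def correct(psi):
    k = RECOVER[syndrome(psi)]
    if k is None:
        return psi
    ops = [I2, I2, I2]
    ops[k] = X
    return kron_all(ops) @ psi
```

Flipping any one qubit of an encoded state and applying `correct` returns the original state; a weight-two error would be misdiagnosed, which is why a code only protects against sufficiently low-weight errors.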
of the features of the holographic correspondence, and which I think give us some insight into the nature of that correspondence. You can think of this as a kind of tensor-network realization of the holographic principle. By that I mean we can imagine tiling a negatively curved space with polygons and associating with each of those polygons some tensor. That tensor has two kinds of indices: some dangling indices, which we can think of as the degrees of freedom on which bulk operators act, and some internal indices, which are contracted with the indices of neighboring tensors. In addition, we have uncontracted indices all the way out at the boundary, and we want to think of those uncontracted boundary indices as the physical variables of some boundary theory, and the dangling indices in the bulk as the physical variables of the bulk theory. What this tensor network actually defines, as we'll discuss, is an embedding of the bulk degrees of freedom into the boundary Hilbert space, which is a kind of quantum error-correcting code, well protected against a certain class of errors. The virtue of this type of description is that it's quite concrete: we can describe the dictionary quite explicitly, and we can compute properties of the code. But it's only intended to be a kind of toy model of holography; it's not really the real thing, and it's something we need to flesh out further to make a richer connection between quantum error-correcting codes and the holographic correspondence. Now, I should emphasize that this dictionary is not complete. I said there was a one-to-one correspondence between the bulk theory and the boundary theory, but that's not what I just described; I described an embedding of the bulk Hilbert space as some subspace of the boundary Hilbert space. So we're not taking into account all of the observables, all of the degrees of freedom, of the bulk. The way to think about it is that we're interested in how to describe in the bulk the operators which, when
acting on the boundary theory, map low-energy states to low-energy states. This code space corresponding to the bulk is appropriate for describing smooth perturbations of the AdS geometry, and that corresponds on the boundary to operators that map states of low energy to other states of low energy. If I wanted to include higher-energy states, I'd have to go beyond the code space we're talking about, and I'll come back to that later. So let's think a little about why it should be that local operators in the bulk correspond to very nonlocal operators on the boundary. One way of thinking about it is this: we can imagine acting deep inside the bulk spacetime with some local operator. The bulk theory obeys, at least to a very good approximation, relativistic causality, so the effects of that bulk operator will not be felt outside the future light cone of the event where the local operator was applied in the bulk, and that light cone reaches the boundary only at a much later time. So at the time that we applied the local operator, its effects are not, at least not easily, visible on the boundary. What's happening is that when that bulk operator is applied on a particular spatial slice, there is some corresponding operator being applied on the boundary, but it's extremely nonlocal: its effects can't be observed by local observers who are confined to a small portion of the boundary, at least not right away. But as the system on the boundary continues to evolve, eventually the effects of that very nonlocal operator become locally observable, and that corresponds to the time at which this light cone reaches the boundary. At the time the local operator is first applied, though, there's some very nonlocal operator on the boundary that it corresponds to, and we'd like to understand that correspondence better. I'm going to tell you a few things in the next few slides about this AdS/CFT correspondence, because we would like to see
how these features are realized in the codes that I'm going to construct. So in this slide there's a lot of information, maybe more than we need, but the upshot is this. Suppose I consider some part of the boundary, which I've called region A, and some point in the bulk, and I'd like to know whether an operator applied at that point X in the bulk can be reconstructed on region A on the boundary. From what we know about the correspondence, we can make the following statement. Associated with region A, if I look at the two endpoints of the region, there's some shortest path, a geodesic, through the bulk that connects those endpoints, and because of the negative curvature, that shortest path dips deep into the bulk: the fastest way to get to the other end of A is to go through the bulk rather than along the boundary. The region I've colored green, between A and this geodesic, is called the causal wedge of region A, and if the point X lies in that causal wedge of A, then the operator applied at X in the bulk can be reconstructed in the boundary theory on region A. The way one shows this, just so you know, is that we can imagine propagating the classical field equations in the bulk. If we use the leading classical approximation in the bulk, those field equations are causal in the radial direction, so if I want to know whether boundary data on A is sufficient to tell me about the effect of the operator applied at X, the question is whether A contains the past light cone of X, where the past is now defined radially toward the boundary. So there's some region, not all on one time slice, which provides sufficient data to determine the effect at X, but then we can use the Heisenberg equations of motion on the boundary to squeeze it down to a single time slice. And that means we can reconstruct this operator on the boundary if X is in the causal wedge of A. Now, you can see that this is an
ambiguous reconstruction on the boundary, because there are lots of ways in which I could choose a causal wedge containing a given point X in the bulk. So for example, suppose I divide the boundary up into three regions, which I've called A, B, and C, and consider some point deep inside the bulk. Now, that point is in the causal wedge of the union AB, and it's in the causal wedge of the union BC, and also of the union AC. So according to what I just told you about reconstruction, this bulk operator phi should be reconstructible on the boundary as an operator supported only on AB, and that means it would commute with any operator supported on region C. But likewise I can reconstruct it on BC, which means it commutes with any operator localized on A, and I can reconstruct it on AC, which means it must commute with any local operator supported on B. But that seems to mean that it commutes with all the local operators on the boundary, and that doesn't really make sense, because the field algebra of the theory on the boundary is irreducible, and the only thing that commutes with all the local operators is the identity. So that would mean the only operator we can reconstruct on the boundary is the identity, and that can't be right. This puzzle was discussed in the earlier paper by Almheiri, Dong, and Harlow, and their proposal was that these three different reconstructions on AB, BC, and AC really are physically different operators in the boundary theory. So in what sense can we say that these reconstructions are the same bulk operator? They act on the code space in the same way. There's some code subspace of the boundary theory, and there are many ways of physically representing, in terms of the boundary degrees of freedom, an operator that acts on the code subspace in a prescribed way. These different reconstructions correspond to physically different operators on the boundary, but they act on the code space
in the same way. And what we wanted to do was construct codes that realize this idea concretely. Another thing you should know about, as I tell you about the properties of the code, is the distinction between what's called the entanglement wedge and the causal wedge. You can understand that distinction in this example. Consider a boundary region with two connected components, which I've called A1 and A2. A1 has some causal wedge, colored blue here, and A2 has a causal wedge. But suppose regions A1 and A2 are sufficiently large that, if I try to find the minimum-length geodesics that connect the four endpoints of A1 and A2, the shortest way of doing so connects an endpoint of A1 with an endpoint of A2, here and here. Then the entanglement wedge, which is distinct from the causal wedge, is the region contained between A1 and A2 on the boundary and these two minimal geodesics in the bulk; that's the region colored blue on the right-hand side. The entanglement wedge is bigger than the causal wedge in this case, and the AdS/CFT folklore is that if a point lies in the entanglement wedge of the union of A1 and A2, then it should be possible to reconstruct an operator at that point on the union of A1 and A2. Okay, well, that doesn't follow from the construction we already talked about, which only guarantees that we can reconstruct an operator in the causal wedge of A1 on A1, and in the causal wedge of A2 on A2. So we'd like to understand how this can be true of holography in our code construction. [Audience question: this is only for hyperbolic space, right? How do we go beyond hyperbolic space?] It's much harder. There are ideas about how to do holography for de Sitter space, for example, and for asymptotically flat spaces, which I'm not really going to talk about; it's not nearly as well understood, and the code construction that I'm going to discuss really applies only to the hyperbolic case, the easiest case. Incidentally, if you'd like to know why people say
that we should be able to reconstruct operators in the entanglement wedge of a disconnected region, that I should be able to reconstruct a local operator here on the union of A1 and A2, for example, one reason is this. We can ask what the effects of entangling degrees of freedom in the bulk are on the entanglement on the boundary, and there are convincing arguments that if I consider two degrees of freedom in the bulk, one inside the entanglement wedge and one outside, then entangling those two contributes to the entanglement on the boundary between the union of A1 and A2 and the complementary region outside that union. And that really only makes sense if I can define operators, supported on A and on its complement, which can detect that entanglement between A and its complement. So, operationally, in order for that bulk entanglement to have a sensible interpretation, we should be able to reconstruct local operators in the entanglement wedge, and we'd like to understand why that's so. I mentioned yesterday the holographic entanglement entropy, the correspondence between entanglement on the boundary and geometry in the bulk. It goes like this. I can consider some region on the boundary, some connected region in this case, and I can ask, for the state on the boundary, how entangled region A is with the complementary region. That can be quantified by the entropy of A: if I trace out the complementary region, I get some density operator for A, which in general will be mixed if A and its complement are entangled, and the greater the entanglement, the higher the entropy of that density operator. It's a measure of how much information is missing to an observer on A, because it resides in the correlations between A and its complement. And in order to describe that entanglement geometrically, I consider the geodesic in the bulk which connects the endpoints of region A and express the length of that geodesic in suitable gravitational units, and that's the
entropy. Or, in higher dimensions, say when the bulk has a three-dimensional spatial slice, this would be a surface of minimal area connecting the boundary points of region A but passing through the bulk, and the relationship between the area of that minimal surface and the entropy is exactly the relationship between entropy and area for a black hole, relating the entropy of the black hole to the area of its event horizon. Because of the hyperbolic geometry, this minimum-length geodesic or minimal surface will want to dive deep into the bulk: it wants to cross the minimal number of colored regions, and the way to do that is not to follow the geometry near the boundary but to dive deep inside the bulk. Okay, so I'm going to explain how we construct these codes, and then we're going to see why they have desirable properties like the holographic correspondence between entanglement and geometry, reconstruction in the entanglement wedge, and reconstruction in the causal wedge. The key ingredient in our code constructions is something called a perfect tensor. Here's what that means. Let's imagine we have a pure quantum state of 2n particles, 2n spins, each of which is v-dimensional. I can expand that pure state in a standard basis for the 2n spins, and that defines a tensor with 2n indices, each of which takes v possible values. In the pictures I'm going to draw, for definiteness, I will be considering six spins, each of which is two-dimensional: six qubits. When I say that the tensor is perfect, what I mean is that we can take those six qubits in this state and partition them into any three qubits and the complementary three, and no matter how we choose those three, the two halves of the system will be maximally entangled with one another. So if I trace out any three of the qubits, the remaining three will be in a maximally mixed state. It's not obvious that such states exist, but they do; in particular, there are such perfect states for six qubits.
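Perfect states are easiest to exhibit with qutrits rather than qubits. Here is a sketch (a standard construction; the helper names are mine) of the four-qutrit absolutely maximally entangled state, a superposition of |i, j, i+j, i+2j> mod 3, together with a routine for computing reduced density matrices, so perfection can be checked directly: every bipartition into two pairs of qutrits is maximally entangled.

```python
import numpy as np
from itertools import combinations

d = 3
# Four-qutrit perfect (absolutely maximally entangled) state:
# |Psi> = (1/3) sum_{i,j} |i, j, i+j mod 3, i+2j mod 3>
psi = np.zeros((d, d, d, d))
for i in range(d):
    for j in range(d):
        psi[i, j, (i + j) % d, (i + 2 * j) % d] = 1 / d

def reduced_density(psi, keep):
    """Density matrix of the subsystems in `keep`, tracing out the rest."""
    traced = [k for k in range(4) if k not in keep]
    m = np.transpose(psi, list(keep) + traced).reshape(d ** len(keep), -1)
    return m @ m.conj().T
```

Perfection here means that for every one of the six ways to keep two qutrits, the reduced state is the maximally mixed state I/9. The six-qubit perfect tensor of the talk plays the same role with 2n = 6 and v = 2, where the check is that all 3-versus-3 bipartitions are maximally mixed.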
Now, there are other ways of thinking about these perfect tensors, or perfect states. Just by transforming kets into bras, I can obtain from the perfect tensor, instead of a state of six qubits, a map. If I pick any three of the qubits, it defines a map from those three to the complementary three, and because of the perfection it will always be unitary, up to normalization. Or I can consider it to be a mapping of two qubits to the remaining four, or of one to five, and in those cases, since the three-to-three map is unitary, this will be an isometric embedding. Here, for example, I'm describing a single qubit being mapped, in an inner-product-preserving way, into the Hilbert space of five qubits. And in fact these mappings of two qubits to four and of one qubit to five are well-known examples of quantum error-correcting codes which protect against erasure of some of the qubits. Now, what does that mean? When I say that we can protect against erasure, I mean that if an error occurs in which some of the qubits become inaccessible, they're lost, I can nevertheless decode the qubit that's embedded in the code space without having to access the erased qubits. I am allowed to use the information of which qubits disappeared: I know which ones were erased, and if I have that information, then I can reconstruct the encoded system. So in this case our embedding of one qubit into five defines a quantum error-correcting code with one encoded qubit embedded in a block of five, which is protected against erasure of any two of the five qubits: if any two are erased, I can still correctly decode the protected qubit. So why is that true? Well, we can think about it this way and see that it follows from the fact that this one-to-five map is related to a perfect tensor. Suppose I consider the qubit that's going to be embedded in the code block; to keep track of what happens to it, I imagine maximally entangling it with some reference qubit, which I denote R.
Then, in the block of five qubits, any two, the ones shown in red, can be erased. But because the tensor is perfect, the system consisting of R and the two erased qubits is maximally entangled with the three unerased qubits shown in green, and that means in particular that the reference qubit is encoded in a subsystem of those three unerased qubits, no matter which three we choose. Now the information is redundantly encoded, in such a way that you can take any three out of the five qubits, it doesn't matter which three, and you can recover the encoded qubit. And in fact, if you want to apply a logical operation, any transformation of the encoded qubit, it suffices to perform an operation that acts on just three of the qubits, and it doesn't matter which three; any three will do. So now, what's a holographic code? Well, you can think about it this way. Let's suppose I take a hyperbolic geometry and cover it with pentagons. This is a tiling by pentagons that I couldn't do on a flat plane, but which I can do on a negatively curved two-dimensional surface, where four pentagons meet at each vertex. Now associate with each one of those pentagons the map that takes one qubit to a block of five, the quantum error-correcting code that I just told you about. Okay, so let's imagine that we start right at the center of the bulk: we have one qubit that we would like to be protected, and we encode it in this block of five. Then, working radially outward from the center, I consider a sequence of maps in which I use either the two-to-four isometric embedding or the three-to-three unitary map to take the incoming qubits, represented by the black lines, and any dangling qubits, represented by the red dots, and map those to outgoing qubits, shown as black lines lying further radially outward. So I've composed together many maps, each of which is either a unitary or an isometric embedding, and the composition of these isometries is also an isometry.
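The recovery-from-any-subset property above can be checked concretely in a smaller qutrit analogue, the one-qutrit-into-three erasure code V|i> = (1/sqrt 3) sum_j |j, i+j, i+2j> mod 3 (a toy stand-in for the one-into-five pentagon code; the recovery permutations below are worked out by hand and the names are mine). Erase any single qutrit, apply a basis permutation to the two survivors, and the logical state reappears on the first survivor.

```python
import numpy as np

d = 3

def encode(alpha):
    """One qutrit into three: |i> -> (1/sqrt 3) sum_j |j, i+j, i+2j> (mod 3)."""
    enc = np.zeros((d, d, d), dtype=complex)
    for i in range(d):
        for j in range(d):
            enc[j, (i + j) % d, (i + 2 * j) % d] += alpha[i] / np.sqrt(d)
    return enc

# Recovery permutation on the two surviving qutrits (kept in position order),
# one for each erased position k: |a, b> -> |i, residue paired with erasure>.
RECOVERY = {
    0: lambda a, b: ((2 * a - b) % d, (b - a) % d),
    1: lambda a, b: ((a + b) % d, (b - a) % d),
    2: lambda a, b: ((b - a) % d, (a + b) % d),
}

def recover(enc, k):
    """Erase qutrit k; return the density matrix decoded on the first survivor."""
    env_last = np.moveaxis(enc, k, -1)       # erased qutrit = environment
    out = np.zeros_like(env_last)
    for a in range(d):
        for b in range(d):
            x, y = RECOVERY[k](a, b)
            out[x, y, :] = env_last[a, b, :]  # permute surviving basis states
    m = out.reshape(d, d * d)
    return m @ m.conj().T                     # trace out everything else
```

After recovery, the first surviving qutrit is in the pure logical state, no matter which qutrit was erased; this is the decoupling of the reference from the erased share that the perfect-tensor argument guarantees.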
What we wind up with is an isometric, inner-product-preserving map of all of the bulk variables, shown as red dots, to the uncontracted indices on the boundary of this tensor network. So we are to think of those uncontracted legs on the boundary as the physical qubits of a quantum code, and all the red dots in the bulk as the qubits whose Hilbert space is embedded in the Hilbert space of those boundary qubits, and that's what I mean by a holographic code. Now, I can also consider other tilings, and I can thin out my bulk degrees of freedom if I wish, as I've done here; that's just another code. In this case I've tiled with pentagons and hexagons: the hexagons don't have any bulk indices, only the internal indices that are contracted, and associated with each pentagon is one logical index of the code, corresponding to some bulk variable. Okay, so now we'd like to understand this idea that bulk operators can be reconstructed on a boundary region when they lie in the causal wedge of that region. I'd like to think about it this way. I can construct what I'll call a greedy causal wedge; greedy is a word that computer scientists use for optimization algorithms that can be carried out one small step at a time. So imagine we start at the boundary, and I would like to push a curve on the boundary further and further into the bulk in a sequence of small steps. I'll be able to make such a step if that curve, shown as the red dotted line, crosses at least three outgoing legs of some tensor; then I can move the red line one step inward and think of that extra pentagon as an isometric map from its two incoming indices and its bulk index to those three outgoing ones. So as I push further and further into the bulk, I obtain an isometric map from all of the black lines crossed by the red cut, together with all of the bulk degrees of freedom we swept past as the red curve goes deeper and deeper into the bulk, to the boundary
degrees of freedom that live in region A. Okay, so I keep pushing inward until I can go no farther, and that defines what I mean by the greedy causal wedge. And by construction, everything that lies within the greedy causal wedge of A can be reconstructed as an operator which is supported only on A. So now, how should we think about the Ryu-Takayanagi relationship between entanglement on the boundary and geometry in the bulk? Well, for region A I can consider some cut through the bulk which crosses the contracted indices in the bulk. And now we can think of this as a sum over tensor products of vectors in A and its complement, because the tensor contraction associates the values of the indices along the cut to states of A through a tensor P, which we obtain by contracting together all the tensors on region A's side of the cut, and to states of the complementary region through a tensor Q, the contraction of everything on the other side of the cut. So this is a sum over i, where i labels one configuration of the indices crossing the cut, of a tensor product: the state P_i in region A, tensored with the state Q_i on the complementary region. Now in general these states are not necessarily orthonormal, but if these tensors P and Q are isometric embeddings, then these will be mutually orthogonal states, and that means that the amount of entanglement between A and its complement will just be determined by the number of possible values for i; we will have a maximally mixed state on some subspace whose dimension is determined by the number of possible values for i. And now one can give a graph-theoretic argument showing that if we have no positive curvature anywhere in the bulk, then the greedy geodesic for region A will match the greedy geodesic for its complement, and so that means we actually have an isometric map associated with this tensor P and this
tensor Q, and this state is a maximally entangled state of two subsystems, one for A and one for its complement. The dimension, the number of values for i, is just the number of configurations of the indices along the cut, and if the spins along the cut take v possible values, that dimension is just v raised to a power which is the length of the cut, the number of indices that it crosses. And the entropy is just given by the log of that number, so it's proportional to the length of the cut, and that's the Ryu-Takayanagi formula relating entanglement to geometry. So now we'd like to understand this reconstruction in the entanglement wedge. The case in which that can be explained maybe most clearly is this: let's imagine that there's some process on the boundary that, in an independent and identically distributed way, erases boundary qubits. In other words, suppose each one of the boundary qubits is either erased with probability p or left unerased with probability 1 - p, and then the question I'd like to ask is whether bulk operators which are deep inside the bulk can be reconstructed on the unerased qubits. Okay, but this is a case in which there's a big difference between the causal wedge and the entanglement wedge that I defined earlier. Because if the erasure probability is small, say, then the unerased qubits will consist of segments; there will be fluctuations, but typically they'll have a length which goes like something like 1/p if there's a probability p for each boundary qubit to be erased. And there will be small gaps between those different regions of unerased qubits, so there will be many islands of unerased qubits with small gaps between them. The causal wedge of each one of those islands will reach only a little way into the bulk, but the entanglement wedge will contain the whole center of the bulk, because the shortest geodesics that connect together all the boundary points will just span the erased regions. Okay, so if we
erase the boundary qubits with some sufficiently low erasure probability, then with very high probability the entanglement wedge is going to contain all the points deep inside the bulk, and we'd like to see how the bulk operators at those points can be reconstructed on the boundary. Well, there's another case which is a little easier to think about: the case in which our tensor network just has the structure of a tree. So let me describe that case first. Suppose we take our encoding of one logical protected qubit in a block of five, and then for each one of those five we encode it in a block of five, and then for each one of those five we encode it in a block of five, and so on, just recursively encoding level by level. That's an example of what the quantum coders call a concatenated quantum code; it's just a recursive hierarchy of codes within codes. And now we could ask: if qubits at the lowest level, furthest out along the tensor network, are erased with some i.i.d. erasure probability p, can we correct all that erasure and reconstruct the logical information at the center? Well, the five-qubit code is protected against two erasures out of the five qubits in the block, so in order for the information to be damaged, so that the erasure cannot be corrected, there would have to be three or more out of the five qubits erased. So the probability that there's an uncorrectable erasure in the block of five, if erasure of each qubit occurs with probability p, will be of order p cubed, with a combinatoric factor which is just the number of ways of choosing three qubits out of five, which is 10. Okay, so as we work our way inward from the outermost leaves of this tree to deeper and deeper levels, the probability that the erasure remains uncorrected drops off very quickly: the probability of an uncorrected erasure at the first level just goes like 10p³, but then at the next level 10p³ gets cubed and multiplied by 10, and then it
gets cubed again and multiplied by 10, and so on. So the probability of an uncorrected erasure, if there are altogether J levels to the tree, falls off doubly exponentially with J, like p over some critical error probability raised to a power which is 3 to the J, because it gets cubed at every level; and that critical error probability, according to this estimate, is 1 over the square root of 10. So we'd like to do a similar type of analysis for a holographic code, but it's a little bit trickier because the code isn't really a tree. And that means that although the erasure process might be i.i.d. on the boundary, so we can think of erasure of different qubits as being independent, that will no longer be true as we move in from the boundary; then we start to get correlated erasure errors, because a single one of the blocks at level J can feed into two blocks at level J + 1. But this isn't really as bad as it might seem, because the hyperbolic geometry prevents the correlations in the noise from propagating very much. So at each level, as we go deeper and deeper into the bulk, we really only have to worry about correlated errors between neighboring qubits, and that makes it possible to analyze the problem and show that, once again, the probability of an uncorrected erasure error once we reach the center of the bulk will become doubly exponentially small in the number of layers in the tensor network, although now the power that appears here is a little bit different; it involves the golden ratio. And this analytic estimate gives us a critical value for the erasure probability which is about 0.08. But if we actually do numerics to see how things go, in a case like this, where to understand what happens deep in the bulk I just consider the code with one protected qubit in the center and all the other bulk indices contracted, then the threshold value of the erasure probability appears to be one-half. And so that means that if I want to be able to reconstruct a bulk operator on some
subset of the boundary, and that subset is a randomly chosen subset that contains at least half of all the qubits on the boundary, then with very high probability the reconstruction will be possible. Okay, so the bulk degrees of freedom are very robust against erasure of the boundary, and correspondingly we can reconstruct the operators which are deep inside the entanglement wedge. Well, I wanted to show you that it is possible to give an analytic argument that there is a threshold, and that means that once the erasure probability is less than about 0.08, we're guaranteed analytically to be able to reconstruct the operators deep inside. But our analytic estimate is much too pessimistic, and the actual threshold is as high as it could possibly be: it's 1/2. It couldn't be higher, because if we could correct erasure of more than half the qubits, then that would violate the no-cloning theorem: I could divide the qubits into two sets, and if I could correct erasure in both of them, then I would be able to clone. So as we've seen, there's something very robust about this relationship between the bulk degrees of freedom and the boundary degrees of freedom, and there's something kind of pleasing about that: if we want to think of the geometry as being an emergent property of the entanglement on the boundary, we would like the properties of the geometry to not be so sensitive to the exact state on the boundary, and once we get deep in the bulk, the details of what's going on at the boundary become not so important for doing the reconstruction. Now, I haven't talked about black holes, but remember I also mentioned that we don't really have the complete dictionary here, because we're only describing some code subspace of the boundary Hilbert space. Why is that? Most of the states on the boundary are very high energy states. What do they correspond to in the bulk? Large black holes. The large black holes have many microscopic degrees of freedom, and if there's an exact
match between the Hilbert space of the bulk and the Hilbert space of the boundary, then actually most of the states on the boundary should correspond to black holes in the bulk. So if we really want a complete correspondence, then we have to include the black holes, and they haven't been part of our discussion so far, because our code space has only attempted to describe logical operators acting on the low energy states of the boundary. One way of picturing the black holes is that we can imagine cutting a hole in the tensor network, and that will leave additional uncontracted indices in the bulk; those correspond to the black hole microstates. And now what our isometric embedding does is map all of the black hole microstates, together with the bulk degrees of freedom outside the black hole, to the boundary Hilbert space. So that's the story of holographic quantum codes. It is not the full story of holography; it's really my attempt to understand holography. It always seemed so mysterious that somehow there could be this exact correspondence between two different descriptions of the same physics, but it smelled like quantum error correction, because of the feature that the observables deep inside the bulk correspond to very non-local operators on the boundary: just the kind of encoding that is well protected if the environment is looking at the system locally; the environment can't see that bulk information. And so in this construction we tried to make that idea more precise and concrete. It illustrates how quantum error correction can resolve this causal wedge puzzle, that is, that we can reconstruct the same bulk operator in multiple ways, because the different reconstructions, although physically different on the boundary, act on the code space in the same way. And we can see how the Ryu-Takayanagi relationship between entanglement and bulk geometry is realized in this concrete setting. There's a lot of flexibility in how we do the construction. For ease of
drawing, I've stuck with a two-dimensional rendering of the bulk space; we can consider tilings in higher dimensions by polyhedra and again use a perfect-tensor construction to get similar results. We can choose different types of tilings, and as I mentioned in passing, if we want to describe lower-dimensional code subspaces, or a thinned-out version of the observables in the bulk, we can just choose the tiling appropriately in order to do so. There's a lot missing from this description. First of all, it's just kinematic; it's only attempting to understand the relationship between the bulk and the boundary at some fixed time. It would be much more interesting to talk about dynamics, to consider how our code space evolves according to some local Hamiltonian acting on the boundary Hilbert space, and that's something we're still thinking about. And it hasn't helped us to understand locality in the bulk on distances that are small compared to the AdS curvature, because we've only associated a degree of freedom with each one of our polygons, which has a size comparable to the curvature radius. So as I said at the end of my talk last time, I think these connections between quantum information and quantum gravity are very intriguing. They're giving us new ways to think about some very hard problems, and I think that's going to lead to further progress. But we're still in the early stages of exploring these connections, just as, as I've said, we're at the early stages of the exploration of quantum information science, and we have a lot to look forward to as we develop the experimental tools to study highly entangled systems in new ways. And I think the most exciting message, to me, from this connection between quantum entanglement and quantum geometry is that potentially it makes quantum gravity a subject that can be studied experimentally in tabletop experiments, because what we would like to understand better is what types of highly entangled systems have
some kind of holographic dual description. We know some examples of that, but I think it's a much more generic phenomenon, and we haven't yet understood exactly what the connections are between the complexity of a quantum system and an emergent geometry. We haven't understood it very well because it's very difficult: it involves understanding properties of highly entangled systems that we can't simulate very well and that we don't have the theoretical tools to study in sufficient depth. So I'm hopeful that people at the Yale Quantum Institute will someday be doing experiments which will give us insights that we couldn't have derived in other ways. So thanks; it's been exciting to be here at Yale the last few days, and I appreciate everyone's hospitality and generosity, and thanks for listening to all my talks. [Moderator:] Thank you very much for today's talk and for a really great series. And just to clarify, it would be great if we were able to do something with a quantum computer that would contribute to understanding black holes, but we're not going to actually build a black hole here. [Questions follow.] So the question was: as a practical matter, is it possible to turn erasure errors into other types? Well, let me answer a slightly different question than the one you asked. We could ask: suppose we have a code which protects well against erasure; what can we say about how well that code will protect against more general types of errors? In fact, if, in a block of n qubits, we can correct against 2k erasure errors, and by that I mean we know which of the 2k qubits out of the n have been damaged and we can use that information to recover, then that same code can be used to correct against k errors in the block when the errors occur on unknown qubits. So the codes that correct against erasure can also be used to correct against more general errors, but the error-correcting power is reduced if we don't know which are the qubits that are damaged. The next question concerned the size of the boundary and physical
size of the bulk. Well, remember, it's a hyperbolic geometry. So, sorry, the question was: it seems that we have embedded the bulk in the boundary, but shouldn't the boundary be smaller than the bulk? In fact, in this hyperbolic setting the boundary should not be smaller than the bulk; that's part of the reason that our understanding of holography is most complete in the case of anti-de Sitter space. We can have an exact correspondence between states living on the boundary and states living in the bulk because in a hyperbolic geometry the boundary and the bulk, if you discretize them or tile them, actually have comparable cardinality. [A question about how to distinguish the bulk from the boundary in a discrete system, some kind of network.] So you mean, if I just gave you the graph of the tensor network, how would you know? Well, in the case of this construction I think there is an unambiguous way of doing that, because of the structure of the geometry that we tile. In general, of course, it would not be easy; if I just gave you a graph, it would be rather arbitrary to say what is the boundary and what is the bulk. But if you like, there's a graph, and I am going to use that graph to describe a relationship between two Hilbert spaces, and I will by fiat say I'm interested in these as the boundary qubits, and regard those as physical and the others as logical. [But this was a particularly natural way of choosing the graph, to correspond to AdS space?] Well, it was, and for our discussion it was important that when we constructed this isometric embedding the way I described it, when we started in the middle and then composed isometries to build up a big isometry, in each step I
needed to have a number of incoming legs which was at most three. So for each one of the pentagons, including the dangling index in the bulk, I had at most two indices which were contracted with pentagons that were deeper in the bulk, and so the tensor contraction defines a composition of maps, each of which takes two or three qubits to either four or three, and so each one is either a unitary or an isometric embedding. And the fact that the graph had that structure, well, it would not have had that structure if there had been a bubble of positive curvature somewhere in the bulk. [A question about whether a bulk qubit encoded in a smaller portion of the boundary is less robust.] So, yeah, I guess you're noting that we don't have a complete translational symmetry; there are only discrete rotations of the graph that I can make, because there's a big difference between this gap here and these guys being close together; the graph doesn't look as symmetrical as one might like. Is that what you meant? [The questioner clarifies.] Oh, I see what you're saying. You're saying that if I consider, say, a bulk degree of freedom which is close to the boundary, it is not as well protected against erasure as bulk variables that are deep inside, and that's correct. That's in fact why, in my discussion of the entanglement wedge, I emphasized reconstruction of the operators that are very deep inside: the ones that are deep inside are very well protected, while the ones that are very close to the boundary are much easier to damage, because you need to erase only a smaller hunk of the boundary. [A question about where, in an AdS spacetime, the erasure occurs.] Well, I didn't necessarily mean that the erasure should be regarded as a physical process. I was thinking about erasure as a way of keeping track of the ways in
which bulk operators could be reconstructed on the boundary. So when I spoke of erasure, I meant: I can consider some set of unerased qubits and ask whether a certain bulk operator can be reconstructed with support only on those unerased qubits, and if it can be, then I say that that bulk operator corresponds to an operator that is supported only on that unerased set.
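As a coda to the transcript: the perfect-tensor claim at the start of this section, that after entangling a reference qubit with the encoded qubit of the five-qubit code, any two code qubits can be erased and the logical information recovered from the remaining three, can be checked numerically. The sketch below is my addition, not part of the talk; all names are mine, and it assumes only NumPy. It builds the code space from the standard stabilizer generators of the [[5,1,3]] code and verifies that the reduced state of every pair of code qubits is maximally mixed, which is exactly the statement that those two qubits (even together with the reference) carry no local information.

```python
import itertools
import numpy as np

pauli = {'I': np.eye(2),
         'X': np.array([[0., 1.], [1., 0.]]),
         'Z': np.array([[1., 0.], [0., -1.]])}

def kron_all(ops):
    out = np.array([[1.]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Stabilizer generators of the [[5,1,3]] code: XZZXI and its cyclic shifts.
gens = ['XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ']
P = np.eye(32)
for g in gens:
    S = kron_all([pauli[c] for c in g])
    P = P @ (np.eye(32) + S) / 2.

# P is a rank-2 projector; its +1 eigenvectors span the code space.
vals, vecs = np.linalg.eigh(P)
code = vecs[:, vals > 0.5]            # 32 x 2 orthonormal logical basis

# Entangle a reference qubit R with the logical qubit:
# |Phi> = (|0>_R |c0> + |1>_R |c1>) / sqrt(2), six qubits in all.
phi = np.zeros((2,) * 6)
for i in range(2):
    phi[i] = code[:, i].reshape((2,) * 5) / np.sqrt(2)

# Perfect-tensor check: the reduced state of ANY two of the five code
# qubits is I/4, so erasing those two loses nothing, and the logical
# qubit is recoverable from the other three, whichever three they are.
ok = True
for a, b in itertools.combinations(range(1, 6), 2):
    keep = [a, b]
    others = [ax for ax in range(6) if ax not in keep]
    rho = np.tensordot(phi, phi.conj(), axes=(others, others)).reshape(4, 4)
    ok = ok and np.allclose(rho, np.eye(4) / 4.)
print(ok)
```

The same loop run over three-qubit subsets that include the reference confirms the maximal entanglement between R plus the two erased qubits and the three unerased ones.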
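The level-by-level threshold estimate for the concatenated five-qubit code quoted in the lecture is also easy to tabulate. A minimal sketch, again my addition rather than anything from the talk: iterating the map p -> 10p³ shows the doubly exponential suppression below the critical probability 1/√10 ≈ 0.316, and matches the closed form p_J = p_c (p/p_c)^(3^J).

```python
def next_level(p):
    # Leading-order failure probability of a level-(j+1) block when each
    # of its five level-j sub-blocks fails with probability p: at least
    # three must fail, giving C(5,3) * p**3 = 10 * p**3.
    return 10.0 * p ** 3

p_crit = 10.0 ** -0.5      # critical erasure probability 1/sqrt(10)

p = 0.1                    # below threshold
for _ in range(4):
    p = next_level(p)
print(p)                   # doubly exponentially small after 4 levels

# Closed form: p_J = p_crit * (p_0 / p_crit) ** (3 ** J)
closed = p_crit * (0.1 / p_crit) ** (3 ** 4)
print(abs(p / closed - 1) < 1e-9)
```

Starting instead from any p above 1/√10 makes the iterate grow rather than shrink, which is the sense in which 1/√10 is the critical value of this estimate.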
Info
Channel: YaleUniversity
Views: 15,169
Rating: 4.9379845 out of 5
Keywords: Yale, physics, quantum information, quantum gravity, quantum entanglement, seminar, John Preskill, Yale center for astronomy and astrophysics
Id: Bt7RVwIFIaY
Length: 71min 31sec (4291 seconds)
Published: Tue May 17 2016