How Quantum Entanglement Creates Entropy

Captions
Entropy is surely one of the most perplexing concepts in physics. It's variously described as a measure of a system's disorder - or as the amount of useful work that you can get from it - or as the information hidden by the system. Despite the seeming ambiguity in its definition, many physicists hold entropy to be behind one of the most fundamental laws of physics.

The great astrophysicist Arthur Eddington once said, "The law that entropy always increases holds, I think, the supreme position among the laws of Nature... if your theory is found to be against the Second Law of Thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation." Eddington wasn't the only one to see the 2nd law as fundamental. As we've discussed in previous episodes, the 2nd law of thermodynamics may be responsible for the arrow of time, and is a key ingredient in solving the black hole information paradox - a solution that may one day unite quantum physics with gravity.

But despite these clues to the importance of entropy and the 2nd law, it's not obvious what makes it so fundamental. A system's entropy is what we call an emergent property, and the 2nd law seems to be an emergent law. Emergent properties and laws arise from the statistical behavior of large numbers of particles. For example, a room full of air has a temperature, which is a measure of the average energy of motion of all the individual air molecules. But a single air molecule doesn't have a temperature - at least not in the same way. Instead, that molecule has a velocity and a mass and so on, which define how it bounces off the walls or other particles, giving rise to what we perceive as temperature, and giving rise to the laws of thermodynamics.

So we tend to think of emergent properties and laws as less fundamental than the properties and laws governing individual particles.
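To put that emergence claim in miniature, here's a toy sketch (my own, not from the episode) using the ideal-gas relation ⟨½mv²⟩ = (3/2)k_BT. A single molecule just has a speed; only the ensemble average defines a temperature. The molecular mass and speed values below are illustrative.

```python
# Toy sketch (my own, not from the episode): temperature as an emergent,
# averaged property, via <(1/2) m v^2> = (3/2) k_B T for an ideal monatomic gas.
k_B = 1.380649e-23   # Boltzmann constant, J/K
m_N2 = 4.65e-26      # approximate mass of a nitrogen molecule, kg

def temperature_from_speeds(speeds, m):
    """Kinetic temperature defined only by the ensemble: mean KE / ((3/2) k_B)."""
    mean_ke = sum(0.5 * m * v**2 for v in speeds) / len(speeds)
    return 2 * mean_ke / (3 * k_B)

# ~517 m/s is roughly the rms speed of nitrogen at room temperature, so the
# corresponding kinetic temperature comes out near 300 K.
print(temperature_from_speeds([517.0], m_N2))
```

A single entry in that list is just a velocity; the temperature only appears once you average over the ensemble.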
Entropy IS a thermodynamic property, and the 2nd law is the... well, second law of thermodynamics, so it's also statistical in nature. Why, then, are these things considered so fundamental? To answer that we need to understand what these things emerge from. What underlying property of nature do all these different definitions of entropy describe?

As is often the case, getting more fundamental means getting quantum - and there is indeed a type of entropy that applies to quantum systems like our air molecule. It's von Neumann entropy, and understanding it may help us understand not just the 2nd law, but also the arrow of time, and how the large-scale world emerges from the quantum world in the first place.

Before we dive into that, let's review the more familiar definitions of entropy. To really understand these we have a full playlist on the mysteries of what I'll call classical entropy and its relationship to information. But let's run through them.

Rudolf Clausius came up with the first definition of entropy, as essentially the amount of useful work that could be extracted by moving heat energy around. If heat energy is perfectly mixed inside and outside an engine then no work can be extracted, while if heat energy is separated - hotter inside the piston chamber - then it will tend to become mixed, and we can harness that flow. Ludwig Boltzmann recast entropy in terms of the number of configurations of particles that give the same set of crude thermodynamic properties. For example, there are more configurations of particles in which energy is perfectly mixed than if energy is concentrated in one spot - like in our piston chamber. Systems will tend towards the more common configurations. Hence, entropy increases.

Here we can start to see the connection between entropy and information.
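Boltzmann's counting can be made concrete with a toy model (my own sketch; the particle number and the two-halves setup are illustrative): count the number of microstates W compatible with each macrostate, with S = k_B ln W.

```python
# Toy illustration of Boltzmann's counting (my own sketch, not from the
# episode): distribute N gas particles between the two halves of a room and
# count the microstates W for each macrostate, with S = k_B * ln(W).
from math import comb, log

N = 100  # number of particles (illustrative toy value)

def microstates(n_left):
    """W(n): number of ways to have exactly n of the N particles in the left half."""
    return comb(N, n_left)

k_B = 1.380649e-23  # Boltzmann constant, J/K

# "All particles bunched in one corner" has exactly 1 arrangement, while the
# evenly-spread macrostate has astronomically many - so systems drift toward it.
w_corner = microstates(0)
w_spread = microstates(N // 2)
print(w_corner, w_spread)
print(k_B * log(w_spread))  # entropy of the spread-out macrostate, J/K
```

Even for just 100 particles the evenly-mixed macrostate has around 10^29 microstates, which is why the drift toward "mixed" looks like a one-way law.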
If all the air is in a corner of the room, for example, then you know more about the positions of the individual particles - they're all in the corner of the room - versus if they're spread through the room. But it took the invention of information theory to really see the connection between information and entropy.

It was Claude Shannon who founded the field of information theory, and also invented the entropy of information - Shannon entropy. Shannon entropy can be thought of as the amount of hidden information in a system - or more precisely, the amount of information we can hope to gain by making a measurement on the system. If all the particles are bunched up in the corner, then measuring their exact positions gets you some information. But if you measure their positions when they're spread through the room, you increase your information by a lot more.

Another way to think about Shannon entropy is in terms of events. The more possible outcomes, the more entropy the event has. For example, a flipped coin is a low-Shannon-entropy event because it only has two outcomes, while flipping a million coins is a high-entropy event. Shannon entropy is actually more fundamental than thermodynamic entropy, in that it is the generalization of the more familiar entropy. It applies to any system of information and any type of action that reveals that information.

When he first came up with this theory, though, Shannon didn't fully realize its importance. As the (perhaps apocryphal) origin story goes, he only started calling his invention "entropy" after talking to the great Hungarian mathematician and physicist John von Neumann. Supposedly, von Neumann said that he should call it entropy for two reasons: one, it looks exactly like the equation for thermodynamic entropy, and two, nobody knows what entropy is, so nobody would argue with him.

But von Neumann probably knew perfectly well that Shannon's entropy was the real deal.
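Before getting to von Neumann's version: Shannon's formula, H = -Σ pᵢ log₂ pᵢ, can be sketched in a few lines (my own illustration; the episode doesn't give code).

```python
# Minimal sketch of Shannon entropy (my own illustration):
# H = -sum over outcomes of p * log2(p), measured in bits.
from math import log2

def shannon_entropy(probs):
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# One fair coin: two equally likely outcomes -> exactly 1 bit of hidden info.
one_coin = shannon_entropy([0.5, 0.5])

# A million independent fair coins: entropies of independent events add,
# so the flip is a million-bit event.
million_coins = 1_000_000 * one_coin

print(one_coin, million_coins)
```

A certain outcome (p = 1) gives zero entropy, which matches the intuition that a measurement whose result you already know reveals nothing.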
One, because von Neumann was a savant-level genius who had out-math'd and out-physic'ed many of the greatest minds of the last century; and two, because of part one, von Neumann had already invented his own brand of entropy: von Neumann entropy. It's the entropy of quantum systems, and because everything else is made of quantum systems, it may be the most fundamental definition of entropy. Even Shannon entropy is just a special case of von Neumann entropy - at least so says von Neumann himself.

The concept of von Neumann entropy is, at the least, incredibly powerful. It's at the heart of quantum information theory, enabling us to calculate how much quantum information is contained in a system, and it can also be used to determine how many bits of classical information we can get out of the system when we make a measurement. But perhaps most interesting: von Neumann entropy tells us the amount of entanglement in the system. In fact it's driven by entanglement - this mysterious connection between quantum particles that Einstein called "spooky action at a distance". As a bit of a spoiler - von Neumann entropy seems to reveal that the evolution of entanglement connections is what drives the 2nd law of thermodynamics.

OK, we're getting ahead of ourselves. To get a glimmer of understanding of what von Neumann entropy is about, let's think about information in quantum mechanics. Quantum systems are described by what we call the wavefunction - that's the distribution of probabilities of all the possible properties the system could have if you tried to measure it.

As an example, imagine you have a quantum coin. It has a wavefunction that just describes which side is up - heads or tails. So when you flip the quantum coin, it enters what we call a superposition of states - it is simultaneously heads AND tails until you look at the result, at which point it becomes either heads or tails.
By the way, the quantum coin is just like the both-alive-and-dead Schrödinger's cat, and while these are just illustrative examples, there are many real quantum systems that can exhibit these superposition states - like a particle simultaneously having spin up and spin down, as revealed in the Stern-Gerlach experiment, or simultaneously passing through two slits in the double-slit experiment. And we've talked before about how these experiments verify these overlapping split realities.

Here's a counter-intuitive thing about superposition: after you flip the quantum coin, you actually DO know its current unrevealed state. That's because its state is entirely defined by its superposition wavefunction - it is in a pure state of 50% heads, 50% tails. This is subtly different to the state of a classical coin being either heads OR tails, with a 50% chance of each before you look at it. That superposition really represents the coin's current reality from your perspective. But if you have full knowledge of the unrevealed coin's current state, then there is no hidden information - which means its entropy - its von Neumann entropy - is zero. Observing the coin doesn't reveal new information about the unrevealed state. Rather, it changes the quantum state in a random way - now 100% heads or 100% tails - but the information about which way it would go wasn't hidden in the unrevealed wavefunction.

This is very different from the result of flipping a regular coin, which is definitely heads OR tails before you reveal it. That information IS embedded in its state, it just isn't known to you. So the regular coin's entropy - in this case its Shannon entropy - is positive.

In case superposition wasn't weird enough, let's bring in quantum entanglement. That means we need a second quantum coin. It's spookily connected to the first, in that when both coins are flipped they have to land opposite to each other.
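Before adding the second coin: the zero-entropy claim for the single superposed coin can be checked directly. This is my own numpy sketch, computing S = -Tr(ρ log₂ ρ) from the eigenvalues of the density matrix ρ.

```python
# My own sketch (not from the episode) of the single-coin claim:
# von Neumann entropy S = -Tr(rho log2 rho), from the eigenvalues of rho.
import numpy as np

def von_neumann_entropy(rho):
    """Entropy in bits, computed from the eigenvalues of a density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # drop zeros: 0 * log 0 := 0
    return float(-np.sum(evals * np.log2(evals)))

# The unrevealed quantum coin: pure superposition (heads + tails)/sqrt(2),
# whose density matrix has eigenvalues {1, 0} -> entropy 0, nothing hidden.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho_quantum = np.outer(psi, psi)

# The unrevealed classical coin: a 50/50 statistical mixture of heads OR tails,
# whose density matrix is diag(0.5, 0.5) -> entropy 1, one hidden bit.
rho_classical = np.diag([0.5, 0.5])

print(von_neumann_entropy(rho_quantum))
print(von_neumann_entropy(rho_classical))
```

The two matrices assign the same 50/50 measurement probabilities, but only the classical mixture hides a bit - which is exactly the pure-state vs. mixed-state distinction in the text.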
After the flip we say that the flip results are entangled. They are correlated - even though we don't know the individual results. If we reveal one, we'll immediately know the result of the other.

So you flip your pair of entangled quantum coins. There are two ways this can turn out - either the first coin lands tails and the second heads, or vice versa. Before they're revealed they exist in a superposition of states - both possibilities exist simultaneously. The unrevealed wavefunction is like this, which means 50% heads-tails and 50% tails-heads. The von Neumann entropy of that entire wavefunction is still zero, because the combined wavefunction of the two coins holds all the information about their current state. But what if we consider only a single coin? Because of the entanglement, the part of the wavefunction corresponding to a single coin does not contain all the information about that coin's state. Its von Neumann entropy is no longer zero - information IS hidden - it's hidden in the part of the wavefunction corresponding to its entangled partner.

And this is where we can connect von Neumann entropy to all of the other forms of entropy, and glimpse the real origin of the second law. When viewed WITH its entangled partner, the coin exhibits quantum weirdness like superposition. That could be revealed in experiments like a Bell test, which we covered previously. But treated individually, each separate entangled quantum coin behaves kind of like a regular classical coin - for example, it doesn't by itself exist in a pure superposition of states; that superposition only appears when you include its entangled partner. It's in what we call a mixed state - heads OR tails, not heads AND tails. And, just like the regular, classical coin, it has non-zero entropy.

This similarity between the entangled-but-isolated quantum coin and the classical coin is no coincidence.
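That two-coin argument in numbers (again my own numpy sketch): the entangled pair as a whole is a pure state with zero von Neumann entropy, but tracing out one coin leaves a maximally mixed state hiding exactly one bit.

```python
# My own numpy sketch of the entangled-pair argument (not from the episode):
# the joint state is pure (entropy 0), but each coin alone is mixed (entropy 1).
import numpy as np

def entropy_bits(rho):
    """Von Neumann entropy in bits from the eigenvalues of a density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Entangled pair: (|heads,tails> + |tails,heads>) / sqrt(2)
H, T = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron(H, T) + np.kron(T, H)) / np.sqrt(2)
rho_pair = np.outer(psi, psi)

# Partial trace over the second coin: rho_one[i, j] = sum_k rho[(i,k), (j,k)].
# Reshaping to (2, 2, 2, 2) exposes the (coin1, coin2) index structure.
rho_one = np.trace(rho_pair.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(entropy_bits(rho_pair))  # the pair's joint state hides nothing
print(entropy_bits(rho_one))   # one coin alone hides one bit
```

The reduced matrix `rho_one` comes out as the identity over 2 - a 50/50 mixture with no superposition terms - which is the "behaves like a classical coin" statement made precise.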
The coin's entanglement is the first step in the transition between the quantum and the classical world. Our capacity to observe quantum effects like superposition depends on being able to access the entire wavefunction. With just one partner coin in the entanglement, that becomes more difficult - but as a quantum object interacts with the countless particles of a macroscopic environment, and those particles interact with each other, the web of entanglement grows so quickly that it soon becomes impossible to access the entire wavefunction. We call this process decoherence - it's how the ordinary macroscopic world emerges from its very weird quantum pieces - and for a deeper dive into decoherence, we do have you covered.

Looking at it this way, our classical coin is just like our isolated entangled coin. Except now we don't have a simple entanglement between two coins - the entanglement is between the coin's countless constituent quantum parts and every particle they've ever interacted with. That network of entanglement IS in a superposition of states - heads AND tails - but you can't ever access it. In part because you're part of that network - you've already become entangled with the coin, and live in the slice of the wavefunction - the mixed state - where the coin is either heads OR tails.

The propagation of entanglement leads to our experience of the very un-quantum macroscopic world, but it also drives the growth of entropy. Information about the detailed quantum states of all particles becomes increasingly inaccessible, leaving only crude observable properties - for example, thermodynamic properties like temperature. In the language of quantum Darwinism, we call the properties that are preserved through this diffusion of entanglement "pointer states".
Over time, systems move towards a state of maximum entanglement, at which point most information is hidden and the systems are describable by the fewest properties - for example, when a system equalizes to a single temperature.

So there you have it - the growth of entanglement drives both the 2nd law of thermodynamics AND the emergence of the classical world from the quantum. And, as an extra trick, it also defines the arrow of time, which itself points in the direction of increasing entropy and multiplying entanglement, as, of course, we've discussed. But there's a lot more to talk about - because von Neumann's insights lead us to a picture of reality that is more informational than physical. But that information won't stay hidden for long - only until our next episode on this information-theoretic spacetime.

Today we're doing comment responses for the last two episodes - there was the one on the Kessler syndrome - the exponential buildup of space junk around the Earth. And then the one about the Planck length - the smallest meaningful length, and what that tells us about the nature of space.

Jason Burbank asks for a citation for my statement that 40% of catalogued space debris comes from exploded US rocket stages. And I'm glad you asked, because digging that up and re-reading it, I realize the number isn't right. The original reference is a paper by Donald Kessler back in 1981, where he finds that 42% of the debris tracked by NORAD was from 19 US rocket stages that exploded after releasing payloads - in some cases a few years later. But that was 40 years ago, and the number of tracked orbital debris has increased from around 4,500 to more like 15,000. Now some of those new pieces were due to the rocket fragments, but the 42% is no longer accurate - presumably it's lower by a factor of more than 3, due to all the new debris and the fact that some of the rocket-stage debris orbits would have decayed.
That still leaves it at a significant fraction, but not 40%. Anyway, the link to the Kessler paper is in the description.

Regarding exponential processes in general, Frederf reminds us that there's no part of the exponential curve that suddenly switches from slow growth to fast growth, objectively. Exactly right. The location of the apparent kink in the curve where it seems to go rapidly vertical depends on the axis choice. If you zoom in on that kink you'll see - what? An exponential curve. Zoom in again - exponential curve. The exponential is scale-invariant, and the location of the spot where the gradient exceeds 45 degrees depends on the range you choose for your x-axis. For an exponential process, the rate of doubling stays constant.

Of course, the rate of doubling doesn't have to stay constant, and that may be defined by certain thresholds on the exponential rise. For example, there's a threshold in the exponential increase of space debris where we can no longer operate safely in space, which may reduce the rate of increase. There's a threshold in the exponential increase in computing power - it's where computers are designing better computers, which may actually increase the doubling rate. And there's feedback between exponential processes. For example, presumably our future AI overlords will be better at managing our orbital space, which will further reduce the Kessler syndrome.

OK, and moving on to our episode on the Planck length.

Russ Johnson asks a good one. If space is like a 3D grid at the Planck-length scale, and movement means flicking from one cell to the next, then a) why is movement restricted only to adjacent cells, and b) in such quantized motion, would certain types of motion be disallowed? Some of you replied with good answers - Daemonxblaze points out that quantum teleportation, aka quantum tunneling, is basically moving instantaneously beyond your adjacent grid points.
But really we can't think of space as a simple rectilinear grid. For example, Michal Breznicky points out that a grid like that introduces weird effects that depend on orientation. Diagonal motion would be different from rectilinear motion. There would be preferred directions in space. But we know for sure that this universe has a continuous rotational symmetry - there is no preferred angle. That's the basis for the conservation of angular momentum, which we know is obeyed.

Another issue with the simple grid is that it's not by itself Lorentz invariant. The distances would have to depend on the velocity of the observer. Moving fast should then contract the grid in one direction - but that would reduce the grid width and hence the Planck length in that direction, which is also wrong - the observed Planck length should be independent of velocity. Michal asks whether there's a way to quantize space that doesn't lead to these violations. The answer is yes - and various people have tried. Loop quantum gravity is a good example - but for that to work, the underlying grid is not a grid of space at all - it's a grid of abstract connections between points. Space emerges only on larger scales.

F asks one that I wish I'd had time to discuss in the episode. When using the Heisenberg microscope to measure position, I said that the uncertainty in the final momentum of the particle is roughly equal to the momentum of the photon used to measure that position. F points out that surely we should have some idea how that momentum transfer should happen, so we should be able to calculate the final momentum of the particle - so then where is the uncertainty? The answer is that there's uncertainty in how the photon hits the particle. Dead-on? A glancing blow to the left or right? In the former, the full momentum of the particle is away from the photon.
In the latter, it's lateral - which means the range in possible momentum along each axis is roughly the momentum of the photon.

Harys John points out that the Kessler syndrome is like zombies - getting hit by other space debris turns you into space debris. Which could mean there's space debris out there pretending it hasn't been bitten. It's worse than that, actually. It's like if you got bitten by a zombie and exploded into 1000s of tiny zombies. Moving at 7.6 km/s, so these would be fast zombies à la 28 Days Later - formally the worst zombie type. Fortunately there's no sound in space, so you can't hear all the gurgling and screaming that must wreathe the Earth at all times. Small blessings.
Info
Channel: PBS Space Time
Views: 666,548
Keywords: Space, Outer Space, Physics, Astrophysics, Quantum Mechanics, Space Physics, PBS, Space Time, Time, PBS Space Time, Matt O’Dowd, Astrobiology, Einstein, Einsteinian Physics, General Relativity, Special Relativity, Dark Energy, Dark Matter, Black Holes, The Universe, Math, Science Fiction, Calculus, Maths, Holographic Universe, Holographic Principle, Rare Earth, Anthropic Principle, Weak Anthropic Principle, Strong Anthropic Principle
Id: vgYQglmYU-8
Length: 19min 35sec (1175 seconds)
Published: Wed Jun 23 2021