If we ever want to simulate a universe, we should probably start by learning to simulate even a single atomic nucleus. But it’s taken
some of the most incredible ingenuity of the past half-century to figure
even that out. So today I get to teach you how to simulate
a very very small universe. Physics has been insanely successful at
finding the underlying rules by which the universe operates. It helps that a lot
of those rules seem to be mathematical. By writing down the laws of physics as
equations, we can make predictions about how the universe should behave - that lets
us test our theories and basically become wizards capable of predicting the future
and manipulating the foundations of reality. But our wizarding only works
if we can solve the equations, and it's impossible to perfectly calculate the evolution of all but the simplest systems. That’s especially true when we study the
quantum world, where the information density is obscenely high. As we saw in a previous
episode, it takes as many bits as there are particles in the universe to store all the information in the wavefunction of a single large molecule. We also talked about the hack that lets us do it anyway - density functional theory. DFT is good for simulating the electrons in an atom. But the behavior of electrons is baby stuff compared to what goes on in the atomic nucleus. Every proton and neutron is composed of three quarks stuck together by gluons.
Well, actually, that’s a simplification. Every nucleon is a roiling, shifting swarm
of virtual quarks and gluons that just LOOKS like three quarks from the outside.
The messy interactions of quarks via gluons are described by quantum chromodynamics,
or QCD, in the same way that quantum electrodynamics describes the interactions of electrons and any other charged particle via photons. We’re going to come back
to a full description of QCD very soon, but you don’t need it for this video. Today we just need to understand why it’s complicated. Instead of the one type of charge in QED, in QCD there are three - which we call colour charges, hence the chromo in chromodynamics. Also quarks never appear on their own - they’re always bound to other quarks in composite particles called hadrons, of which protons and neutrons are examples. To test QED we can chuck a photon at an electron and see what happens. But to test QCD, we can’t just poke a quark
with a gluon. Instead we need to figure out what the theory predicts about properties of hadrons that are actually measurable. And that’s near impossible because of the third problem. The force mediated by gluons is very strong - earning it the name the strong force. And that
strength turns the interior of a hadron into a maelstrom of activity which can’t possibly be calculated on a blackboard, and at first glance looks impossible to simulate on any computer we could ever build. Or would be if it weren’t for the fact that people are exceptionally
clever, and came up with lattice simulations. Now before we do the hard stuff, let’s review the comparatively “easy” quantum electrodynamics. Say we want to predict what happens when two electrons are shot towards each other. We can actually calculate the almost exact probability of them bouncing apart with a given speed and angle. We do that by adding up all the possible ways that interaction could happen. For example there are various ways the first electron could emit a photon which is absorbed by the second, or vice versa. Or it could happen via two or more photons, or one of those photons could spontaneously form an electron-positron pair before becoming a photon again, and so on. Each family of interaction types is represented by a Feynman diagram, and quantum electrodynamics gives us a recipe book for adding up the
probabilities. We talked about this stuff in some previous videos. But there are literally infinite ways this interaction could happen, each more complex than the last. So how do you know when to stop adding new diagrams? We kind of got lucky with that in the case of QED. As the interactions get more complicated, their probabilities get smaller and smaller.
Diagrams with more than a few twists and turns add almost nothing to the probability, so we only have to include the simplest few levels. Each pair of vertices on a Feynman diagram represents the probability of a pair of electrons interacting with the electromagnetic
field - emitting and absorbing a virtual photon. And there’s a set probability of
that happening each time - it’s around 1/137. So every time you add another pair of vertices to a Feynman diagram, the interaction it represents becomes 137 times less likely. A diagram with only 6 vertices is nearly 20,000 times less likely than the simplest 2-vertex diagram. So we just choose the precision we want for our calculation and ignore any interactions that nudge the probability by less than that precision. This 1/137 thing is the fine structure constant. It’s the coupling strength between the electron and electromagnetic field. The smallness
of the fine structure constant means the electromagnetic interaction is relatively
weak. Relative to the strong force anyway. OK, back to the strong force. Now we have two quarks hurtling towards each other. They’re going to interact by the strong
force, which is mediated via virtual gluons of the gluon field rather than virtual
photons of the electromagnetic field. We can draw Feynman diagrams of these
interactions, now with curly gluon lines. Presumably then to calculate the probability of a given interaction we again just add up diagrams, with the probability of each diagram
determined by the number of vertices. The coupling strength for the strong nuclear force is ingeniously named the strong coupling constant. It is much higher than the fine structure constant, of order 1, though it depends on energy scale. That’s what makes the strong force strong, and it’s also what makes strong force interactions very difficult to calculate. You no longer have the luxury of throwing away all but the simplest Feynman diagrams when you’re calculating the interaction probability for your pair of quarks. Adding new vertices doesn’t decrease the probability anywhere near as quickly as with QED. This means you can’t possibly do
this calculation with pen and paper. In fact it’s extremely challenging to
do QCD with the Feynman diagram approach even with computers. Now before any particle physicists start shouting at me, I’ll quickly add the caveat that there are some unusual cases where the strong coupling constant can become small and quarks can be understood with Feynman diagrams - this is the phenomenon known as asymptotic freedom. But if I tried to explain that too we would be here all day. In general, if we can’t calculate what quantum chromodynamics predicts for the behavior of quarks, how can we even test the theory?
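To get a feel for the numbers behind that vertex-counting argument, here’s a toy calculation. The coupling values are rough illustrative figures from the discussion above, not precise physics:

```python
# Toy comparison of how quickly Feynman diagram contributions shrink
# in QED versus QCD. Each extra pair of vertices multiplies a
# diagram's probability by roughly the coupling constant.
ALPHA_QED = 1 / 137  # fine structure constant (approximate)
ALPHA_QCD = 1.0      # strong coupling constant, of order 1 (illustrative)

def relative_probability(coupling, vertices):
    """Probability of a diagram with the given (even) number of
    vertices, relative to the simplest 2-vertex diagram."""
    extra_pairs = (vertices - 2) // 2
    return coupling ** extra_pairs

# In QED the 6-vertex diagrams are ~20,000 times suppressed, so the
# series can be truncated. In QCD every order contributes comparably.
print(relative_probability(ALPHA_QED, 6))  # ~5.3e-05, i.e. ~1/18769
print(relative_probability(ALPHA_QCD, 6))  # 1.0 - no suppression at all
```

This is exactly why the perturbative series can be cut off early for electromagnetism but not for the strong force.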
Well, we need to abandon Feynman diagrams. In fact, we need to abandon the idea of particles altogether. See, it turns out that the virtual particles that we were using to calculate particle interactions don’t actually exist. We’ve talked about that fact previously. Real particles
are sustained oscillations in a quantum field that have real energy and consistent properties. Virtual particles are just a handy calculation tool - a way of representing something deeper. They represent the transient disturbances in quantum fields due to the presence of real
particles that couple to those fields. The coupling
between quark and gluon fields is so intense that the disturbances of those fields are way too tumultuous to be easily approximated by virtual particles. Instead we have to try to model the field more directly. And that’s where lattice QCD comes in. It’s an effort to model how the quantum fields themselves evolve over the course of a strong force interaction. Similar to how Feynman diagrams work, to do this you need to account for all possible paths between the starting and final field configuration to
get the probability of that transition happening. Now, there’s a good reason we don’t do this for electromagnetism: there’s an astronomical number of configurations that the field could pass through in the intervening time. No supercomputer could do that even given the entire life of
the universe. For QED, Feynman diagrams let us reduce the number of field configurations by approximating them as virtual particles. But for QCD we have to stick with fields, so we need a different hack. In fact we need a few of them. Let’s make sure we understand exactly what we’re trying to do here. We want the probability that some wiggly quantum field wiggles its way between two states. Let’s go back to electromagnetism just for a second because there’s an analogous case there, and one that we already covered. Before Richard Feynman came up with his famous diagrams, he devised a way to calculate quantum probabilities called the Feynman path integral. It calculates the probability that a particle will move from one location to another by adding up the probabilities of all possible paths between those points. Actually, it also includes the impossible paths, but no time to explain that now. Every time you add the probability
for a single Feynman diagram you’re actually adding infinite possible
trajectories using Feynman path integrals. In lattice QCD, we want to do something like a Feynman path integral, but instead of trajectories through physical space we add up trajectories through the space of field configurations. And that is much harder to do, for three reasons. First, any patch of spacetime technically contains an infinite number of points and
no computer can hold an infinite amount of memory. So we need to pixelate spacetime so there’s only a finite number of points. But even then, there’s still an astronomical
number of ways that the field can move from the starting to final configuration.
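To see just how astronomical that count is, here’s a sketch that assumes each lattice site can take only two field values - a drastic undercount, since real quark and gluon fields are continuous:

```python
# Even after pixelating spacetime, the number of field configurations
# explodes. Take a small 8x8x8x8 lattice where each site holds just
# one of two possible field values (real fields are continuous, so
# this wildly undercounts the possibilities).
sites = 8 ** 4                    # 4,096 lattice sites
configurations = 2 ** sites       # every way to assign values to sites
print(sites)                      # 4096
print(len(str(configurations)))   # 1234 - a number with over 1,200 digits
```

Summing over every one of those configurations directly is hopeless, which is why the next trick is needed.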
And because these configurations are messy, we can’t do a simple integration like in
the Feynman path integral. So our second trick is called Monte Carlo sampling. This
is an extremely common computational method in which you do your calculation based on
randomized selections from some distribution. So we randomly choose a selection of
field configurations of the pixelated spacetime that get us from the start to
the end of our interaction. But these can’t be totally random because some of these paths are still more likely than others. In the Feynman path integral, the probability of each path comes from adding up all the little shifts in the particle phase from each step. Then at the end of the path, you add together the phases of all paths to get a probability. This is exactly like the famous double slit experiment, where the probability of a particle landing
at a certain point on the screen depends on whether the phases of different paths through the two slits add together or cancel out. Our quantum field is a 3-D pixelated lattice that evolves through time. As with the path integral, each time step results in a complex-valued
phase shift at each spatial point. Those complex numbers are fine in the Feynman path integral because they get squared and disappear. But they’re very difficult to deal with in Monte Carlo approaches. So we need one more hack - our most ingenious yet. We’re going to pretend that time is just another dimension of space. This operation is called the Wick rotation and it eliminates the complex nature of the phase shifts. If we also “pixelate” the time dimension then we have a lattice with 4 spatial dimensions. Now our coupled quark-gluon fields look like this: a lattice of points with connections. The points are the quark field and
the connections are the gluon field. Getting rid of the complex quantum phases means this isn’t really even a quantum problem any more. The structure looks like a crystal - admittedly a 4-dimensional one, but a classical crystal. And guess what - we understand
how crystals work extremely well. We can now simulate how this lattice evolves over time using the laws of statistical mechanics. We’re not all the way there yet.
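The Monte Carlo trick described above can be shown with a far simpler problem - estimating an ordinary integral from random samples instead of summing over every point. The lattice version samples field configurations weighted by their likelihood, but the core idea is the same. This is a purely illustrative toy, not lattice QCD code:

```python
import random

random.seed(1)  # fixed seed so the toy result is reproducible

def mc_integrate(f, a, b, n):
    """Estimate the integral of f over [a, b] by averaging the
    integrand at n uniformly random sample points."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# The integral of x^2 from 0 to 1 is exactly 1/3; a hundred thousand
# random samples get close without visiting every point in [0, 1].
print(mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000))
```

The payoff is that accuracy depends on how many samples you draw, not on how enormous the underlying space is - which is what makes sampling field configurations feasible at all.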
After all, spacetime isn’t really a discrete lattice of points. But it turns
out that the things you want to calculate, like the mass of a hadron, DO depend on your choice of pixel size. But they depend on it in a very simple way. You can run your simulation multiple times for different lattice spacings to figure out that relationship. For example, the prediction for neutron mass gets larger with increasing lattice spacing, but if you draw a trend line
through your simulation results you can find the neutron mass in the case of zero spacing - a continuous spacetime. And guess what - this gives us the correct neutron mass. This trick of transforming quantum
fields into a lattice was first devised by Ken Wilson all the way back in 1974, back when you’d be lucky to fit a kilobyte of RAM on a computer. Since then, and with the help of vastly improved computational resources, lattice QCD has accurately predicted many things, from masses and decay rates of hadrons to the exotic properties of quark-gluon plasma. These simulations were also an essential part of the prediction side of the new muon g-2 results. The fact that lattice QCD even works gives us deep insights into the nature of the quantum fields. For one thing, it doesn’t use virtual particles at all, but rather simulates the quantum fields more directly. That helps us put to bed the idea that virtual particles are more than an approximation of what these messy fields are really doing during an interaction. So there is your whirlwind introduction to lattice QCD. As our computing power grows, one day we’ll likely be able to build detailed simulations of entire collections of hadrons like the nucleus of a single atom. We will never simulate
a whole universe this way - nor any way in all likelihood. But we’re going to learn so much just simulating such tiny patches of spacetime. Before we get to comments, I just want to
thank everyone who supports us on Patreon. It is incredibly helpful and greatly appreciated. If you too would like to join us on
Patreon there’s a link in the description. And no matter what level you join at you’ll get access to our hopping Discord as well as the script for every single episode! Okay on to comments for the last two episodes. There’s the one on superdeterminism and the one where we talked about what a state of matter really is. I, booba points out an error in our definition of “absolute zero” temperature. We said that it is defined as zero kinetic energy in the
particles, but actually this doesn’t account for particle vibrations, which are also a place that thermal energy can live. Better to define absolute zero as the state in which all particles are in their quantum mechanical ground states - important, because even in these ground states there is still some energy - ground-state vibrational modes for example. Thanks for the correction, I, booba. Elad Lerner tells us about another example of different states of matter appearing on different hierarchical scales, and that example is plate tectonics. The Earth’s upper mantle is mostly solid on scales less than around a kilometer. But on larger scales, it’s hot enough to flow like a liquid and move around entire tectonic plates. So in a sense it’s both a solid and a liquid - a state called plastic in earth science. Thanks for dropping the wisdom, Elad. OK, moving on to superdeterminism.
I would compliment you for asking so many excellent questions, but I guess you didn’t really have a choice in the matter. David Dunmore proposes a non-quantum
version of the EPR paradox experiment: Take a pair of gloves, put them in separate sealed boxes and give one to a friend randomly. When you look in your box and see that it’s the left glove you instantly know that your friend has the right glove. David asks whether this is a valid comparison to the entanglement experiment, done with a pair of electrons with undefined but opposite spins to each other. In the case of the gloves, their states were unknown, but still defined. In the case of the electrons, they aren’t simply unknown - they are undefined or at least each in a superposition of both up and down at the same time - and that means the actual direction gets randomly chosen at the moment of measurement. Now if the spin direction could only be up or down, then it might be impossible to tell the difference - whether the randomization happens when you put the electrons or gloves in a box, or if it happens when you open the box. But electron spin also has a directional axis - e.g. up-down, left-right, forward-back - and that gets defined by your choice of experimental axis at the moment of measurement. You’ll always measure fully-up or fully-down, no matter what direction the spin was initially prepared in. Its entangled partner then has to have opposite spin, but that opposite spin is going to be along the same axis that its partner was measured in. And it’s the correlation between the spins
measured for one electron and the choice of measurement axis used on its entangled partner that is the subject of the Bell test and the source of all this spookiness. And yes, it seems that the spin direction is really set at the instant of measurement. Unless of course our choices of measurement direction are enforced, which is what superdeterminism is saying. thepurpleberry asks whether, under
superdeterminism, we could determine the spin direction of an electron without
measuring it, only confirming it afterwards. Actually, entanglement lets us do that
without even resorting to superdeterminism. Like in the glove in a box analogy, if you can measure the spin of an electron’s entangled partner then you know what its spin is. However that measurement isn't a passive act - you will have actually forced the electron’s spin
to be in the direction of the measurement. CrazyDontMeanWrong says that the
fun thing about determinism is that it doesn't matter. Either we have
free will or we don't and never did, yet we still act as though we
do, because what else can we do? Whether or not the universe is deterministic, there's still only one question that matters, "How can I abuse the rules of reality to
survive past the death of the stars and live to see the release of Half-Life 3?” The worst part is that in a superdeterministic universe, it’s already defined that you will wait for the heat death and that it’ll have been for nothing. But at least there’s nothing you
can do about it, so why worry?