[ intro ] If you take all the humans who have ever lived, then, all told, members of our species have probably witnessed around a quadrillion
sunrises -- give or take.
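That figure is a back-of-the-envelope estimate, and you can reproduce it with some very rough demographic assumptions (the numbers below are ballpark guesses, not data from the episode):

```python
# Rough count of human-witnessed sunrises.
humans_ever_born = 110e9      # demographers' ballpark: roughly 100-120 billion people, ever
avg_days_lived = 10_000       # roughly 27 years each, dragged down by historical child mortality
print(humans_ever_born * avg_days_lived)  # ~1e15, i.e. about a quadrillion
```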
That’s a quadrillion tests of the hypothesis that the sun rises in the morning. Today’s humans use what’s called the Standard Model of particle physics to predict just about everything that happens
in the subatomic world. And, coincidentally, it has also been tested about
a quadrillion times. … At one single experiment: The Large Hadron Collider. … And in one single year: 2016. We’ve tested it plenty of other times, in plenty of other places. Which means that, in some sense, we have more evidence for the predictions
of the Standard Model than for the prediction that sunrise will
happen tomorrow. That is what it means for an idea to be well-tested
in physics. But proving something right isn’t just about
quantity. It’s also about quality. And over the years, scientists have made measurements showing that we understand ridiculously well how the
universe works. [1. Time] If a GPS’s clock is off by a millionth of
a second, it will think you’re a few hundred meters
away from where you actually are. And that’s no way to get around.
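Here’s where “a few hundred meters” comes from: GPS receivers work out position from how long radio signals take to arrive, those signals travel at the speed of light, and so a timing error turns directly into a distance error.

```python
# Distance error from a one-microsecond clock error in a light-speed signal.
c = 299_792_458           # speed of light, in meters per second
clock_error_s = 1e-6      # one millionth of a second
print(c * clock_error_s)  # ~300 meters
```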
So clocks in your phone and elsewhere are based on ones that measure very rapid shifts, or oscillations, in electrons within atoms of cesium. Those electrons oscillate at a reliable rate: After 9,192,631,770 of those oscillations, we say a single second has passed. While tiny counting uncertainties mean that
cesium clocks aren’t perfect, the best ones will take about 300 million
years to be off by as much as a second. For comparison, the best mechanical watches in the world gain
or lose a second after a day or two. But we can do even better. In strontium, electrons oscillate about fifty
thousand times faster than in cesium.
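That “fifty thousand times faster” claim is easy to check against the two standard clock frequencies:

```python
# Comparing the cesium microwave clock transition to the strontium optical clock transition.
cesium_hz = 9_192_631_770            # cesium-133 hyperfine transition; this defines the SI second
strontium_hz = 429_228_004_229_873   # strontium-87 clock transition, about 429 THz
print(strontium_hz / cesium_hz)      # ~47,000
```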
If you can keep track of the darn things, you can use them to make an even more accurate clock. The technology for measuring such quick changes is only a couple decades old, and it’s still
being perfected. But in 2018, a team was able to watch strontium
atoms so closely that their clock wouldn’t gain or lose a
single second in over a hundred billion years -- in the neighborhood of ten times the age of
the universe.
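In fractional terms -- just converting the episode’s “one second in a hundred billion years” claim, not the team’s published figure -- that’s a stability of a few parts in ten billion billion:

```python
# Fractional error implied by drifting one second over a hundred billion years.
seconds_per_year = 365.25 * 24 * 3600   # ~3.16e7 seconds
years = 100e9                           # a hundred billion years
print(1 / (years * seconds_per_year))   # ~3e-19
```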
They did it by cooling about ten thousand strontium atoms down to just fifteen nanokelvins -- fifteen billionths of a degree Celsius above
the coldest possible temperature. When it’s that cold, atoms don’t get in each other’s way as much, which let the team zero in and count those oscillations more clearly than ever before. The clock is so accurate that they’re not just thinking about using it to keep us all in sync -- it could also be useful for another reason. General relativity is the name of our modern theory of gravity, and one of its weirdest features is that time
itself should tick at slightly different rates at different elevations
above Earth’s surface. We’ve measured this effect in satellites
-- GPS wouldn’t work if we didn’t account
for it -- but we’ve never had clocks that were precise
enough to check general relativity’s odd temporal effects down here on the surface. Once they become more portable, though, these clocks might be our way of doing
it. Scientists don’t expect to see anything
too shocking from these new tests, because general relativity has already passed some
pretty incredible tests of its own. [2. Mass] When physicists talk about something’s mass, they’re really talking about two very slightly
different things. First, there’s its inertial mass. It measures how hard something is to get moving: the more inertial mass, the harder it is to accelerate an object. That’s the one you feel when you push something and it resists speeding up. Then there’s gravitational mass. That measures how much something interacts with the force of gravity -- so it’s like a sort of gravitational “charge” -- and it’s the one you’re technically measuring when you put something on a balance. Electrically charged objects respond more
to electric fields than uncharged ones. So if gravitational mass is akin to charge,
objects with more gravitational charge -- that is, mass -- feel the gravitational force
more. Viewed this way, there’s no reason inertial
and gravitational mass should have anything to do with each other. One is a kind of charge; the other is how much stuff there is. But every time we use an object’s inertial
mass to predict how it interacts with gravity, we get the right answer anyway. The two seem exactly equivalent. The classic test is to see if everything falls at the same rate regardless of its mass. If the two masses weren’t equivalent, things with more inertial than gravitational mass would fall more sluggishly -- and vice versa.
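To see why, here’s a minimal sketch of the Newtonian bookkeeping (the masses and the helper function are made-up illustration values, not experimental numbers): gravity pulls with a force proportional to gravitational mass, but the resulting acceleration divides by inertial mass, so only objects whose two masses match fall at exactly g.

```python
# Newton: force = m_grav * g, and acceleration = force / m_inert,
# so acceleration = (m_grav / m_inert) * g.
g = 9.81  # strength of Earth's gravity at the surface, m/s^2

def fall_acceleration(m_grav, m_inert):
    return (m_grav / m_inert) * g

print(fall_acceleration(1.0, 1.0))  # 9.81  -> equal masses: falls normally
print(fall_acceleration(1.0, 2.0))  # 4.905 -> more inertial than gravitational mass: falls sluggishly
```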
These tests go all the way back to Galileo supposedly dropping two cannonballs off the Leaning Tower of Pisa, and all the way forward to astronauts actually
dropping a hammer and a feather on the Moon. These and other tests established the equivalence
of inertial and gravitational mass so thoroughly that in general relativity -- that modern
theory of gravity I mentioned earlier -- they can’t be different from each other. General relativity is the best explanation
of gravity that we have, and it completely breaks if inertial and gravitational masses
aren’t equal. Enter the MicroSCOPE satellite, which held
two cylinders that were the same size but different inertial masses. The cylinders floated freely inside the satellite, which orbited Earth 120 times and measured
how Earth’s gravity tugged on both of them during the trip. According to the MicroSCOPE results, if inertial and gravitational mass aren’t
equal, the difference between them has to be incredibly
tiny: About one part in a hundred trillion. For comparison, that’s the equivalent of
measuring the distance to the Moon to within the width of a single red blood cell.
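You can verify that comparison with two rough numbers (the Moon is about 384,000 kilometers away, and a red blood cell is roughly 7 micrometers across):

```python
# One part in a hundred trillion of the Earth-Moon distance.
moon_distance_m = 3.84e8            # ~384,000 km
one_part = 1e-14                    # "one part in a hundred trillion"
print(moon_distance_m * one_part)   # ~4e-6 m: a few micrometers, red-blood-cell territory
```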
That number is all the more remarkable because gravity is actually really weak by the standards of fundamental forces. So measuring its detailed effects requires
a lot of effort. And MicroSCOPE and other experiments are part
of why astronomers can be so confident that they understand gravity -- even when it makes us think there’s weird
stuff like dark matter out there. [3. The Rydberg Constant] Compared to gravity, measuring electromagnetic
effects is a snap. Which has helped us find the Rydberg Constant
-- one of the best-verified numbers in all of
science. It lets you predict an atom’s spectrum: The colors of light that can come out when
its electrons have a little extra energy. If you can see an object's spectrum, you can
tell what elements it contains. Scientists use spectra all over the place: doctors use them to measure lead in people’s
bodies; astronomers use them to discover what stars
are made of; and they’re everywhere in between. This light show happens when the electrons
around the atoms lose a bit of energy. That energy has to be shed in an incredibly
specific quantity, which takes the form of a photon of light. And that photon will have a wavelength that
corresponds to its energy. Which is a fancy way of saying it’ll be
a specific color. But to predict these things, we need a constant for the math to work out. If you can measure the energy of the light
that’s emitted, and you know the extra energy the electrons
had in the first place, then you can reverse engineer yourself the
Rydberg Constant. Except that of course it’s not quite that
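For hydrogen-like atoms, the textbook version of that relationship is the Rydberg formula: one over the emitted wavelength equals the Rydberg Constant times the difference of 1/n² between the two energy levels. A minimal sketch, with the jump from level 3 to level 2 chosen purely as an example:

```python
# Rydberg formula: 1/wavelength = R * (1/n_low**2 - 1/n_high**2)
R = 1.0973731568e7   # Rydberg Constant, in 1/m (rounded)

def wavelength_nm(n_low, n_high):
    inverse_wavelength = R * (1 / n_low**2 - 1 / n_high**2)
    return 1e9 / inverse_wavelength

print(wavelength_nm(2, 3))  # ~656 nm: hydrogen's familiar red H-alpha line
```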
Except that of course it’s not quite that simple. Electrons get in their own way, stretching
out and altering the light that they emit. So you can’t just measure the light from
a single atom, or even a single kind of atom. To measure the Rydberg Constant, scientists have to study three different kinds
of small atoms: Regular hydrogen; helium; and deuterium, which is hydrogen with an extra neutron. Scientists give the atoms a little extra energy, split the light that comes back out into its
constituent colors, and use those colors to measure the Rydberg
Constant. And the number they get looks like this, where those last two numbers in parentheses are how much the very last digits could be
wrong. That quantity is technically called the uncertainty. And as a fraction of the overall number, it’s telling us that we know the Rydberg
Constant with as much error as we’d know the distance
from your eye to the Moon if we had to worry about blinking. Because the thickness of your eyelid changes
that distance by about ten times more than the uncertainty
we have in the Rydberg constant. Yes, that’s thicker than a red blood cell
-- but in a way, this number is actually more impressive than
knowing two masses are the same. It tends to be easier to compare two things
-- like masses -- than to come up with a number like the Rydberg
constant out of the blue. So the fact that it’s so precise is pretty
nifty. [4. The Electron g Factor] The Rydberg Constant might be one of the most
precise measurements out there, but there’s at least one that beats it. It’s called the electron g factor, and its value is arguably the best match between a prediction and a measurement in the history of science. The g factor has to do with an electron’s
anomalous magnetic moment, which is one of those names that sounds more
complicated than it is. Electrons are the tiny negatively charged
particles in atoms that have already come up a couple times in
this video. They behave as if they’re spinning, and spinning things with electric charge make
magnetic fields -- that’s where the “magnetic” part comes
from. And “moment” is the word physicists use
to describe how strong a magnetic field is. Putting that all together, the electron’s magnetic moment is the strength
of its magnetic field. And it’s anomalous because it’s weird. It’s not exactly what you’d expect if
you imagine the electron as a tiny spinning ball of charge, because electrons aren’t little spheres
and they also interact with the empty space around them. Hence: The electron’s anomalous magnetic
moment. The g factor is a measure of just how anomalous
it is. The great thing about the g factor is that,
like the Rydberg Constant, it’s fairly straightforward to measure it
in an experiment. But it’s also possible to directly predict
what it should be based on parts of the Standard Model of particle
physics. So it’s another place where we can directly
check if our theories match reality. And with the g factor, they don’t just match. They really match. The g factor gets measured by using an outside magnetic field to split
up electrons whose own magnetic fields point in different directions. There are a bunch of different ways this is
done in practice, but altogether they’ve given us a measured
g factor that looks like this -- where, again, the parentheses are the amount
the last couple digits could be wrong. And by calculating based on the Standard Model, scientists get a number that looks like this.
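To give a flavor of where the predicted number comes from (this is only the very first correction in the full Standard Model calculation -- the famous Schwinger term -- not the whole prediction): g comes out to roughly 2 × (1 + α/2π), and even that crude version agrees with the measurement to a few parts in a million.

```python
import math

# Leading-order estimate of the electron g factor: g ≈ 2 * (1 + alpha / (2 * pi)).
alpha = 1 / 137.035999       # fine-structure constant, approximate
g_estimate = 2 * (1 + alpha / (2 * math.pi))
g_measured = 2.00231930436   # measured value, truncated

print(g_estimate)            # ~2.0023228
print(g_measured)            # agrees with the estimate to a few parts in a million
```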
The precision of that measurement is like knowing the distance to Mars to within the length of a couple thumbtacks. And it’s part of what people mean when they
say that the Standard Model is one of the best-verified
ideas in human history. Better verified than knowing the sun will
come up tomorrow! [5. Charge] In chemistry, we learn that if an atom has the same number
of positively charged protons and negatively charged electrons, it’s electrically “neutral”: From far away, it looks like there’s no
charge there at all. But that’s only true if protons and electrons
have exactly opposite charges: Protons are plus one; electrons are minus
one. There are good reasons to think this is true: If it weren’t, even a tiny difference would
add up across the trillions and trillions of protons
and electrons in just about anything around you. We’d definitely notice like constant lightning
bolts shooting out of everything. But that was a little too hand-wavy for a
pair of physicists in the seventies, who verified that if electrons and protons
don’t have exactly opposite charges, they can only be different by less than about
one part in a billion trillion -- that’s a one with twenty-one zeros after it. Which is something like knowing the distance
to the Sun to within the diameter of your DNA. What they did was put a bunch of a heavy gas called sulfur hexafluoride into a container
about 20 centimeters wide. They put the gas in an electric field that
flipped back and forth. If protons and electrons didn’t exactly
cancel, the electric field would make the gas particles
start to push each other around. Flipping the field back and forth would then
make the gas start vibrating, creating sound waves that could be picked
up on microphones around the experiment. They did that, and the mics didn’t hear
anything, and that told them that electrons and protons
must have exactly matching charges -- or, at least, very close to it.
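For a sense of scale of what that bound means (the one-mole sample size below is a made-up round number, just for illustration): at the largest imbalance the experiment allows, an entire mole of sulfur hexafluoride could be hiding only a few femtocoulombs of net charge.

```python
# Net charge in one mole of SF6 if each proton-electron pair were mismatched
# by one part in a billion trillion (the experiment's upper bound).
e = 1.602e-19             # elementary charge, in coulombs
avogadro = 6.022e23       # molecules per mole
pairs_per_molecule = 70   # sulfur has 16 protons; each of the six fluorines has 9
imbalance = 1e-21         # fractional mismatch ruled out by the experiment

print(avogadro * pairs_per_molecule * imbalance * e)  # ~7e-15 C, a few femtocoulombs
```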
Scientists don’t make these absurdly precise measurements just to one-up each other. Ultimately, we want to understand the universe
-- especially the parts we’re clueless about
like dark matter and dark energy. They have no place in our current models,
which means those models have something wrong with them. Every one of these ultra-precise measurements
is an opportunity to find where those models fail. And every time a team finds exactly what they
expect, it gets harder to make room for something
brand-new to sneak in. Because if you know the distance to the Moon
to within a red blood cell, you can be pretty sure there’s not an elephant
standing there. Modern physicists hear thumping feet and trumpeting
trunks. But when they look closely, there’s no elephant. Not yet. Thanks for watching this episode of SciShow. Writing episodes like this is not easy, and we have an amazing community of supporters who allow us to do it. If you want to join them, you can get started at patreon.com/scishow. [ outro ]