I have good news and bad news. Bad news first: two years ago we reported on the Crisis in Cosmology. Since then, it’s only gotten worse. And the good news is also that the crisis in cosmology has gotten worse, which
means we may be onto something! The most exciting thing for any scientist is when something they thought they knew turns out to be wrong. So it’s no wonder that many cosmologists are starting to get excited by what has become known as the Hubble tension, or the crisis
in cosmology. The “crisis” is the fact that we have two extremely careful, increasingly precise measurements of how fast the universe is expanding which should agree with each other, and yet they don’t. We first reported on the growing hints of this tension 2 years ago. Back then, the most likely explanation was that new, refined measurements would bring the numbers into agreement. So far that’s not been the case. But just recently, one of these methods
received a massive refinement due to the Gaia mission and its unprecedented survey of a billion stars in the Milky Way. And guess what - the tension is now even tenser. So is it time to rethink all of cosmology? Before we can even think about that, we should probably do a refresher on what the issue actually is. So you may have heard that the universe is expanding. Space on the largest scales is stretching, throwing galaxies apart from
each other. We’ve talked about how astronomers first figured this out. Long story short
- when a distant galaxy’s light travels to us through the expanding universe it gets
stretched out - its wavelength increases. If we also know how far that light traveled - the distance to the galaxy - then we can figure out the rate at which space is expanding, at least along the path to that galaxy. Combine the redshifts and distances of many, many galaxies and you have the expansion rate of the universe, typically expressed as the Hubble constant after Edwin Hubble, the guy who first properly measured it back in 1929.
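If you like seeing the arithmetic spelled out, here’s a minimal sketch in Python - the galaxy and its numbers are made up for illustration, and the real analysis averages over many galaxies and treats larger redshifts much more carefully:

```python
# Minimal sketch with made-up numbers (not a real dataset or pipeline).
# At low redshift a galaxy's recession velocity is roughly v = c * z;
# the Hubble constant is that velocity divided by the galaxy's distance.

C_KM_S = 299_792.458  # speed of light in km/s

def hubble_constant(redshift, distance_mpc):
    """Expansion rate in km/s/Mpc from one galaxy's redshift and distance."""
    velocity_km_s = C_KM_S * redshift  # low-redshift approximation
    return velocity_km_s / distance_mpc

# A hypothetical galaxy: redshift 0.0168 at a distance of 70 megaparsecs
print(hubble_constant(0.0168, 70.0))  # roughly 72 km/s/Mpc
```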
By comparison, getting the distances is much, much trickier than getting the redshifts. It depends on a long chain of distance measurements that we call the cosmic distance ladder. First you measure distances to objects in the solar system - then use those to measure distances to nearby stars, then more distant stars, then nearby galaxies, then distant galaxies, and so on. If one of those distance measures is wrong, all the subsequent rungs of the distance ladder are off. Hubble’s distance measurements were based on a method pioneered by Henrietta Swan Leavitt. She developed one of the first so-called standard candles - objects whose true brightness, or luminosity, can be known.
Knowing the true luminosity of an object means you can figure out its distance just by observing how much its brightness has been dimmed by that distance. Swan Leavitt realized that a type of pulsating star known as a Cepheid variable has a pulsation rate that depends on its luminosity. Measure the pulsation rate and you know its true brightness, and so can find its distance. And if that Cepheid is in another galaxy, you have the distance to that galaxy too.
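As a back-of-the-envelope sketch of that logic - hypothetical numbers, and real Cepheid work uses magnitudes plus careful corrections for dust and calibration:

```python
# Minimal standard-candle sketch (hypothetical numbers, not a real calibration).
# Inverse-square law: measured flux F = L / (4*pi*d^2), so a known luminosity L
# and a measured flux F give the distance d.
import math

PARSEC_M = 3.086e16  # metres in one parsec
L_SUN_W = 3.828e26   # solar luminosity in watts

def standard_candle_distance_m(luminosity_w, flux_w_per_m2):
    """Distance in metres from a known luminosity and a measured flux."""
    return math.sqrt(luminosity_w / (4 * math.pi * flux_w_per_m2))

# A hypothetical Cepheid: its pulsation period says it shines at ~10,000 Suns,
# and we measure a flux of 2e-13 watts per square metre.
d = standard_candle_distance_m(1e4 * L_SUN_W, 2e-13)
print(d / PARSEC_M, "parsecs")  # roughly 40,000 parsecs
```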
Cepheid variables are great standard candles, but they’re just stars, and are too faint to see beyond a certain distance. In the 1990s two teams of astronomers employed a new type of standard candle - the incredibly bright Type Ia supernovae that result when a white dwarf star explodes after cannibalizing its binary
partner. Using these supernovae to get distances to galaxies halfway across the universe, they found something totally unexpected - not only is the universe expanding, but that expansion is accelerating. And so dark energy was discovered - a mysterious and ubiquitous energy that grows as the universe grows, speeding up its expansion. Dark energy very likely holds deep, deep
clues about the fundamentals of reality. With its discovery it suddenly became VERY important to perfect our measurements of the expansion rate - both to confirm dark energy’s existence and to learn about its nature. And this is where our story splits. There are, broadly, two approaches to improving that measurement. One is to double down on the old method - find more Type Ia supernovae and improve those distance measures. The other is to find a totally independent measurement of the expansion rate. A good reason to do that is that the
supernova method is a pretty high rung on the cosmic distance ladder - which means if any rung below it is broken, the method fails. So different teams of astronomers pursued both approaches - and this is where the crisis emerged. One alternative method for getting the expansion rate is to study the oldest light in the universe - the cosmic microwave background. This
light was released only a few hundred thousand years after the Big Bang, and carries with it vast information about the universe’s early state. I’ll leave you to watch our previous video on the subject to see how our map of the CMB using the Planck satellite can give us the expansion rate. The Planck team calculated a Hubble constant of 67.6 km/s/Mpc - let’s not worry about the weird units right now. They also claim an uncertainty of about half a percent, making it the most precise measurement of the expansion rate ever made. Meanwhile, Adam Riess, one of the
Nobel-winning discoverers of dark energy has doubled down on the supernova
method. A couple of years ago his team published a new Hubble constant
of 73.5 +/- 1.5 km/s/Mpc. That’s in the same ballpark, but far
enough off to raise many, many eyebrows. One possible explanation for the difference is that the nature of dark energy has changed over time. The Planck team’s Hubble constant assumes that dark energy has had a constant density for the
entire age of the universe. That’s what you expect for the simplest models of what dark energy might be. But if dark energy HAS changed over time, it could explain the discrepancy AND indicate that dark energy is even weirder than we thought. It’s hard to overstate how huge a discovery that would be. So you can see how it might be nice
to find out one way or the other if the difference between the Planck
and supernova results is real. Most people still think that there are
unknown errors that are affecting one or both. For example, the cosmological distance
ladder could have a broken rung. The supernova standard candles are calibrated based on distances from our good-ole Cepheid variables in galaxies where both are observed. But those distant Cepheids are in turn calibrated based on Cepheids in our own galaxy, for which we can get distances by a method that’s much more reliable. That method is stellar parallax - and it’s
about as direct a method as you can get, short of building a giant space ruler. Ultimately, refining the supernova distance measurements comes down to refining parallax measurements,
and that’s what we’ve finally achieved. You’re already familiar with parallax.
Place a finger in front of your eyes and blink left and right. Your finger moves
relative to the background, which I guess is me in this case. Move your finger away and the displacement shrinks; bring it closer and it grows. We can use this same trick to measure the distance to stars. As the Earth orbits the Sun over the course of the year, nearby stars appear to shift relative to more distant stars. That’s stellar parallax, and our quest to measure it has been central to understanding our universe for hundreds of years.
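The geometry boils down to one line of arithmetic. Here’s a minimal sketch - the Proxima Centauri parallax is the one real, well-known number; the second value is just illustrative:

```python
# A star with an annual parallax angle of p arcseconds lies at 1/p parsecs -
# that's literally how the parsec is defined.

def parallax_distance_pc(parallax_arcsec):
    """Distance in parsecs from a measured parallax angle in arcseconds."""
    return 1.0 / parallax_arcsec

print(parallax_distance_pc(0.77))    # Proxima Centauri: about 1.3 parsecs
print(parallax_distance_pc(0.0005))  # a 0.5 milliarcsecond parallax: 2,000 parsecs
```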
Prior to the invention of the telescope, the fact that we didn’t see obvious stellar parallax was taken as evidence that the Earth does NOT orbit the Sun. It turns out that the stars are just so
far away that you need careful observations with quite a good telescope to see
parallax in even the nearest stars. And so it was that, starting in 1912, Henrietta Swan Leavitt’s Cepheids - anchored by parallax measurements of examples in our own Milky Way - became standard candles and founded our distance ladder, which ultimately led to the discovery of dark energy. But this feels like a bit of a house of cards - the ladder was entirely dependent on the relatively few Cepheids that are close enough for parallax measurements. Things started to get better when we put telescopes in space - above the blurring effect of Earth’s atmosphere it’s possible to make better position measurements. The Hubble Space Telescope has done great work here, and so has the European Space Agency’s Hipparcos satellite, which tracked the motions of 100,000 stars in our local patch of the galaxy. But to really nail down the lowest
rung of the distance ladder, we need a lot more Cepheid
parallaxes to much greater distances. And that’s what ESA’s Gaia mission has given us. Parked in an orbit around the Sun-Earth L2 point, well beyond the Moon, Gaia scans the sky year after year, mapping the structure and motion of a good fraction of the Milky Way galaxy. Gaia is making the most accurate catalogue yet of parallax measurements; for the nearest, brightest stars it’s 200 times more accurate than any previous measurement. Gaia has allowed us to recalibrate Cepheid variables as standard candles, which in turn enabled a recalibration of Type Ia supernovae - which in turn gave Adam Riess and team a refined measure of the Hubble constant. So what do you think - do the supernova and Planck results agree? Not in the least. The Gaia-based Hubble constant of 73.2 km/s/Mpc seems to confirm the previous Type Ia supernova result, now with more certainty about the distance ladder it’s based on. Before we start jumping up and
down and yelling about new physics, remember that we’re level-headed scientists. Two independent methods aren’t enough. We need more - and we have some great
options that will either break the tie between Planck and the supernovae, or confirm that the difference is real. We’ve already talked about one of these options. It’s to look for vast ring-like patterns in the way galaxies are scattered across the universe and use those rings as a sort of standard ruler. These “baryon acoustic oscillations”
are the fossils of ancient sound waves that reverberated through the hot, dense plasma of the early universe. Now those ripples are frozen into the distribution of galaxies that formed from that matter. The baryon acoustic oscillations seem to be coming down on the side of the Planck
result - a Hubble constant in the high 60s. Another extremely promising method is
gravitational lensing - the bending of light around massive objects due
to their warping of spacetime. One manifestation of this is when a distant quasar - a giant, gas-guzzling black hole - happens to be closely aligned behind a more nearby galaxy. Then, that quasar’s light travels multiple paths through this gravitational lens, resulting in multiple
images of the quasar from our point of view. Quasars are violent beasts - the maelstrom of gas fluctuates in brightness as it spirals into the black hole. And so we see lensed quasar images flicker - but they flicker out of sync. There’s a time offset due to the fact that
these different paths through the universe have slightly different lengths. By measuring the time delay in these flickering lenses, we can get a measurement of cosmic distances, and with that a measurement of the expansion rate that’s independent of the cosmic distance ladder.
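Very roughly - and as a toy sketch rather than anything like a real lens model - the key scaling is that, for a fixed lens geometry, the predicted delay goes as one over the Hubble constant. The function and numbers below are hypothetical:

```python
# Toy sketch: for a fixed lens model the predicted time delay between images
# scales as 1/H0, so a measured delay rescales the model's reference H0.

def h0_from_time_delay(measured_delay_days, predicted_delay_days, reference_h0=70.0):
    """Rescale a reference H0 (km/s/Mpc) by the predicted-to-measured delay ratio."""
    return reference_h0 * (predicted_delay_days / measured_delay_days)

# Hypothetical lens: the model predicts a 100-day delay assuming H0 = 70,
# but we measure 96 days between the flickers of two images.
print(h0_from_time_delay(96.0, 100.0))  # roughly 73 km/s/Mpc
```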
So far we’ve only done this with a small number of lenses and so the uncertainty is large - but published results give a Hubble constant in the low 70s - so in agreement with the supernova guys. But this game is about to take off, with upcoming giant surveys set to discover thousands of new lenses that should massively improve this measurement. And before too long we may even be able to use gravitational waves from merging black holes to measure the Hubble constant. These waves get stretched by the expanding universe, just like light does. But unlike light, they also encode information about the distance they’ve traveled, and so can be used to measure the expansion rate without the cosmic distance ladder. We’re calling these black hole mergers “standard sirens”, and while the error bars they give are still
large, they’ll only get smaller over time. That is where the crisis stands - it’s
increasingly clear that there’s a hole in our understanding of the universe - whether it’s a crack in the rung of the cosmic distance ladder or something more fundamental about how the universe expands. Scientists love being wrong - because when you find the source of that wrongness, it can only lead to greater understanding - in this case, of the strange forces driving
our ever-expanding spacetime. As always, I want to give a shoutout to all of our Patreon supporters - your continued support is such a huge help. If anyone would like to pitch in - even a couple of bucks a month makes a real difference and also gets you access to the Space Time discord where you can nerd-out 24-7. And today’s extra special shoutout goes to Sandy
Wu, who’s supporting us at the Big Bang level. Sandy, these are such strange and uncertain times - I mean seriously, we don’t even know what Hubble’s constant is. But your support grants some much-needed stability to spacetime - the YouTube show, not the expanding fabric of the universe - that’s still freaking everybody out. Today we’re doing comments
for the last two episodes, in which we explored the connection between gravity, light, and the flow of time. Timebucks asks how we can be traveling at the speed of light in the time dimension. I'm glad you asked, because this notion gets stated without much justification. The idea that massive objects travel through time at the speed of light is
just one way to interpret the math of special relativity. In relativity, there's this thing called the spacetime interval, which describes the separation between two events in space and time. Its square is the time interval multiplied by the speed of light, all squared, minus the sum of the squares of the x, y, and z spatial intervals. You need the speed of light in there to give time the same dimensions as space. Your speed through spacetime - the magnitude of your 4-velocity - is the change in spacetime interval divided by the change in your own time, which for a motionless observer is just the ordinary elapsed time. And if you are motionless, the spatial intervals don't change, so your spacetime interval is just c times delta-t - divide by delta-t and you're left with c, the speed of light.
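Written out - a quick sketch of that algebra, noting that sign conventions differ between textbooks:

```latex
\[
  (\Delta s)^2 \;=\; (c\,\Delta t)^2 - (\Delta x)^2 - (\Delta y)^2 - (\Delta z)^2
\]
% For an observer at rest, \Delta x = \Delta y = \Delta z = 0, so
\[
  \Delta s = c\,\Delta t
  \qquad\Longrightarrow\qquad
  \frac{\Delta s}{\Delta t} = c .
\]
```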
That's where the idea comes from in the math. But what does this mean physically? Does it mean anything physically? That's less clear, because we don't really know what time is. Nor space, for that matter - or whether they're really the same type of thing, dimensionally speaking. We would also need to justify why the c in the spacetime interval has to be the speed of light. It's worth a full episode to explore that one, and we'll definitely get around to it.
But if you want a much better description of the math, check out Sabine Hossenfelder's episode on this - link in the description. John Smith noted that Huygens' principle seems eerily similar to the double slit experiment if there were an infinite number of slits. Well, John Smith, you're in good company. There's a story of a young physics student hearing the description of the double slit experiment, in which the professor describes how you can figure out the interference pattern by thinking of circular waves originating from the slits and calculating how those ripples add up at the screen. The student raises his hand and asks "but what if you had 3 slits?" - so the professor says you just add up the ripples of three waves. The student trolls the teacher with "what about four slits?", to which the professor replies "obviously, you add up the ripples of four waves." And then "what about 5 slits?", and so on, until finally "what about infinite slits?" That student was Richard Feynman, or so the story goes - and he'd just figured out the Huygens-Fresnel principle all on his own - a couple of hundred years late, but independently nonetheless. So, John Smith, you're another 100 years late, but good work anyway.
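If you want to play with that recipe yourself, here's a minimal sketch - idealized point slits and arbitrary units, purely to show the "add up the ripples" bookkeeping:

```python
# Huygens-style sum: each slit contributes a complex "ripple" whose phase is set
# by its path length to a point on the screen; add them and square for brightness.
import numpy as np

wavelength = 1.0          # arbitrary units
n_slits = 5               # the "what about 5 slits?" case
slit_spacing = 10.0
screen_distance = 1000.0

slit_y = slit_spacing * (np.arange(n_slits) - (n_slits - 1) / 2)
screen_y = np.linspace(-200.0, 200.0, 2001)

# path length from every slit to every screen point
paths = np.sqrt(screen_distance**2 + (screen_y[:, None] - slit_y[None, :])**2)
amplitude = np.exp(2j * np.pi * paths / wavelength).sum(axis=1)
intensity = np.abs(amplitude)**2

print(intensity.max(), intensity.min())  # bright fringes vs near-dark fringes
```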
Tom Kerruish and clearnightsky also saw the Feynman connection - pointing out that Huygens' principle feels like Feynman's path integral formulation of quantum mechanics. For those who don't know, the path integral calculates the trajectory of a quantum object by adding up all possible - and even classically impossible - paths between two locations or states. The unlikely paths cancel out, leaving you with a prediction of the path it'll actually take. We did an episode - search for path integral on the channel home page. But it seems that Feynman was influenced by Huygens-Fresnel, as well as by the principle of least action. So, yeah, nice work reinventing
all of physics, guys. Several of you commented that this whole
gravity bending light thing might explain why stormtroopers have such terrible aim. I think you may be onto something. Think about it - these poor guys have to fight in so many
different gravitational fields - Star Destroyers, the Death Star, forest moons, ice planets - it must be hard to recalibrate every time. Who knew George Lucas was such a Jedi master of general relativity? But the whole series makes so much more sense when you think about it. For example, when Greedo shot before Han in the “updated” releases, we were really just watching the same scene from the perspective of a passing Star Destroyer in hyperspace - its superluminal motion reversed the apparent causal ordering. In other words, shot first, Han did.