GGI Tea Breaks' Seminars - Adam Riess (JHU) & Licia Verde (ICREA)

Captions
Okay, welcome everybody. I welcome you to the first focus meeting of the GGI Tea Breaks series. The focus meeting is a new format that we think can be very useful. We have two great speakers today, Adam Riess and Licia Verde. They will each give a 40-minute presentation highlighting different aspects of the same interesting and topical problem, and after that we will open a discussion. We hope that everybody can come up with questions and challenge the speakers, so that everybody can learn something new, and hopefully we can get some ideas for future research too. So let me start by introducing the speakers. Adam Riess won the Nobel Prize in 2011 for his work on detecting the acceleration of the universe through a careful study of Type Ia supernova standard candles, and since then he has gone on and on by perfecting the technique of using supernovae and the distance ladder to measure local distances in the universe. By now he has a 2% determination of the Hubble parameter, which turns out to be quite different from the determination obtained using the CMB, large-scale structure, and the so-called inverse cosmic distance ladder. So Adam will tell us about his research. Let me also introduce Licia, who will speak later. Licia has been part of the WMAP team; she is an expert in the CMB and the determination of cosmological parameters from the CMB, and since then she has also worked in large-scale structure. In general she devotes her research to developing accurate and precise statistical tools for the analysis of cosmology. So, I give the stage to the speakers. Please start.

Okay, well, it's very nice to be here. I guess I'm in Rome; it feels like my house, but that's okay. I'm going to tell you about recent work measuring the Hubble constant, the expansion rate of the universe, particularly some new calibrations based on Gaia, the European Space Agency satellite. If you want to read about the latest results, I reference a paper here at the bottom, and this is work on behalf of the SH0ES team. Since this is the first talk, let me start at the beginning. After a century of cosmological research, and really an era of precision cosmology just in the last 20 years, we have arrived at a standard model of cosmology, vanilla Lambda CDM, which is very powerful at describing the universe. It is described by six free parameters and a number of ansätze. Now, while we have a lot to celebrate with this model that describes so much, we must not forget that 95% of the present contents of the model is still in dark components: dark matter, dark energy, even dark radiation, about whose microphysics we don't really have a good idea. So we have to remain vigilant and continue to test the model, and I'm going to tell you about what is perhaps the grandest end-to-end test of the model of cosmology, which is to make a measurement at the beginning and use it to make a prediction of the state of the universe at the end. So we start out with the vanilla Lambda CDM model, and this is more or less how it would have looked before recombination, dominated by dark matter, atoms, photons, and neutrinos, and we use the model to predict the physical size of fluctuations in the plasma of the early universe, foremost of which is the sound horizon, the distance a sound wave can travel from the Big Bang to recombination. We can also use the model to predict omega baryon, the baryon density. We then compare those
predictions to the angular scale of fluctuations observed in the cosmic microwave background, or we can compare them to the primordial baryon density as indicated by deuterium abundance measurements, and in that comparison between prediction and observation we essentially calibrate the six free parameters of the model. We then use the model to describe how the expansion of the universe evolves over time, and we can check some aspects of the prediction, in particular using high-redshift baryon acoustic oscillations and high-redshift supernovae to witness, for example, the recent accelerating phase. Finally the model evolves to look more or less like you see on the lower left, where it is dominated by dark energy and dark matter, again using the most vanilla descriptions of dark matter and dark energy, and finally we arrive at a prediction of the expansion rate of the universe. The gold standard of that prediction is from the Planck facility, and it is 67.4 plus or minus 0.5 kilometers per second per megaparsec, so a very precise prediction, given the physics of the standard model, of what the expansion rate of the universe should be today.

A powerful test of this whole story, including the physics of the dark sector, is to make a comparably precise measurement of the expansion rate, and so my colleagues and I started a project to do that some 15 years ago, the SH0ES project. The idea was to use the gold standard of tools for measuring the local expansion rate, that is, to use geometry to calibrate pulsating stars, Cepheid variables, whose luminosities can exceed a hundred thousand times the luminosity of the Sun and whose period corresponds very closely to their luminosity, and to use those to calibrate Type Ia supernovae, which are exploding white dwarf stars approaching the Chandrasekhar limit. Now, we saw a number of ways to improve the precision of the Hubble constant over the prior generation, as I will describe throughout the talk: particularly by collecting data in a way that is differential, or consistent, along the distance ladder, but also by making our observations in the near infrared to reduce the impact of dust in the universe, and by carefully keeping track of the covariance between data and models so that we get a realistic error budget at the end, keeping track of the interdependencies of the parameters. Finally, I want to mention that this is not the HST Key Project, which was the first iteration of measuring the Hubble constant; this was not a key project at all, but rather a series of 17 separate proposals over the last 17 years, using about a thousand orbits on the Hubble Space Telescope.

So let me start at the beginning and remind you what a distance ladder is and is not. A distance ladder in principle is empirical and very simple: as long as measurements are made consistently, there is no model dependence or reference to any astrophysical theory. We start at the anchor step, the first step, where we measure distances geometrically to Cepheid variables or any other kind of star; this is generally done on scales of kiloparsecs or megaparsecs. In the second step we observe those same kinds of stars with the Hubble Space Telescope in the hosts of nearby Type Ia supernovae, usually at distances of 10 to 40 megaparsecs. And finally we can see Type Ia supernovae well out into the Hubble flow, where their redshifts and now-calibrated distances are used to measure the value of the Hubble constant.
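To make the logic of the three rungs concrete, here is a minimal numerical sketch of how such a ladder chains together. All of the inputs (the anchor distance, the apparent magnitudes, and the Hubble-flow redshift) are hypothetical round numbers chosen only for illustration, not SH0ES values, and the real analysis is a single simultaneous fit rather than this sequential calculation.

```python
import math

# Rung 1: a geometric anchor gives a Cepheid's absolute magnitude.
# Hypothetical numbers: a Cepheid at 2.0 kpc with apparent magnitude 5.5.
d_anchor_pc = 2000.0
m_ceph_anchor = 5.5
mu_anchor = 5.0 * math.log10(d_anchor_pc) - 5.0      # distance modulus of the anchor
M_ceph = m_ceph_anchor - mu_anchor                   # Cepheid absolute magnitude

# Rung 2: the same kind of Cepheid seen in a supernova host galaxy gives
# that galaxy's distance, which calibrates the SN Ia absolute magnitude.
m_ceph_host = 25.0      # hypothetical Cepheid apparent magnitude in the SN host
mu_host = m_ceph_host - M_ceph
m_sn_host = 11.7        # hypothetical SN Ia peak apparent magnitude in that host
M_sn = m_sn_host - mu_host

# Rung 3: SNe Ia in the Hubble flow, with M_sn now calibrated, give H0 ~ cz / d.
z_flow, m_sn_flow = 0.05, 17.26                      # hypothetical Hubble-flow SN
mu_flow = m_sn_flow - M_sn
d_flow_Mpc = 10.0 ** (mu_flow / 5.0 - 5.0)           # distance in Mpc from the modulus
c_km_s = 299792.458
H0 = c_km_s * z_flow / d_flow_Mpc
print(f"M_ceph = {M_ceph:.2f}, M_sn = {M_sn:.2f}, H0 ~ {H0:.1f} km/s/Mpc")
```

The point of the sketch is only that every quantity downstream of the anchor inherits its calibration, which is why the geometric first rung matters so much.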
So, a couple of important points to emphasize. There is no astrophysical modeling of the objects; they are treated as empirical standard candles. There is no reference to general relativity or the cosmological model, except for the fact that we don't make the measurements quite at redshift zero but at some finite redshift, characteristically about 0.05, so there is very little dependence on the cosmological model. However, the most important thing is to make these measurements in a consistent way, particularly, when we measure the same kind of object across the distance ladder, to use the same facilities, which is what we have made a great effort to do.

So let me start with the first rung. The simple-minded way of thinking about this, and often what we actually do, is to measure the parallax: just the motion of the Earth around the Sun over the course of a year. Measuring the angle through which the target of interest appears to move allows us to measure the distance geometrically. The only challenge is that the stars we are interested in, Cepheid variables, tend to be at least kiloparsecs away, so their parallaxes become fractions of a milliarcsecond. Even on the Hubble Space Telescope, which has very small pixels, 40 milliarcseconds across, we would have to measure this motion to a small fraction of a pixel, so this becomes a technically challenging problem. But the situation has improved a lot in the last 10 years. I'm going to tell you about one novel approach that we developed with the Hubble Space Telescope about five years ago, which was to improve our ability to measure parallax by improving the precision with which we can centroid the position of a star, so that we can measure its change in astrometry over time. The problem is that in a normal staring-mode image you can only centroid a star to about a hundredth of a pixel, because of the finite number of photons you can collect. So our approach was to spatially scan the telescope during the observations, so that we can compare the position of a target star to a reference star over and over and essentially beat down the errors in the centroiding. I literally mean that we scan the telescope during the observations, so that we cover more pixels and collect more information; here is a tiny patch of one of those images. This gives us great leverage in the ability to measure the positions of the stars. I could spend the rest of the time on this technique, but let me cut to the chase: after four years of making these measurements, observing about every six months to catch the back-and-forth motion, we were able to measure the parallaxes of eight Milky Way Cepheids at distances of a few kiloparsecs, with a mean precision in distance of about three percent. This is something we published a couple of years ago, and it is what I will call approach one: using HST spatial scanning to measure parallaxes. The new gold standard for measuring parallaxes comes from the ESA Gaia satellite, which launched in 2013 and now, with its third data release, has reached and really exceeded the precision of the technique I just described, but can do it all over the sky. So we identified 75 Milky Way Cepheids whose fluxes we could observe with the Hubble Space Telescope, to put them on the same flux scale as the distant Cepheids, but with the parallaxes measured directly with Gaia.
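As a rough illustration of why sub-milliarcsecond astrometry is needed here (the distance and the target precision below are illustrative round numbers, not the actual SH0ES targets):

```python
# Parallax-to-distance conversion: d[pc] = 1 / parallax[arcsec] = 1000 / parallax[mas].
# To first order, the fractional distance error equals the fractional parallax error.
d_kpc = 2.0                               # illustrative Cepheid distance
parallax_mas = 1.0 / d_kpc                # = 0.5 milliarcseconds
target_precision = 0.03                   # aim for ~3% in distance
sigma_parallax_mas = target_precision * parallax_mas
print(f"parallax = {parallax_mas:.3f} mas, "
      f"needed precision ~ {sigma_parallax_mas * 1000:.0f} microarcseconds")
# With ~40 mas HST pixels, 15 microarcseconds is roughly 1/2700 of a pixel, which
# is why averaging many independent centroid measurements (e.g. via spatial
# scanning) helps: the error shrinks roughly as 1/sqrt(N) of the samples.
```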
The results I'm going to talk about, ultimately about the Hubble constant, are based on this approach, published just a couple of months ago with the third data release, which brings the mean uncertainty on the extragalactic distance scale to better than one percent. Combining these two methods, and actually a previous third method which used the Fine Guidance Sensor on the Hubble Space Telescope, calibrates the period-luminosity relation and gives us the absolute luminosities and the ability to measure absolute distances. In particular, the two new approaches, spatial scanning on Hubble and Gaia data release 3, allow us to reach the same longer periods that we see for the Cepheids in the supernova hosts, and give us measurements on the same flux scale as the distant Cepheids. When Gaia reaches its final data release, it is expected to allow us to calibrate the Hubble constant to about 0.4 percent, so there is more to come from what Gaia can deliver, but let me show you what we get from it now.

Now, parallax is not the only source of geometric calibration of Cepheids, although at this point it is probably the most powerful. Another geometric technique that is also quite powerful is to measure the distance to what are called detached eclipsing binaries, particularly in the Large Magellanic Cloud. This is when a star orbits another star and goes through eclipses: you can time the eclipses and measure the radial velocity of the orbiting star, which allows you to measure the physical size of the host star; then you can measure the angular size through a calibration between the interferometric size and the color of the star, and once you have an angular size and a physical size you get a distance. A third method has been to observe the Keplerian motion of water masers in orbit around the supermassive black hole in the galaxy NGC 4258. All three of these methods are quite independent, and what has really changed the story of measuring the Hubble constant in the last five to ten years has been the great improvement in this step, the anchor step: each of these is now good to one to one and a half percent, which makes them both powerful and allows for cross-checks.

Let me move on to the second step in the distance ladder, which is observing Cepheids in the hosts of Type Ia supernovae. This is really, I would say, the rate-limiting step: each Type Ia supernova is good to about 6 percent, so 6 percent over the square root of the number that you can reach ultimately limits our precision in H0. Up until about 2016 we had observed 19 of these objects, and with our new sample coming out later this year we expect to double that sample to 38, or maybe even 40, and to realize a complete sample of all such calibratable, or measurable, hosts of Type Ia supernovae with Cepheids within a redshift of 0.01. Here are the period-luminosity relations for the Cepheids in the first 19 hosts of Type Ia supernovae, as well as the three anchor galaxies that are used to geometrically calibrate the Cepheids. We observe the Cepheids in three bands, visual, infrared, and near infrared, which allows us to calibrate and remove any reddening by dust in the host galaxies. Now, I mentioned this at the beginning, but I want to really emphasize that the way we have been able to reduce systematic uncertainties over past measurements is by measuring the fluxes of the Cepheid variables with the same telescope, the same instrument, and the same filters, between the anchor galaxies, where they are geometrically calibrated, and the supernova hosts.
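Since each calibrator supernova contributes roughly six percent scatter, as noted above, the contribution of this rung to the error budget scales as 6%/sqrt(N); a quick check of the sample sizes mentioned in the talk:

```python
import math

sigma_per_sn = 0.06                  # ~6% distance scatter per Type Ia supernova
for n_calibrators in (19, 38, 40):   # sample sizes mentioned in the talk
    sigma_rung = sigma_per_sn / math.sqrt(n_calibrators)
    print(f"N = {n_calibrators:2d}  ->  ~{100 * sigma_rung:.1f}% from this rung")
# N = 19 gives ~1.4%; doubling the sample brings it down to roughly 1%.
```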
We have now done this for all the geometric anchors, NGC 4258, the Milky Way, and the Large Magellanic Cloud, as well as by observing Cepheids over the same range of period and metallicity. What you see over on the right are composite light curves, stacking all the Cepheids in the individual host galaxies as well as in the anchor galaxies. The other way we have been able to reduce systematic uncertainties is by observing in the near infrared, to reduce the effects of differential dust, and actually to remove reddening effects by measuring their effect on color. The value of observing in the near infrared is best seen in the Large Magellanic Cloud, where Henrietta Leavitt first observed the power of Cepheid variables: here are the period-luminosity relations as you go from the optical at the bottom, to the near infrared, to the near infrared de-reddened by the use of colors, where you get a very tight period-luminosity relation, which tells you that you are really reducing the effects of differential extinction.

Then, in the third step, we have been able to leverage a couple of decades of work observing Type Ia supernovae well out into the Hubble flow, and to use those to measure essentially the intercept, or the characteristic brightness, of Type Ia supernovae as a function of redshift. We don't want to do this too locally, so as not to be sensitive to local flows, and not at too high a redshift, where we start to become sensitive to changes in the expansion rate and other cosmological parameters, so there is a sweet spot that we use, between redshifts of about 0.02 and 0.15. We combine all three steps in a multi-linear regression, a system of simultaneous equations, to keep track of the covariance between the data and between the model parameters, or really, because it is a linear regression, the regression parameters. This is a visualization of what that looks like, because it shows you the three nested steps in the distance ladder: there are really five independent sets of measurements that calibrate the distances to Cepheids geometrically, then the 19, and soon to be twice that number of, calibrations of Type Ia supernovae in nearby galaxies, and then a few hundred Type Ia supernovae out in the Hubble flow. As of about three months ago, with the new Gaia data release 3 calibration, we get a value of 73.2 plus or minus 1.3, so a total uncertainty of 1.8 percent, which, as you remember from the results earlier in the talk, sits in quite a bit of tension with the value predicted from Planck, about 4.2 sigma.
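Taking the two headline numbers at face value and treating them as independent Gaussian measurements, the naive significance works out to the quoted value:

```python
import math

h0_local, sig_local = 73.2, 1.3   # SH0ES with Gaia EDR3 (late universe)
h0_cmb, sig_cmb = 67.4, 0.5       # Planck plus Lambda CDM prediction
n_sigma = (h0_local - h0_cmb) / math.sqrt(sig_local**2 + sig_cmb**2)
print(f"tension ~ {n_sigma:.1f} sigma")   # about 4.2
```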
So let us delve into this tension and its robustness, or really its credibility. Let us start with the first rung. We can break out the value of the Hubble constant you would get from the different geometric sources, seven of them independently. The three primary ones, which at this point carry most of the leverage, are the Gaia parallaxes, the upper line here, which alone would give about 73, the water masers in NGC 4258, which would give about 72, and the detached eclipsing binaries, which would give about 74, but these are all mutually consistent at the level of the precision of those individual calibrations. Less precise but still very powerful cross-checks come from our own spatial-scanning parallaxes, the earlier parallaxes from the Fine Guidance Sensor on Hubble, and two other sources: this is work from Louise Breuval using not the Gaia parallaxes of the Cepheids themselves but rather of their companions, either their binary companions, shown here, or, in a number of cases where they live in a cluster of stars, the average parallax of the stars in the cluster, which is actually much more precise than the individual parallaxes. These are all internally consistent with this story to within about two sigma, for seven different measures.

Now, I really want to emphasize how much the parallaxes improved between Gaia DR2, which came in 2018, and DR3. This shows you the status of the Cepheid calibration in DR2. Because this is a distance ladder, one can either start locally with parallax measurements and end up with the Hubble constant, or one can invert the process and ask, as a function of a chosen value of the Hubble constant, what the parallaxes ought to be; it goes in either direction. So I am showing you that version: varying the Hubble constant on the x-axis gives the predicted parallaxes, and on the y-axis are the Gaia observed parallaxes. We had fewer Cepheids observed with Hubble in 2018, so this was with about 50 objects, and you could certainly see the data preferring the high value of the Hubble constant, but there was a fair bit of scatter and the precision was not as good as we would like. This is now with data release 3, and also an enlarged sample of Hubble observations of Cepheids, and now you can see, particularly if you look at the residual space, that the preference is very strong; this is why I say we get to a one percent calibration. Just jumping between these, if you look at the low-parallax end, where the sample has not changed, you can see the improvement in scatter; this is all coming from improvements in Gaia, and so Gaia has become a very strong anchor for the distance ladder.

Now, the size of the tension corresponds to about a two-tenths of a magnitude shift, or difference, in any one of these three rungs, which is strongly ruled out by the statistics, but it is also important to look at systematics, and we have done 23 variants of the primary analysis to consider further-ranging systematics: changing the reddening law (there is not much reddening because we are in the near infrared, but still there is a little bit), changing how we fit the period-luminosity relation, whether it is a single slope or two slopes, whether we exclude Cepheids at the short-period end or the long-period end, whether we start the Hubble flow at lower redshifts, whether we use different supernova light-curve fitters, whether we limit the hosts of the supernovae to just late-type galaxies, to match the nearby Cepheid hosts, or to only locally star-forming galaxies. We see small variations, which we quantify in our systematic error budget, but it does not have much impact on the tension overall.
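The two-tenths-of-a-magnitude statement above translates directly into a fractional distance, and hence H0, shift, since distances enter as 10 to the power of 0.2 times the distance modulus:

```python
delta_mag = 0.2                                  # magnitude offset in any single rung
frac_shift = 10.0 ** (0.2 * delta_mag) - 1.0
print(f"0.2 mag corresponds to a ~{100 * frac_shift:.1f}% shift in distance / H0")
# ~9.6%, i.e. roughly the size of the 67.4 -> 73.2 discrepancy
```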
Now, here is a list of questions that you might want to ask me later; however, I have been asked them a number of times, so I am going to anticipate at least these four or five, and maybe you will have other ones later. Something I am frequently asked: could we live in a giant void, which makes the local Hubble expansion look nine percent larger? Here we can look either to large-scale structure theory, where people have done N-body simulations, placing supernova hosts at the right redshifts but embedding them in the simulations, or we can look at the Type Ia supernova magnitude-redshift relation itself for an indication of a giant void, and we can place a very strong limit: the kind of uncertainty in the Hubble constant due to the variance of our location could be about 0.6 percent, but that would be some fifteen or twenty sigma away from the tension we see. Could the telescope and instrument be non-linear, causing it to look like there is a change in the Hubble constant? We have calibrated the instrument on Hubble and it is very linear; non-linearities could produce maybe up to 0.3 percent in the Hubble constant across 15 magnitudes, but this does not look like a plausible source. Could Cepheid crowding compromise their accuracy? We account for crowding, but we have also done an additional independent test of it: it turns out that unrecognized crowding of the Cepheids would not only change the brightness of the Cepheids, it would also reduce the amplitudes of their light curves, and in fact the amplitudes match the Milky Way counterparts, which allows us to put a very strong limit on any additional crowding, about two percent, again much smaller than the tension. Could there be a difference in the Type Ia supernovae at the two ends of the distance ladder? We have done a careful study of correlations of the residuals in the Hubble diagram versus properties of their host galaxies, and we can place limits of about 0.3 percent on the resulting uncertainty in the Hubble constant.

I also want to tell you about a new and really interesting probe of Cepheid physics, if one wondered whether the Cepheids themselves could somehow be different in nearby galaxies, like the Milky Way, the Large Magellanic Cloud, and NGC 4258, compared with the slightly more distant galaxies where we observe Type Ia supernovae. Here we can refer to a roughly hundred-year-old study: you have certainly heard of the Hertzsprung-Russell diagram, and Hertzsprung also recognized what is called the Hertzsprung progression, which is a gradual change in the shape of a Cepheid light curve as a function of its period; it is a very good probe of the specific physics of Cepheid variables. When we look at the extragalactic Cepheids we generally cannot see the shape of individual light curves very well, because the signal-to-noise is low, but we have now stacked all the extragalactic Cepheids in period bins to compare them to their Milky Way counterparts, and the Hertzsprung progression becomes very clear. At very short periods you see the asymmetric light curve of a Milky Way Cepheid, here at seven days, which is characteristic of a star oscillating in the fundamental mode; you also see a secondary bump, which has recently been understood to be a two-to-one resonance between the fundamental oscillation and the second overtone, and that bump migrates in phase as you go to longer periods. When you stack the light curves you see very much the same shape; then you go to the next period bin, where the secondary bump has merged with the primary bump, and that is seen to be consistent with the corresponding period bin for the extragalactic Cepheids.
Then, going to slightly longer periods, the bump moves even earlier in phase, onto the ascending branch of the light curve, which is also seen in the extragalactic Cepheids; then it finally reaches the phase of minimum, which again matches what is seen in the extragalactic Cepheids; then the bump is gone and we get the characteristic sawtooth light curve of a Cepheid variable, which continues as you go to longer periods, after which point the amplitude starts to decrease. This is seen as well, particularly as you go to very long periods: here at 50 days you can see the greatly reduced amplitude, and then finally, in the last panel, the longest-period Cepheids. So this whole sequence, which as I said is a probe of the physics of the Cepheids, is really matched, giving us no reason to doubt that distant Cepheids are like the ones in nearby galaxies.

I am also frequently asked whether we are the only group seeing this, and the answer is no. The approach of using Cepheids certainly goes back 100 years, and it has also been the most heavily used approach in the last 20 years for measuring long-range distances, and I have compiled here all the studies using Cepheids to calibrate the Hubble constant, usually with Type Ia supernovae. This was done back in the original Key Project in 2001; the recalibration of the Key Project by Wendy Freedman and collaborators gave 74 plus or minus 2, and the Cepheid data for the Key Project were measured by a different team, with a different set of data, using different instruments on Hubble at different wavelengths. Our own data, all of which we publish, have been reanalyzed many times by different groups of people, including people who work on Planck, by groups using different kinds of statistics to make the measurements; it has even become a graduate student problem set, and it always yields a value between about 71 and 75, so that is very consistent with what we see. Just a couple of weeks ago a new team announced that they had spent two years going back, not starting at the photometry like most of these groups do, but actually going back to the original Hubble pixels for the SH0ES data, our team's data, and intentionally using different methods at every step to measure the photometry and do the analysis, and they get distances that agree to about one percent with what we see, so this is a very powerful cross-check as well.

Then of course there is the question of other groups using other techniques. Here I direct you to a couple of recent reviews, one by Licia Verde, who will speak after me, from a conference we hosted about two years ago, and a more recent compilation by Eleonora Di Valentino, shown here for the higher-precision measurements, the ones that are better than four percent. Here again you see the reinforcement of the tension, both in the early universe, where there is internal consistency, and in the late universe, using Cepheids, using the tip of the red giant branch, using masers directly in the Hubble flow, Tully-Fisher, a recent study I would recommend you take a look at by John Blakeslee and collaborators using Hubble data for the surface brightness fluctuation technique, now in the near infrared, and lensing. You can look at, or even remove, any one of these, and you end up with tension somewhere in the four-to-six-sigma range, depending on which combination you make.
To get an independent combination, it is generally better, if you are going to use a Type Ia supernova distance ladder, to use only one of Cepheids, the tip of the red giant branch, or Mira variables, but whichever one you leave out, as I said, you could end up as low as four or as high as six sigma; it is hard to avoid this. Of course, what I have mostly been talking about are these latest results from our team, shown here, but they are overall in good agreement with the mean, at least, of the late-universe results. Let me also mention another recent contribution from Gaia data release 3 that helps with calibrating the tip of the red giant branch; I think in the future the community will be using Gaia to calibrate and recalibrate many nearby star types that are used to measure distances. This work comes from John Soltis, a graduate student working with me and Stefano Casertano, and the point is that the Gaia parallaxes are now good enough to directly measure the parallax to Omega Centauri, which is the biggest, most massive globular cluster in the Milky Way; it is also the only one with a fully intact red giant branch all the way up to the tip, so it gives a direct calibration of the tip of the red giant branch. You can identify some 67,000 stars from the Gaia data in and around the cluster using proper motion and parallax, and then you can measure the parallax from the Gaia data as a function of brightness. There should not be a dependence on brightness, so this is really a verification that the systematics are now reduced in Gaia, and what we see is a consistent parallax of 0.191 milliarcseconds, in good agreement with three other groups which have also measured the parallax to Omega Centauri. That gives a somewhat fainter tip, consistent at about one and a half sigma with the prior calibration, and it gives a Hubble constant of about 72, so the Gaia calibration of the tip of the red giant branch is not helping in terms of the tension, but rather seems to suggest that we remain stuck in it.
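As a small worked example of how a geometric parallax like this feeds a tip-of-the-red-giant-branch calibration: the 0.191 milliarcsecond parallax is the value quoted above, but the apparent tip magnitude below is a hypothetical placeholder, so this is only a sketch of the arithmetic, not the Soltis et al. result.

```python
import math

# Gaia parallax of Omega Centauri quoted in the talk (milliarcseconds).
parallax_mas = 0.191
d_pc = 1000.0 / parallax_mas                    # ~5.2 kpc
mu_omega_cen = 5.0 * math.log10(d_pc) - 5.0     # geometric distance modulus

# Hypothetical apparent magnitude of the red giant branch tip in the cluster;
# subtracting the geometric distance modulus calibrates the TRGB standard candle.
m_tip_apparent = 9.6                            # placeholder value, not from the talk
M_tip = m_tip_apparent - mu_omega_cen
print(f"d ~ {d_pc / 1000:.2f} kpc, mu ~ {mu_omega_cen:.2f}, M_tip ~ {M_tip:.2f}")
```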
Now, I don't know if Licia will get into this, but the Hubble constant tension is the one you may have heard the most about, because I think it is the most significant; the sort of little brother or little sister tension that people are also looking at comes from sigma-8, the measure of the present clumpiness of matter, which is also predicted from the Planck data in concert with Lambda CDM, and which has been coming out about three sigma lower as observed in the late universe than predicted from the early universe. This is seen in lensing data from the KiDS survey combined with DES, and it is also seen in the peculiar velocity data from SDSS and 6dF, so it is worth keeping in mind, if there are interesting resolutions to the Hubble tension, whether they could resolve both of these.

That gets to the question of what is causing this tension, and my answer is that I really don't know; maybe Licia knows and will say in just a little bit, which would be a great relief, but I won't pretend to know. I will mention that the possibility of new physics is tempting, and has tempted hundreds of papers, most of which focus on the options that look the most successful, at least: early dark energy, weird neutrinos, decaying dark matter. There have been a few different surveys of the solution space recently; I direct you to this one, again by Eleonora Di Valentino. I will just briefly say that the ones that look worse generally involve late-time solutions, swampland-inspired models, w less than minus one, decaying dark matter; the ones that look a little better are strong neutrino interactions, early dark energy, evolving electron masses, and changing when recombination occurs. The best idea, I think, is the one yet to be found, because none of these look obvious; none of them look head and shoulders better than the others. A nice description also comes from the review "Hubble Hunter's Guide" by Knox and Millea, who describe what they call the most likely, or least unlikely, solutions as early-universe solutions: ones that change the physics of the early universe in a way that increases the expansion rate before recombination, makes the universe become transparent earlier, and makes the sound horizon smaller by five to eight percent. The various mechanisms I described are often effectively doing this, injecting energy in the early universe or, through interacting neutrinos, keeping neutrinos at bay before they free-stream, which does the same thing. There are some claims in the literature that you get better fits to the CMB; I think there are cases where you don't get worse fits, but this is an area of great work and interest, and I don't really know what the answer is.

However, I want to answer another question that you might be wondering about: if we don't understand the physics that is causing this, can we really believe the measurements? I think the answer is yes; this is often the way science works. We see an observation and it sticks with us while we look for a solution; it might take a short time, it might take a long time, but what is important is that just because we don't have an explanation now, we should not lose sight of it or ignore it. There have been many times in the past when we have learned very important things from things that didn't quite fit. Likewise, almost as a counterpart to that question: can we believe an explanation of what is going on without a deep hypothesis, a story of how it is going on? Here I would just point out that the present data really are getting quite good, and therefore they present quite a formidable challenge to attempts to explain this. One can wave one's arms and say, oh, it's new physics, but because we have such precise H(z) data, and the CMB is so precisely measured, most stories will fail miserably when confronted with the data. Likewise, people wave their hands and say it's systematics, but often there is no story, or if there is a story it won't be able to explain the many data sets that we have; at this point we have so many independent rungs and duplicate measurements that, short of a failure of the Copernican principle, it is hard to imagine what would do it. So I think the data have gotten very good, and at this point what we really need are specific explanations.

Now, all is not lost: we expect more information to come in, particularly from new facilities, LIGO, DESI, Roman, Rubin, Euclid, JWST, Simons Observatory, CMB-S4. All of these will give some angle on this story, whether by constraining H(z), improving the CMB data, or improving the local calibration of the Hubble constant. There are additional clues; I mentioned the sigma-8 tension, and other things come and go at the couple-of-sigma level but are worth paying attention to. The reason I am most optimistic about an eventual solution is just the recognition of how much ignorance still remains: Lambda CDM is still largely phenomenological, and that leaves a lot of room for creativity, both in the dark sector and also ultimately with a quantum theory of gravity, so perhaps people working in those areas may help us.
I do know that the level of effort has increased tremendously since the recognition of this tension: there are now ten times as many papers claiming to measure the Hubble constant as there were at the start of the tension. I don't mean to say that we are getting there ten times faster; a lot of these measurements may not be very precise or accurate, but it is clear that this has really focused people's minds. Our own last round of effort is coming up: as I said, later this year we are planning to double the sample of Cepheids and Type Ia supernovae, and to improve the calibration end as well, with more Cepheid observations in NGC 4258. We have been keeping careful track of the error budget in these measurements, and in this next, and probably close to final, round we expect to get to about 1.2 percent total uncertainty. That, coupled with many other experiments and many other takes on this, makes it a good time to stay tuned to this story.

So, my final thoughts. I think this discrepancy is real, at the five sigma level, could be four, could be six; it is hard to avoid. A simple restatement of the problem is that there are no precise measurements of the Hubble constant in the late universe coming out on the low side, below the value from the early universe; that is just another way of saying essentially the same thing, but such measurements are quite obviously missing, which is indicative of what is going on. At this point it would require, I would say, multiple catastrophic failures in the various measurements, which just becomes less likely; not impossible, but less likely. Therefore I think this is a very interesting problem, unless you have a very strong, probably more than five sigma, prior on vanilla Lambda CDM being just as it is, or you are willing to discard large amounts of data, which I think is never a productive way forward. I think the most productive way forward is to continue to focus on the evidence, be creative about ideas, and hopefully we will eventually be as clever as the universe is. So I will end there, with this lovely graphic that appeared in the particle physics magazine Symmetry, and over to Licia.

Thank you. Thank you, Adam. Are there any very short questions for Adam? Do we have a question? Yes, please.

A question, or a comment: what about Wendy Freedman's measurement of 69 kilometers per second per megaparsec using the tip of the red giant branch, can that be removed?

Oh no, I certainly wouldn't remove it. I show it, whoops, I seem to be losing my slides, yes, that value is shown here; it is the lowest of the values, which, as I said, range from about 70 to 75. I guess two comments I would make on that: there are other, and as I said new, calibrations from Gaia which indicate that the tip of the red giant branch may be a little fainter than that indication, and they also have a new paper with a couple of new measurements that has, I think, pushed their result a little bit higher. There is a range of values here, and that should certainly be included in the mix, but even in the mix it doesn't really get us below a significant level of tension.

The next comment is regarding your 2019 paper on void cosmology, which I read, and I have two comments.
First of all, by the way, there has been a great amount of literature on this over a period of years; there was an excellent review by Chris Clarkson in 2012. In your paper you seem to use FLRW Lambda CDM as the baseline, and I feel that more work should be done on studying this kind of model, which can solve the H0 tension problem. It is demanding, rather obviously, because you have to include a radiation phase in the early universe, as was done by Clarkson and collaborators, and you also have to account for the fact that the beginning of the early universe is going to be inhomogeneous; there is a great deal going on with the bang time, so using just one bang time is not enough, you have to assume several, and these things combine with other issues too, such as the boundary conditions inside the void and outside the void. Still, I can construct a model which does resolve the H0 tension and fits the angular power spectrum.

Right, so I think the issue is, if you try to solve this with a local void, on what scale, out to what redshift, are you going to place the boundary of the void, and why don't we see it in the Hubble diagram of Type Ia supernovae? The Hubble diagram of Type Ia supernovae is well populated between a redshift of zero and one. So are you going to put it beyond a redshift of one? At which point, again, if you look at large-scale structure calculations, the cosmic variance, the randomness of putting our location in a random spot in the universe, suggests that variations in the local value of the Hubble constant should be of order 0.5 percent, and if you go out to a redshift of one they drop down to order 0.1 percent; that is just so much smaller than the tension. Not to mention another problem I didn't mention with the void: if you don't place yourself directly at the center of the void, or very close to it, you will generate a very large dipole in the CMB that is not seen. So it just doesn't look plausible, from either the empirical standpoint, the Hubble diagram of Type Ia supernovae, or the theory standpoint, the large-scale structure that we see and the size of fluctuations.

Okay, thanks. Sorry, but I think it is time to go to Licia, and we will keep comments and questions for the discussion afterwards. Thank you. Please, Licia.

Okay, thank you. Let me see if this works. Can you see my screen now? Excellent. Okay, so thank you, and it is good to see everybody, even if only virtually. I will continue by talking instead about the Hubble trouble, or troubles. Let me start with a quote adapted from Carl Jung: the shoe that fits one person pinches another. This is what is summarized in this plot by José Luis Bernal and collaborators: this is the Planck measurement of the Hubble constant, this is baryon acoustic oscillations plus Big Bang nucleosynthesis, this is the tip of the red giant branch from the Chicago-Carnegie group, this is the SH0ES measurement, this is the lensing time-delay result, and this is another measurement, which I will not talk about in this talk, obtained by looking at cosmic clocks. What we need to keep in mind when we look at this kind of plot is that some measurements are highly model dependent and some measurements are model independent. For example, the Planck measurement relies heavily on assuming Lambda CDM, and Lambda CDM is composed, let us say, of two parts.
There is the early-time physics, which is physics that we believe we know well; it is linear and well understood. And there is all the late-time physics, where dark energy also starts playing a role, which is not as well understood. The BAO plus BBN measurement also relies on an assumption about the early-time physics: not necessarily a measurement, but putting BBN in there means that you know how to compute the sound horizon for your standard ruler, and then you can measure this. The local, direct measurements, on the other hand, are cosmological-model independent, and that is why, when you find a tension like this, you may start to think that maybe it has something to do with the assumptions that led you there.

For those who are not aficionados, let me go quickly through what I mean by baryon acoustic oscillations, or the standard ruler. We are relying on the same physics that gave us the peaks of the cosmic microwave background and the great statistical power for interpreting those data. These are sound waves in the early universe that propagate until matter and radiation decouple, and that imprints a scale: for the cosmic microwave background it is the sound horizon at recombination, for large-scale structure it is the sound horizon at radiation drag, and the two quantities are very closely related to each other. This is a key observable; it is very useful for measuring the geometry of the universe, and it is set by the early-universe physics, which we believe is well known, although one could also tinker with it. Cosmic microwave background observations plus early-universe physics, in a Lambda CDM universe, constrain the standard ruler length to much better than a percent; we are at the level of 0.2 percent or so, and this is where the name of the game starts.

Now things start becoming interesting, because by the time the community managed to measure the baryon acoustic oscillation signature in surveys of large-scale structure, which happened around 2005, coincidentally the same time the SH0ES project started, there had already been the extremely successful approach of measuring standard candles, namely Type Ia supernovae, which are very good at giving relative distances, because there is a large uncertainty on the absolute magnitude of the fiducial supernova, but as relative distances they are really good. So typically the cosmic distance ladder built on supernovae tends to be calibrated locally, on the H0 value, while baryon acoustic oscillations, the standard ruler, are really good at measuring absolute distances if you say that you know the length of the ruler, from early-time physics or from observations of the cosmic microwave background; they could also be used in a model-independent way, as relative distances, but for absolute distances you rely again on the early-universe physics. What happened about 10 years after the first detection of the baryon acoustic oscillations is that the surveys added enough statistical power that standard candles and standard rulers could overlap in redshift, with comparable statistical power, so now one could do the inverse cosmic distance ladder. Let me first explain the direct cosmic distance ladder: the direct cosmic distance ladder is the game that has been played so far, where H0 goes into the supernovae, and then you put that into your cosmology and get everything about the cosmology, including the sound horizon. What one can now do instead is the inverse cosmic distance ladder, which is to start from the standard ruler, the standard ruler that is seen in the BAO at low redshift, and then that gets calibrated on the supernovae, and then that can be extrapolated back to give a measurement of H0.
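One end of this inverse ladder is the ruler length itself, which under standard early-time physics is a straightforward integral: the comoving distance a sound wave travels before the drag epoch. Here is a minimal numerical sketch; the parameter values and the drag redshift are round numbers I am assuming for illustration, and a full Boltzmann-code calculation would differ at the percent level.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative vanilla parameters (assumed round numbers, not quoted in the talk).
h       = 0.674
H0      = 100.0 * h                             # km/s/Mpc
omega_b = 0.0224 / h**2                         # baryons
omega_g = 2.47e-5 / h**2                        # photons
omega_r = omega_g * (1.0 + 0.2271 * 3.046)      # photons + standard neutrinos
omega_m = 0.142 / h**2                          # total matter
omega_l = 1.0 - omega_m - omega_r               # flat universe
z_drag  = 1060.0                                # approximate drag redshift (assumed)
c_km_s  = 299792.458

def integrand(a):
    """c_s(a) / (a^2 H(a)): the comoving sound-horizon integrand in the scale factor."""
    R = 0.75 * (omega_b / omega_g) * a                       # baryon loading of the plasma
    c_s = c_km_s / np.sqrt(3.0 * (1.0 + R))                  # plasma sound speed
    H = H0 * np.sqrt(omega_m / a**3 + omega_r / a**4 + omega_l)
    return c_s / (a**2 * H)

a_drag = 1.0 / (1.0 + z_drag)
r_d, _ = quad(integrand, 1e-8, a_drag)          # tiny lower bound avoids a = 0
print(f"sound horizon at the drag epoch r_d ~ {r_d:.1f} Mpc")
# For these inputs the result lands near the familiar ~147 Mpc.
```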
Now, this reconstruction can even be done in a model-independent way: there is no growth of structure here, there is only the background expansion history, so one does not necessarily need to assume Lambda CDM; one could reconstruct H(z) in free form, say with splines or polynomials or whatever, and there is quite a lot of literature on that. This is where, if one insists on using standard early-time physics and Lambda CDM or its simple variations, these two ladders do not match. One could say, but wait, the cosmic microwave background measurement of the sound horizon mixes together late-time and early-time physics, because we see the cosmic microwave background through the entire evolution of the universe in between. It turns out that for the sound horizon this is not the case: there is a way to analyze the CMB data so that it is completely disentangled from all the late-time information, and the nice thing is that the measurement of the sound horizon does not change whether it is done in the traditional, model-dependent way or in a way that is independent of the late-time physics. But one still needs to assume standard early-time physics; if one also plays around with the early-time physics, that is where modifications of the sound horizon really come into play. Therefore, once you try to link these two scales, the H0 problem can be seen as an r_s problem. This surfaced around 2016 for the first time; this is the original plot, and Adam was also on this paper from 2016. The probes of the expansion of the universe, say BAO together with supernovae, can only give you r_s times H0 if you do not calibrate the ladder at either end, because they give you relative distances; then the cosmic microwave background can give you a measurement of r_s, and the local SH0ES measurement gives you H0, and that was the situation at that point. This game has been played over and over: over the years, as the data get better, you get better versions of this plot, and you can recast it in terms of r_s or in terms of H0.
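Here is a minimal numerical sketch of that two-way game: a single illustrative BAO data point (the D_M/r_d value at z = 0.51 below is a placeholder of roughly the right size, not a real measurement) fixes the product r_d times H0, which can then be read either way.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative BAO measurement: transverse comoving distance over the ruler length.
z_bao, DM_over_rd = 0.51, 13.4        # placeholder numbers, not a real data point
omega_m = 0.31                        # shape parameter, e.g. from SN relative distances
c_km_s = 299792.458

# D_M(z) = (c/H0) * I(z) in a flat background, so the BAO point pins down r_d * H0.
I, _ = quad(lambda z: 1.0 / np.sqrt(omega_m * (1 + z)**3 + 1 - omega_m), 0.0, z_bao)
rd_times_H0 = c_km_s * I / DM_over_rd          # Mpc * (km/s/Mpc)

# Direction 1: calibrate the ruler with early-universe physics, read off H0.
rd_early = 147.0                               # Mpc, the CMB/BBN-calibrated ruler
print(f"H0 ~ {rd_times_H0 / rd_early:.1f} km/s/Mpc with the early-universe ruler")

# Direction 2: calibrate with the local H0 instead, read off the ruler length.
H0_local = 73.2                                # km/s/Mpc, the local ladder value
print(f"r_d ~ {rd_times_H0 / H0_local:.1f} Mpc required by the local calibration")
```

With these placeholder inputs the two directions land near 68 km/s/Mpc and near 137 Mpc respectively, which is the r_s versus H0 tension restated: raising H0 to the local value requires a ruler roughly seven percent shorter.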
So, quoting Adam himself, this thing that we are doing with H0 is a little bit like threading a needle from the other side of the universe: we have a measurement at the earliest times, we have a model, and this model with this measurement gives a prediction for a quantity that can be measured locally, now, at the other end of the universe, and on both sides of this game the precision of the measurement is at the one percent level. That is how fine the needle is, and even so we are finding a mismatch of the order of four sigma or so. Good ladders need two good anchor points, and here the problem is that it seems we are having trouble with the anchors. So the first question to ask is: is there a problem? Until a few years ago there were still people saying no, maybe there is no problem, we don't know, but as of this morning even George Efstathiou would agree that there is a problem; we cannot sweep it under the carpet. How much of a problem it is depends slightly on the cosmological model: in a Lambda CDM model we know, but the moment we start deviating from Lambda CDM there are some deviations that make the issue a little bit less problematic, although we are still quite a few sigma away. The next question to ask, once we agree that there is a problem, is: where is the problem?

At the beginning, when this Hubble tension first came up, one could invoke systematics; we all have skeletons in the closet, every measurement has a skeleton in the closet. But what has happened, especially in the past three years or so, is that a lot of the measurements have been repeated by independent groups, and a lot of the analyses have been repeated by independent groups, and I am always pleasantly surprised by the degree of reproducibility that our results, especially in cosmology, are starting to have: data from large experiments are all made public, even the software is made public and associated with the data, and we have seen that several groups have redone the Planck analysis, several groups have redone the SH0ES analysis, and the results don't budge. So "systematics" is a cheap shot; it is like hiding oneself behind a finger, and that is not going to work. One possible working hypothesis is to start dividing both the observations and the interpretation of the observations into early versus late, because if we really believe that we have this standard ladder and the mismatch is in the two anchor points, we know that one part of the ladder, or at least one anchor, really depends on the early-time physics, and the rest depends on the late-time physics, and the late-time physics is much less understood than the early-time physics, if only because there is a lot of dark energy that starts playing a role. It is also, as Adam showed, that when you divide the measurements of H0 according to which depend on early-time physics and which are model independent or rely only on late-time physics, you see this dichotomy: late-time measurements hover above 70, and early-time measurements hover well below 70. So let us keep this in mind as a working hypothesis and keep asking where the problem is.

The next question is: is it in any specific data set? We may want to keep the standard Lambda CDM context here, in order not to change too many things at the same time; also because the moment we agree that there is a problem with Lambda CDM, that is big enough, in and of itself, to be a result. If we take the early-time route: for a while people were putting the blame on Planck, and then time passed, the Planck analysis was redone, decomposed, reanalyzed, rerun, everything, and the results didn't budge. What has happened in the past couple of years or so is that H0 from the early universe doesn't budge even if one takes the Planck data out, and even if one takes the CMB data out completely. Before, and I think the paper from which I took this plot is Schöneberg et al. 2019, if one wanted to drop Planck one had to use WMAP, ACT, and SPT, always some CMB observations, but it turns out that one can drop the CMB completely, and even for models where one allows an extra number of effective neutrino species we can still see the tension. So in this case, again assuming a Lambda CDM expansion history, this is the constraint in the omega-matter versus H0 plane: this is the CMB from Planck, this is just from the late-time universe, that is, baryon acoustic oscillations plus BBN and the helium abundance, and this is the local measurement of H0. I want to show the following aside, because I think it is interesting: this side is for the Lambda CDM model, and this side allows the number of effective neutrino species to be free. What was done in this paper was to take the baryon acoustic oscillations from galaxies and the baryon acoustic oscillations from the Lyman-alpha forest, and since these baryon acoustic oscillation measurements are sensitive to different distance ratios, the degeneracy gets broken, and the resulting constraint is down here.
If you do not want to believe the Lyman-alpha baryon acoustic oscillations, because at the beginning there was a two-sigma signal there that then went away with the reanalysis, you are free to do that; then you just add the supernova measurements, again as relative distances, not absolute distances, and that is where the supernovae put you in this plot. And again, this is the latest eBOSS data release, DR16, from late last year: the same thing repeats, you can do it even without cosmic microwave background data. So, what is the problem, is it in any specific data set? Well, it is not in the CMB data; all early-universe-based determinations still hover well below 70, many groups have reanalyzed the SH0ES data, and several independent low-redshift determinations also hover above 70 kilometers per second per megaparsec. As time goes on and more and more of these analyses come out, this option seems less and less likely, because you can combine the different measurements as you want and there is always this kind of dichotomy. So if the problem is not in the data, then maybe it is in the model, and since this is a GGI talk I could not stop myself from showing the cover of this very famous book: the last time somebody famous said that maybe the problem is in the model and not in the data, it did not end up well; let us see if we end up better here. So, the early-time measurements assume Lambda CDM, in the sense that they assume the early-time part of the assumptions that go into Lambda CDM, and effectively this assumption gives you the length of the standard ruler, while the low-redshift measurements do not make assumptions about cosmology; they may make other types of assumptions, but not about cosmology. That is how we end up with a plot like this, which has now been updated with more recent data compared to the first one I showed you, which was from 2016.
So where shall we look? Shall we start tinkering with the model pre-recombination or after recombination? Let us look at the pre-recombination solutions first. The slightly annoying thing about pre-recombination solutions is that you are modifying the Lambda CDM model that we all know and love right where we like it most, where it is the simplest description and where it has had its greatest success. You want to decrease the sound horizon by something like seven percent, but you have to do that without wreaking havoc on the damping tail, keeping consistency between the temperature and the polarization, and without wreaking havoc on everything else that Lambda CDM still fits so well. This is a tall order; probably not impossible, but a tall order. In this suite of papers by Knox and Millea they work through this argument very carefully, and they show, as a function of redshift, the sensitivity of r_s to changes in H(z), and basically it turns out that if you want to modify r_s you have to act there; that is your room for manoeuvre to reduce r_s, because your contribution to r_s comes from that redshift range. One of the first models proposed to do this is the early dark energy model, which keeps reappearing every so often because it has now become the comparison model when you want to change the early-time physics, and it looks something like this here, so it really acts there. Early dark energy still affects the damping tail, so you can look for the signature, and that is why, at least in that original incarnation of the model, it could ease the tension but not really fix it at all. Or you could try to change the initial conditions, or add extra components, extra interactions, or some localized energy injection; there have been models proposing strange neutrinos with strange interactions, high-temperature recombination, or even designer models where you just change the early-time H(z) to get exactly what you want. But all these possible solutions are not equivalent: they will leave their signature in the specific shape of the damping tail of the CMB, and they will have to not break the nice consistency, I repeat again, between temperature and polarization. So forthcoming data will be able to put constraints on that, but for these models there is not much wiggle room, exactly because of the quality of the data that we already have.
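To get a feel for the lever arm, here is a small variation of the sound-horizon sketch from earlier: adding extra relativistic energy density, a crude stand-in for the kind of pre-recombination modification being discussed, speeds up the early expansion and shrinks r_d. The Delta N_eff value is purely illustrative, and this toy ignores the accompanying shift in the drag redshift and everything else such a component would do to the CMB.

```python
import numpy as np
from scipy.integrate import quad

h, z_drag, c_km_s = 0.674, 1060.0, 299792.458      # assumed round numbers
omega_b, omega_g = 0.0224 / h**2, 2.47e-5 / h**2
omega_m = 0.142 / h**2

def sound_horizon(delta_neff):
    """r_d in Mpc with extra relativistic species on top of the standard 3.046."""
    omega_r = omega_g * (1.0 + 0.2271 * (3.046 + delta_neff))
    omega_l = 1.0 - omega_m - omega_r
    def integrand(a):
        R = 0.75 * (omega_b / omega_g) * a
        c_s = c_km_s / np.sqrt(3.0 * (1.0 + R))
        H = 100.0 * h * np.sqrt(omega_m / a**3 + omega_r / a**4 + omega_l)
        return c_s / (a**2 * H)
    return quad(integrand, 1e-8, 1.0 / (1.0 + z_drag))[0]

rd_std, rd_extra = sound_horizon(0.0), sound_horizon(1.0)
print(f"r_d: {rd_std:.1f} -> {rd_extra:.1f} Mpc "
      f"({100 * (rd_extra / rd_std - 1):.1f}% change)")
```

In this crude toy even a full extra neutrino-like species only buys a few percent, which is one way of seeing why a seven percent reduction that does not damage the damping tail is such a tall order.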
But when your solution pushes all the changes to where there are no data, you should be a little bit suspicious. It is also very hard: if you try to act pre-recombination you need to change r_s by about seven percent, and if you want to do it post-recombination — by mimicking the idea that the r_s, the sound horizon that you see in the BAO, is not the same as the one imprinted by early-time physics — you end up having to tinker with omega baryon and omega matter by 20 or 30 percent. So it's hard to just mess around with the standard ruler as seen in the BAO.

How much wiggle room is there in late-time physics? As I said, you can do an H(z) divided by H naught reconstruction, and this game has been played a lot since 2015-2016. This is the latest incarnation, from a preprint that came out last month by José Luis Bernal and collaborators: these are spline reconstructions using BAO and supernovae. Look at the range — this is minus five percent to plus five percent compared to the standard Lambda CDM model. The red shaded region is the prediction from Planck for a Lambda CDM model, and the blue shaded region is a Lambda CDM fit to the same low-redshift data. The interesting thing here is this: the sound horizon at radiation drag times little h, which is the model-independent quantity that BAO plus supernovae measure. In a generic reconstruction it is this, and the error bar — and also the central value — one gets from a generic reconstruction is very consistent with the value one gets assuming the Lambda CDM model, and this is the comparison with the CMB. So in this sense all the observables are consistent with each other even if one drops the assumption of Lambda CDM. The other thing is that there is not much room here; this is just the freedom allowed by the statistical power of the data and by the number of free parameters thrown into the reconstruction, but these are oscillations at the level of five percent, while the solution we are looking for is something well above five percent.

But the argument I want to make is that this is not just an H naught problem or an r_d problem: it is an omega matter problem too, and an age problem too. In this plot, if I now plot h squared versus omega matter, assuming a Lambda CDM model, the CMB puts you here, the direct H naught measurement puts you here, and BAO plus supernovae — again without assuming any calibration — put you here. So if you believe this, then omega matter should be much lower than what the late-time universe tells you it should be; we also have an omega matter problem. And the other quantity that comes along is the age of the universe. This is, for a Lambda CDM model, omega matter versus H naught versus the age of the universe, and this is fixing the standard Lambda CDM but allowing the equation-of-state parameter of dark energy to vary. So how old is the universe anyway? Here I'm showing the connection between H naught and the age of the universe, and for you to see the Planck measurement here I had to blow it up by hand, because the Planck contour is so tiny that on a normal slide you wouldn't see it — but you see that an H naught problem becomes an age problem too. So now we go back to the nineties, when stellar ages were used as a tool to measure the age of the universe.
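To put rough numbers on the "it is an omega matter problem and an age problem too" statement, here is a minimal sketch with illustrative round values standing in for the Planck, SH0ES and BAO+SNe constraints, not the actual published posteriors. Within flat Lambda CDM the CMB essentially fixes the physical density Omega_m h², so forcing h up toward 0.73 drags the inferred Omega_m down, and the age t_U = ∫ dz / [(1+z) H(z)] comes down with it.

```python
import numpy as np
from scipy.integrate import quad

KM_PER_MPC  = 3.0857e19   # km in one megaparsec
SEC_PER_GYR = 3.1557e16   # seconds in one gigayear

def age_gyr(h, om_m):
    """Age of the Universe in Gyr for flat Lambda CDM: t_U = int dz / [(1+z) H(z)]."""
    H0_per_s = 100.0 * h / KM_PER_MPC
    E = lambda z: np.sqrt(om_m * (1.0 + z)**3 + (1.0 - om_m))
    integral, _ = quad(lambda z: 1.0 / ((1.0 + z) * E(z)), 0.0, np.inf)
    return integral / H0_per_s / SEC_PER_GYR

om_m_h2 = 0.143   # illustrative stand-in for the CMB-fixed physical matter density

for h, label in [(0.674, "CMB-like h    "), (0.730, "ladder-like h ")]:
    om_m = om_m_h2 / h**2
    print(f"{label} h={h:.3f}  Omega_m={om_m:.3f}  t_U={age_gyr(h, om_m):.2f} Gyr")

# A BAO+SNe-like shape constraint instead prefers Omega_m near 0.3 whatever h is:
print(f"shape + high h h=0.730  Omega_m=0.300  t_U={age_gyr(0.730, 0.300):.2f} Gyr")
```

The roughly gigayear-level spread between these toy cases is the sense in which sufficiently precise absolute stellar ages at redshift zero can act as an independent arbiter.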
So if one can measure absolute stellar ages, especially of very old objects at redshift zero, then one can provide an estimate of the current expansion rate, or an estimate of the age of the universe. This relies on knowing the other background cosmological parameters — that is, the shape of the expansion history — although the age is weighted heavily at low redshift, so if one messes around with high redshift the age will not change much, if at all. We tried to do this as a proof of principle a couple of years ago by looking at published estimates of the age of the universe from very low-metallicity stars, or from the ages of old globular clusters, but the error bars were still large enough to give only a hint that this could be useful in the future — not really conclusive. So we decided to analyze a wider set of data and propagate the systematics and the statistics ourselves, to have everything under our own control; this is part of the PhD thesis of David Valcin, and this is what we came up with: a reanalysis of 22 old globular clusters, and the resulting estimate of the age of the universe comes out to be this, which in this plot lands there. So if we believe this measurement, it seems that Lambda CDM acts its age, but not its shoe (SH0ES) size.

So now we are here and we are looking for a Cinderella: we are looking for a cosmological model that can fit all these measurements, and in particular can fit the SH0ES one, which is very difficult to fit. Where do we go from here? I don't have a solution, but let's see if we can spell out the characteristics a solution should have. In this plot we again have H naught versus the age of the universe; here is the latest determination that I showed you, this is the determination from BAO plus supernovae, this is Planck, and this is the early dark energy solution, which tweaks the sound horizon at radiation drag — but you see this solution can't do too much, because at some point you have not just the first peak but all the other peaks and the whole shape of the angular power spectrum constraining you there. And this is the published measurement from the tip of the red giant branch.

This reminds me of another plot that was published in 1999, when cosmologists were grappling with the idea of having to introduce something not so palatable to the standard Einstein-de Sitter model, namely the cosmological constant. That triangle plot was introduced because, when you have three quantities that need to sum to one, there is a way to plot them — this kind of plot — to show that yes, you had to add a lambda, and then all the pieces of the puzzle come together. Now what we have here are quantities that get multiplied instead of added, but we can still make this kind of plot: we just take the log, and then a plot like this says that, if you have the right model when you interpret your observations, all the constraints should meet somewhere; if there is a mismatch and they don't meet, then you don't have the right model to interpret them. So we call this the new cosmic triangle — again, the first author of the paper is José Luis Bernal. In this plot the triad is the age of the universe, the age times H naught, and H naught: H naught can be measured directly, the age can be estimated from old objects, and in a Lambda CDM model the product is what is given by the expansion history from BAO — this combination of BAO plus supernovae.
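The "take the log" step is just the observation that a multiplicative consistency relation becomes an additive one, so the three constraints of a triad can be drawn on a ternary-style plot exactly as the 1999 cosmic triangle did for quantities that sum to one. Here is a minimal sketch of the closure test for the age triad, using illustrative stand-in numbers (a globular-cluster-like age, a distance-ladder-like H0, and a BAO+SNe-like shape), not the actual posteriors from the Bernal et al. paper.

```python
import numpy as np
from scipy.integrate import quad

GYR_IN_S  = 3.1557e16
MPC_IN_KM = 3.0857e19

# Illustrative stand-ins for the three legs of the age triad
t_U_gyr = 13.5    # an absolute stellar-age estimate
H0_km_s = 73.0    # a local, distance-ladder H0

def t_times_H0(om_m):
    """Dimensionless product t_U * H0 predicted by a flat Lambda CDM *shape*."""
    E = lambda z: np.sqrt(om_m * (1.0 + z)**3 + (1.0 - om_m))
    val, _ = quad(lambda z: 1.0 / ((1.0 + z) * E(z)), 0.0, np.inf)
    return val

tH0_shape  = t_times_H0(0.30)                               # BAO+SNe-like shape
tH0_direct = (t_U_gyr * GYR_IN_S) * (H0_km_s / MPC_IN_KM)   # product of the two direct legs

# Closure condition of the 'new cosmic triangle', written additively in log space:
# log(t_U) + log(H0) - log(t_U * H0) = 0 if the model interpreting the data is right.
print(f"t_U * H0 from the two direct measurements : {tH0_direct:.3f}")
print(f"t_U * H0 from the expansion-history shape : {tH0_shape:.3f}")
print(f"log10 closure mismatch                    : "
      f"{np.log10(tH0_direct) - np.log10(tH0_shape):+.3f}")
```

With these particular stand-in numbers the triangle very nearly closes; the tension discussed in the talk arises because the actual age, H0, and shape constraints do not all prefer such mutually compatible values at once.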
And you can see that in this kind of plot, adding early dark energy doesn't really help: with BAO plus supernovae it doesn't do anything to the age, and it doesn't really get you there to solve the tension you would have with the SH0ES measurement. But we have other triads. The next triad is the one we saw before: the sound horizon at radiation drag times h, the sound horizon at radiation drag, and h. The h can be measured directly — that is the SH0ES measurement; this is what the cosmic microwave background gives you, which depending on the model could be this one or this one — now you see what happens when you change the expansion rate just before recombination, this is a big difference here — and this comes from BAO plus supernovae, which is the way the standard ruler is seen by those measurements. And finally the other triad is omega matter h squared, h squared, and omega matter: omega matter is given by BAO plus supernovae, omega matter h squared is given by the cosmic microwave background, and h squared is given, well, by the measurement of h. And you can see that in this projection, messing around with early dark energy, or with the expansion history just before recombination, does something, but it doesn't really help.

So let's go back to theoretical solutions. They should not wreak havoc where havoc is not needed: they should preserve all the good agreement that the Lambda CDM model has with the data at the moment, and they should improve — or at least not worsen — the other tensions, among them the sigma-eight one, which is at the two-point-something-sigma level; you should not turn it into a four-sigma tension while trying to fix something else. We should also try to quantify improvement versus predictability — this is a way of repackaging the old argument about degrees of freedom — and let's remember the parallel with lambda. We should also keep in mind model-dependent versus model-independent approaches, model-dependent versus model-independent interpretations of the observations, because sometimes it is very easy to get confused and say, Planck tells me that H naught is this value — no, Planck tells me that H naught is 68 if I assume a flat Lambda CDM model with standard early-time physics; otherwise it does not. And we should always ask ourselves at what point one is just adding epicycles. I'm not saying that epicycles are not useful: the Copernican model had epicycles, but those epicycles were there to tell you that the orbits are not circular, they are elliptical, so they were sort of useful epicycles, while the geocentric model had epicycles everywhere, and they were only there because it was completely on the wrong track.

So again, if we are looking for a Cinderella: the discrepancy is between model-dependent and model-independent determinations of H naught. If it is not in the data, then we have to dare to say that maybe it is in the model. We can try to boost the expansion rate before recombination to then fix the ladder; you can think about a low-redshift solution, but there the wiggle room is very limited; and the trouble goes well beyond H naught and the distance ladder — it also affects the matter density and the age of the universe. The age is insensitive to things like dimming, screening, deviations from GR, and distance measures, so if a high age of the universe is confirmed, models with a high H naught and standard low-redshift physics are disfavored.
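As a rough illustration of the sound-horizon triad discussed just above, one can play the same closure game with round, illustrative numbers in place of the actual posteriors: the CMB (within a standard pre-recombination model) fixes r_d in megaparsecs, the distance ladder fixes h, and BAO plus supernovae pin down the product r_d times h nearly model-independently, so the three legs cannot all hold at once unless the ruler shrinks.

```python
# Round, illustrative numbers only -- not the published constraints
r_d_cmb = 147.0   # Mpc, drag-epoch sound horizon from a Planck-like Lambda CDM fit
h_cmb   = 0.674   # h inferred from the CMB within Lambda CDM
h_local = 0.730   # distance-ladder-like h

rd_h_cmb    = r_d_cmb * h_cmb     # fully CMB-based product, close to what BAO+SNe measure
rd_h_hybrid = r_d_cmb * h_local   # CMB ruler combined with the local h

gap = rd_h_hybrid / rd_h_cmb - 1.0
print(f"r_d * h, CMB ruler x CMB h   : {rd_h_cmb:6.1f} Mpc")
print(f"r_d * h, CMB ruler x local h : {rd_h_hybrid:6.1f} Mpc")
print(f"fractional mismatch          : {100 * gap:+.1f}%")
# Closing this triangle with a local h near 0.73 requires a standard ruler
# shorter by roughly this same fraction -- the "several percent off r_s" quoted
# in the talk -- which is exactly what pre-recombination solutions attempt.
```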
So at this point I can think of two scenarios for a solution: either local or global. What do I mean by this? A local solution is something that affects only the local H naught measurement; you can call it astrophysical, you can call it cosmological; you can say there is some screening, so the quantities I am measuring are screened by some effect happening in a different regime, and therefore I measure a higher H naught here — but it is not that I measure it globally with everything else unchanged. Or you can think of a global solution; but if it is global, it looks like the new physics you want to invoke needs to affect the entire history, both early and late, and therefore it will impact quantities well beyond H naught and will show up in new cosmological observations, because there is basically no way you can tinker with only one thing and fix it. So, to conclude, I hope that by looking at this sort of over-constrained system, all represented in the same interlocked plot, this representation of the observational constraints will help discriminate between these two possible scenarios and help guide the future effort to find the solution to the Hubble troubles. Thank you, I stop here.

Thank you, Licia, and I open the discussion now. I again thank both speakers for their very clear and very nice presentations, and especially thank you for keeping to the time. So let's have our discussion. Please, David, you were there first.

Thank you. My question is for Adam, and it is a historical question. It's about 45 years ago that I started teaching cosmology, and in those days we told students that the present value of the Hubble constant was uncertain to a factor of two, but we had to give them a value, and we took Allan Sandage's value of 75 kilometers per second per megaparsec from a paper that is now more than 60 years old. Now of course he didn't have supernova data. So, Adam, if I were to take away your supernova data — please understand, I don't take away your Nobel prize — take away the supernova data and just take the very good new measurements along the lines that Allan was using, what would be our uncertainty in the value of the Hubble constant?

Oh my gosh, I'm trying to think what in that paper he was using to reach out into the Hubble flow — was he using Cepheid variables? Yeah. Yeah, but you have to get out into the Hubble flow; before the Hubble Space Telescope you really could not observe Cepheids beyond, you know, two megaparsecs, so you are not in the Hubble flow. He might have been using brightest cluster galaxies as a distance indicator — those ended up giving, you know, not the right answer even for omega matter. Before type Ia supernovae we did not have very good tools for going way out into the Hubble flow, and if you don't get out to at least redshift 0.02 or 0.03, you just haven't really sampled the Hubble flow. So I think that is why — I mean, Allan Sandage knew very well all of these interesting problems of measuring q naught and H naught, and there is a reason these were not really cracked until the last 20 or 25 years, and I think a lot of it has been type Ia supernovae; we hadn't even distinguished type Ia supernovae as different from core-collapse supernovae until the mid-eighties. So he was just lucky. I don't know what his error estimate was either — you know, even a broken clock is right twice a day. I looked at the paper just now, and he doesn't give error bars; he says within a factor of two, which is what we tell people — between 50 and — yeah, yeah.
But he gives some varying estimates, of 83 for instance, as opposed to — sure, sure. Okay, anyway, it was just a historical question. Yeah — and please understand, I don't want to take away your Nobel prize. I appreciate that, thank you. Thank you. And by the way, Adam, if you want to ask questions of Licia, or vice versa, of course you are welcome to do so.

Actually, I have a question for Licia — just something I didn't quite understand. If you go back to your slide — maybe you could put it up — the slide we could call "beyond H naught", where you first started addressing the age thing: I was confused about the interplay between age, H naught, and omega matter. Okay, let me look — the blue curves, earlier — wait, this one? Yes. So this is what I was confused by — okay, let me put that up with the measurements too. Yes. If you go one slide earlier than this: the combination of BAO and local H naught, you said, argues for a lower omega matter, more like 0.26. So if I then take that combination in this space on the left — H naught of 73 and an omega matter of 0.26 — I am still at a pretty high age; I am at like 13 — on the left there I am at 13.5 gigayears, right, there, yes. So all these bars are roughly one sigma, so you can go another sigma either way and still be fine. Okay, but in the earlier — can you go back one more slide — this one, yeah, that's the one — no, this is around 0.3 still. Yes, but look at the combination — I'm just saying, look at the combination, for example; I think this is what you said, that it is an omega matter problem too: SH0ES plus CMB down there gives you an omega matter of 0.26. Yep. So if I go back now to the next slide — yep — right, look at the spot omega matter 0.26, H naught 73: that is 13 and a half gigayears. So I'm just confused: obviously all three of them can't sit together, but two of them — including local H naught and the CMB — are perfectly happy with a 13-and-a-half-gigayear-old universe. So if you take this error bar and go another couple of sigma this way, then fine, yes. Yeah, but — see, this is the thing: you're sort of saying there are three different measures and they don't agree; do two of them agree, or do ages tell you something? And I would say, okay, the three don't agree; ages won't tell you why they disagree, but ages would agree with any two of them. So again, you go back to having to use these plots where you make the triads: you measure each of them, but it is an over-constrained system, because in one case you measure, say, this, in one case you measure this, and in one case you measure the product. I guess what I'm saying is that your H naught axis doesn't imply an age without an omega matter. Yes, exactly, exactly. Okay. So that is why you have lines here and you don't have closed contours — unless you do the CMB, and then for the CMB you do get, you know, a potato there.

Very good. Actually, since your slides are up, I have a question about the reconstruction, the E of z over E of z of Planck. I guess I'm slightly confused by the direction of the effect. So the blue is just the background fitted on BAO and supernovae? This is BAO and supernovae, exactly, but — just to be clear — let me plot E of z, not H of z, and then I plot it as E of z divided by the E of z of Planck; but since this is E of z, at redshift zero this is one by definition, because it is not divided by H naught, whatever H naught is.
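A minimal sketch of the normalization point being made in this exchange, with illustrative Omega_m values rather than the actual Planck or BAO+SNe fits: because E(z) ≡ H(z)/H0, any such ratio plot is pinned to unity at z = 0 by construction, and what it compares is only the shape of the expansion history, not the value of H0.

```python
import numpy as np

def E(z, om_m):
    """Dimensionless expansion rate E(z) = H(z)/H0 for flat Lambda CDM."""
    return np.sqrt(om_m * (1.0 + z)**3 + (1.0 - om_m))

z = np.linspace(0.0, 2.5, 6)
# A lower-Omega_m, 'BAO+SNe-like' shape compared against a 'Planck-like' shape
ratio = E(z, 0.28) / E(z, 0.315)
for zi, ri in zip(z, ratio):
    print(f"z = {zi:3.1f}   E_lowOm(z) / E_Planck(z) = {ri:.4f}")
# The ratio is exactly 1 at z = 0 and drifts below 1 toward higher redshift,
# because a lower Omega_m makes E(z) grow more slowly with z.
```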
Right. But so you're saying that at any redshift between 0 and 2.5, the values given by fitting BAO plus supernovae are always lower than the values given by fitting Planck 2018? Okay — if I do a non-parametric reconstruction I get all these lines here — sorry — and this line here happens to be formally the best-fit line, but you should really look at this whole region. And here, of course, you have smaller errors where the data have more statistical power and larger errors elsewhere — it depends on how you stitch together BAO and supernovae. If you now say, I want to fit these data with a Lambda CDM model, then you get the blue one, because then you have only one free parameter, which is omega matter, and within a sigma it is fine with Planck. Yeah, but it is consistently lower — sorry — interesting. No, no, I mean, it is consistently lower; that is kind of surprising, I guess. Yeah, it is consistently a little bit lower; it may be driven by the highest-redshift point — you are putting in Lyman-alpha? No — ah, okay, okay. Okay, thanks.

Please — so, any other questions or comments? Yes, please. Hello, yes, can you hear me? Yes, yes. Sorry — I just wanted to say that one can recast this H naught tension as an M tension, in the absolute magnitude of type Ia supernovae: the locally calibrated M is higher than the M obtained from the inverse distance ladder. How would one consider the possibility of a transition of M from a higher value to a lower value at some very low redshift? Right — well, I'm not proposing this, but I will say it is true that the strongest constraints on H naught locally come through a constraint on the absolute magnitude of type Ia supernovae, and one could imagine playing games like saying, oh, if you change big G — that is part of the definition of the Chandrasekhar mass, which of course goes with the luminosity of type Ia supernovae. However, having said that, when I look at measures of local H naught that are independent of type Ia supernovae — and here I would really point to some of the best recent studies — the study of surface brightness fluctuations from John Blakeslee, which uses SBF in lieu of type Ia supernovae, gets 73 plus or minus two and a half, and masers directly in the Hubble flow — that is geometric, it uses nothing except geometry — also get 73 and change, plus or minus two and a half or three. So it is not as if type Ia supernova distance ladders sit separate from the other measures; Tully-Fisher is another example. However, you are right that it is important to remember that this often comes through constraints on the absolute magnitude of type Ia supernovae, which is why you can't just assign — I think George wrote about this in a paper this morning — you can't just assign a value of local H naught and then play with the type Ia supernova Hubble diagram without realizing that those are correlated, because the measure comes from type Ia supernovae. So this all comes back to the same thing: you need a specific story or hypothesis, and then you need to look at what the data are actually constraining, because this naive idea that it is just a local H naught measurement and a prediction of H naught is sometimes not the whole story for a specific hypothesis. Okay, thank you. Good, thanks.

So, other questions, comments, problems?
So, Licia, just a thing that you said at the end that was interesting. It seems you are saying the solution should be either local or global, but by local you mean really very, very low redshift, right? Because usually we frame solutions as early-time versus late-time, but you are advocating something else, it looks like — a void would be local, okay. So yes, by local in this case I mean very low redshift, because, you know, the omega matter you get from BAO plus supernovae, assuming a Lambda CDM-type expansion history, agrees with the omega matter you get from the CMB, and when you do the expansion-history reconstruction you don't have much wiggle room to go too far away from Lambda CDM. So a low-redshift solution cannot change the expansion history too much, because we have constraints there — we have to stay within these guard rails. Of course, if you just say there is something that tells me that when I try to measure H naught locally I get a higher value, but the global H naught — the global expansion rate — is different, then that would fix everything, including the ages and the sound horizon and so on; but I don't know what that could be. It has to be local enough that you don't see it in the BAO and you don't see it in the cosmological supernovae, otherwise we would have seen it. It can't really be a void, because, again, unless you are at its center it will give you a dipole, and as far as I know we haven't seen that dipole — we have seen a dipole, but the dipole we have seen is the CMB dipole, which is something else. And you would also have some kinetic SZ effect, because you have all these velocities, and we haven't seen that huge kinetic SZ either. And if it were a void, by the way, the redshift should be less than 0.1 — much less than 0.1, probably very local — we would really have to be at the center, and the void would have to be so big...

Well, you know — where is the best place to put the void? I will tell you: it used to be, 20 years ago, when people were trying to explain away acceleration, that they did the same trick, but they put the void cleverly at redshift 0.2, because we had no supernovae at 0.2 — we either had them below 0.1 or above 0.3, because of the transition as you go from looking galaxy by galaxy for supernovae to looking in volumes. Well, wide-angle surveys have really solved that problem, so now we are well populated at 0.2, so it is just very hard to hide. I will tell you, the people who push hardest on the void stuff, people like Tom Shanks — they have been motivated by galaxy surveys that don't cover the whole sky but still assume, I don't know, isotropy or something — they will put it at redshift 0.05 or 0.07, and then when they try to constrain it with the Hubble diagram of type Ia supernovae they find that at most they can have a one percent or a two percent void in the Hubble constant, not a nine percent void. However, I don't want to discourage people: if you can think of a solution that only gets halfway there, that may not be so bad. It could be that the real early-universe value, with more CMB data, comes out at 68 and a half, and it could be that the local measure is on the low side, 71 or something like that. So half measures are interesting.
I would just discourage, you know, one-tenth measures — things that don't even get you halfway there, are probably unlikely, and break other things along the way. And break other things, yes, okay.

But then, on the completely other hand, at the end you were saying that maybe instead a combination of late-time and early-time could work? So at that point you need to combine both early-time and late-time, because we have seen that early-time physics does something but does not do the whole thing, and then you are left with a mismatch in H naught; and H naught feeds directly into the age, and the age is weighted toward the late-time expansion rather than the early-time expansion, so you can hide all you want in the early-time expansion, but that integral is still weighted by the late times. So it looks like you have to touch both — but the moment you touch both, you start touching omega matter, sigma eight, the age, r_s, you start touching everything; you have some wiggle room, but you cannot break everything else that fits. Right, exactly, yeah.

And there is a historical parable I like, because for every historical parable and example there is an opposite example, and that is why I love the story of the discovery of the planet Neptune and the failed discovery of the planet Vulcan. As you know, in both cases people were seeing something that wasn't fitting — the orbit of Uranus or the orbit of Mercury — and in one case it was a local solution, Neptune, and in the other case it was a global solution, general relativity. I also love it because it tells you: who would have thought the precession of Mercury was the observational clue for a new theory of gravity? So I think it is important to keep that example in mind, just because history is not a great guide for telling you what the solution is — there you had two planets acting weird, with completely different answers. So we have to think critically about the situation in front of us and not resort to history to tell us what usually happens.

I just wanted to make a comment, back to the void issue: an important issue is the size of the void. We have measured a 300-megaparsec void, and you get best fits with the Lemaître-Tolman-Bondi solution if you have a bigger void, getting towards a gigaparsec in size. There is also the question of whether we are sitting right at the center; you can be away from the center — you produce an anisotropy because of the spherical symmetry — but the Lemaître-Tolman-Bondi model is very simple, a toy model; it just happens to be an exact solution. The problem is being in a very special place: we do not see voids this large in redshift surveys, and if you are not very close to the center you build up a dipole in the CMB that is thousands of kilometers per second, not hundreds of kilometers per second. So you then need some anthropic argument for why we are in a very special place in the large-scale structure — the biggest void anybody has ever seen, and very close to its center. And then, finally, you have to hide it in the Hubble diagram of type Ia supernovae, because if it is that big you see an inflection in the Hubble diagram as you cross over from inside the void to outside it. So you can do it at the percent level — I'll grant you fine-tuning. Well, you can do this kind of fine-tuning also in the standard model.
So the bottom-line issue is that we all use the simplest solution, because it is simple and we can calculate with it — Lambda CDM, which is a maximally symmetric solution for cosmology — but maybe it is just too simple, too tight, and it could be that eventually, through inhomogeneity, we have to change our minds about that; if we cannot solve the H naught problem within Lambda CDM, as may turn out to be the case, then we have to change our minds about these issues. Yes, but that is exactly why I think it is interesting to start thinking in terms of model-independent interpretations of observations, which of course is not what we are used to doing, and it gets you much bigger error bars, but maybe at this point it is time to start thinking outside the box and see what model-independent statements one can make, and then from there map them onto the different possible models that could match them. So, for example, if you base your interpretation of the observations only on the Etherington relation and just do an expansion history as a function of redshift which is minimally parametric, that would do; and then you check which model matches that.

See, the other big question about the whole CDM model is of course dark matter. The latest experiments appear to have ruled out the DAMA/LIBRA annual modulation, and more data will, I am sure, rule it out; so at present we have no experiment that confirms the existence of a dark matter particle, after some decades of work. What does this mean? We have to understand this, because it is supposedly 85 percent of all the matter in the universe and we cannot see it; ultimately we have to be able to see experimentally what the universe consists of. Yeah — well, I think, until there is a more compelling alternative we continue to think about particle dark matter, but I certainly agree with you: we need to keep searching for experimental evidence of it, and you have to keep an open mind. The art of science is knowing which of these problems to combine in your mind to come up with new ideas, and I think if somebody had a great idea for solving multiple things and it did not involve particle dark matter, then see if it conflicts with anything; if it does not, it could be very viable.

Well, I have been pursuing the other alternative, of course, which is that Einstein gravity is not correct, or Newtonian gravity has to be modified; it might be considered a radical point of view, but maybe it is more conservative, because all the matter in the universe would consist of baryons — and we do observe baryons — and maybe the solution is just a purely gravitational theory with extra degrees of freedom to resolve the problem. Einstein modified Newtonian gravity to get the perihelion advance of Mercury, which, by the way, was a correction of less than 0.1 in the data — a very small correction. Most of the research has concentrated on dark matter models, and dark matter models are not a theory: you do not start with an action principle like you do with general relativity; it is a model with various mass distributions and mass profiles.

Thank you, John. Let me just interrupt you, because there is a question in the chat — probably the last, I guess, since Adam has to go. Somebody is asking about a change of the fine-structure constant at recombination, which would change the time of recombination, and in particular its impact on the sound horizon.
And it would impact early polarization too; so he is curious to understand whether this could be something possible — maybe Licia can answer this. Yes, I guess so. As far as I know, even before Planck there were analyses done by varying some of the fundamental constants, to see whether one could constrain a possible variation of the fundamental constants from what we observe in the cosmic microwave background, and the constraints ended up being interesting — or uninteresting — enough that in the latest Planck release this was completely forgotten and not even mentioned. So my impression is that whatever wiggle room remains is not enough to actually explain this.

Thank you for the answer — I asked the question; just to ask a follow-up: I thought this was assuming the Lambda CDM model, so maybe — or no, I am not certain whether Lambda CDM was an input to this, and then you would conclude that alpha does not change. Okay — what people usually do, especially when analyzing the CMB, is take Lambda CDM and do extensions one at a time, so this was probably one of those extensions on top of Lambda CDM. But what else would you have in mind? Because if you want to affect r_s — and we know that early-time physics otherwise does not really change r_s — if you are going to change things like the fine-structure constant, don't you actually need to change the physics in CAMB, the CMB code? Yeah, exactly. So there are codes — generally people have not really done that, I think, but now there are codes that do it — but they were run before the Planck release, and I think they concluded it is not promising, and therefore they were forgotten. Now, I know that in Planck, in order to get to the level of precision of the data, they had to reconsider the entire hydrogen recombination, all the hydrogen levels, and so on, because otherwise they would not match the data with any model — so we are at that level.

Let me make a comment about the change of alpha: it is something Dirac was very much interested in. I remember talking to him in 1981; he had been invited to a conference on magnetic monopoles, which he invented, and refused to go — he wanted to talk to geologists about whether Newton's constant might be changing. We have to remember that our planet is one-third of the age of the universe, and there are naturally occurring nuclear reactors, so we can actually look back down there and say that alpha has not changed very much in the last third of the age of the universe. So you would really have to invent some very particular time dependence — such models exist, just to say.

Okay, Carlos Martins — just one very quick question, since we are at the two-hour mark; by the way, let me remind you that there is a form for offline questions if you have anything. Yes — apologies for not switching on my camera. I wanted to comment quickly on the varying-alpha issue: certainly a change of alpha will imply changes in the physics and the expansion history of the universe. The analyses that have been done with the CMB are not completely self-consistent, in the sense that people typically change alpha and keep all the rest of the physics the same, which will imply some errors. That said, the constraints we have from direct measurements of alpha at relatively low redshifts, and even indirect constraints from BBN, are already too tight
to make this a viable mechanism to solve the tension. So the effect is there at some level, but it is even less than a one-tenth effect, as Adam was putting it — which does not mean people should not do the calculations properly, of course, but it is not a solution.

Okay — well, let me thank the speakers; we free Adam to go give his midterm, and we also say goodbye to Licia. This was a very nice meeting; I personally got some interesting ideas about this H naught business, and I hope our speakers did too. Thank you, and please, if you can send the secretary your slides, it would be good to have them. Okay, all right, stay well, thanks everybody, bye. Thank you, bye.
Info
Channel: Galileo Galilei Institute (GGI)
Views: 344
Rating: 5 out of 5
Id: 07-_Jhwzghc
Length: 121min 37sec (7297 seconds)
Published: Wed Mar 31 2021