Determining Cosmological Parameters from CMB & LSS - David Spergel

Captions
So, good morning. I want to start by answering one of the questions that was raised in the last talk, because someone asked about this low point, the quadrupole. Let me just remind you what we're plotting here. What you're seeing are measurements of the amplitude of the power spectrum versus multipole moment. The black line is our theoretical expectation; the points are observations drawn from four different experiments, and I'll return to this theme later in the talk. One of the things you should know about the current status of the experimental data is that it is rather consistent: different experiments see the same sky, which is reassuring, and the theory is mostly consistent with the data. We'll talk about two intriguing discrepancies today. But there is this interesting point: the value of the quadrupole, the amplitude of fluctuations on the largest angular scales, is low, and it's low in a way that is not completely inconsistent with the theory, and that's because of cosmic variance. The theory doesn't predict the amplitude of fluctuations; it predicts the variance of the power spectrum. It says: if you had many realizations of the field, what would you expect for the amplitude of that multipole? When you get out to l of 1500 you measure about 3000 independent multipoles, so you get to average over 3000 realizations, and the cosmic-variance scatter is quite small. When we get to l equals 2, we really only get to measure five independent multipoles, and because we have to remove the effects of the Galaxy, and the Galactic disk actually looks like one of the multipoles, we really only get to measure four independent multipoles. So we get four measurements drawn from a Gaussian distribution; we expect a value of around a thousand microkelvin squared, and we observe a value of about 200. Now, that happens about one time in twenty or one time in thirty, depending on exactly how you ask the question.
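As an aside, the cosmic-variance argument above is easy to check numerically. This is my own toy sketch, not the lecture's calculation: the 1000 and 200 microkelvin-squared values and the four surviving modes come from the lecture, everything else is assumed. We simulate many skies, estimate the quadrupole from four Gaussian modes, and count how often it comes out as low as observed.

```python
import random

random.seed(0)
TRUE_C2 = 1000.0   # expected quadrupole amplitude, microkelvin^2 (lecture's round number)
N_MODES = 4        # independent m-modes left after masking the Galaxy
N_TRIALS = 200_000

low = 0
for _ in range(N_TRIALS):
    # each a_2m is Gaussian with variance TRUE_C2; the estimator is the
    # mean of the squares over the surviving modes
    c2_hat = sum(random.gauss(0.0, TRUE_C2 ** 0.5) ** 2
                 for _ in range(N_MODES)) / N_MODES
    if c2_hat < 200.0:
        low += 1

print(f"P(measured C_2 < 200) ~ {low / N_TRIALS:.3f}")  # a few percent
```

The answer comes out at a few percent, consistent with the "one time in twenty, one time in thirty" quoted in the lecture; the exact number depends on how you pose the question.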
So it doesn't rule out the model, but it is intriguing. On the largest scales, I want to begin with a plot that I've stared at for about fifteen years and still don't know whether it's significant or not, and it comes from working in real space instead of spherical harmonics. Let me just remind you how this goes. We usually take the temperature as a function of position on the sky and expand it in spherical harmonics; the C_l's that I plotted, the theoretical curve, are the expectation of the |a_lm|^2. (Can everyone see this, by the way? Okay. We observe a value: you take the observations, sum over m, and you just get the measured value; that's what you saw on the plot before. That's hard to see; can everyone see if I write here, or should I pull out the board?) So the C_l's are the expectation of the |a_lm|^2, where I've taken the temperature and expanded it in spherical harmonics. Another way of looking at things: instead of working in harmonic space, you can work in real space and ask what the correlation function is at some separation on the sky. Go to every pair of points separated by a given angle theta and measure the correlation. This is effectively just the harmonic transform of the other representation; in fact C(cos theta) is the sum over l of C_l P_l(cos theta) times, if I remember my constant, (2l+1)/(4 pi), where P_l is the Legendre polynomial. So this is just another way of representing the same information. And what's intriguing: you take the data and compute the correlation function, and here's what it looks like. It basically goes to zero at around 60 degrees and just stays there, and this is the same thing in the Planck data, where again the correlation function sits flat right at zero over this huge range of angles. There's basically no correlation between fluctuations separated by more than about 60 degrees. And if you ask what the theory predicts, that's the black line in the figure.
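The Legendre sum just written down can be evaluated directly. Here is a minimal stdlib sketch (the function name and the one-multipole toy spectrum are mine, for illustration):

```python
import math

def corr_from_cl(cls, theta_deg):
    """Correlation function C(theta) = sum_l (2l+1)/(4 pi) C_l P_l(cos theta).

    `cls` maps multipole l -> C_l.  Uses the Legendre recurrence
    (l+1) P_{l+1}(x) = (2l+1) x P_l(x) - l P_{l-1}(x).
    """
    x = math.cos(math.radians(theta_deg))
    lmax = max(cls)
    p_prev, p_curr = 1.0, x          # P_0 and P_1
    total = cls.get(0, 0.0) / (4 * math.pi)
    if lmax >= 1:
        total += 3 * cls.get(1, 0.0) * p_curr / (4 * math.pi)
    for l in range(1, lmax):
        p_next = ((2 * l + 1) * x * p_curr - l * p_prev) / (l + 1)
        p_prev, p_curr = p_curr, p_next
        total += (2 * (l + 1) + 1) * cls.get(l + 1, 0.0) * p_curr / (4 * math.pi)
    return total

# toy spectrum with only a quadrupole: C(theta) is proportional to P_2(cos theta)
toy = {2: 1000.0}
print(f"{corr_from_cl(toy, 0.0):.1f}")   # 5*1000/(4 pi), about 398
```

With a full C_l table from a Boltzmann code in place of the toy spectrum, this is exactly how the black theory curve for C(theta) is produced.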
It predicts there should be an anti-correlation here and some correlation there; this is the one-sigma contour, point by point, and these are highly correlated points. This is another way of expressing the fact that when I go to the data, most of these points lie well within the expected range, but they do happen to line up in just the right way that when you take the Legendre transform, the correlation function is zero. This was clearly seen in the WMAP data, and you see the same pattern in the Planck data. So this is either a statistical fluke, or it's pointing to something really profound about the origin of fluctuations that we don't understand. Since most of my talk, and most of these lectures, will be pointing out how everything fits together, I do think it's important to talk about some of the potential oddities in the data. We have a tendency as scientists, over the period of time in which a standard model is being established, to sometimes forget about some of the discrepancies. When I was a student, none of these things were known and Lambda-CDM was not an established model; but you are being educated at a time at which this is now an established model, and that's why I think it's important not to push some of these funny large-scale features under the rug. The other one that gets some discussion is what's called the hemispheric asymmetry. This is a map of the sky (the WMAP and Planck data look effectively identical at large scales, so whatever showed up in WMAP you see again in Planck), and there are these funny large-scale features. If you look at the amplitude of fluctuations in the northern hemisphere and the southern hemisphere (I just smooth the sky and measure the amplitude north and south), there's about a 10% difference north to south. This is about a three-sigma discrepancy. My own view is that this one in particular is very likely just a
statistical fluctuation, because it was found completely a posteriori: you go look at your data and hunt for oddities. Usually in physics, if we have some three-sigma discrepancy, we get really excited and then say, good, let's go get more data. The problem we have in cosmology when we study the largest scales is that this is all the data we get on the microwave background temperature. We observe the whole sky; we can't repeat the experiment and expect to see anything different. This is the sky we see. We could decrease the error bars on the measurements, but those error bars are relatively unimportant; at this level the errors on the map are tiny, and the source of uncertainty is cosmic variance, because our theory doesn't predict the amplitude, it just predicts the variance. The cosmic-variance uncertainties (they're not errors, they're uncertainties) are much, much larger than our experimental error, so we can't do a better experiment. These discrepancies are potentially interesting physics. At a minimum, it's an interesting place to be philosophically, because in some ways we've actually reached the limits of our ability to measure the universe: we can only see fluctuations within our causal horizon. If you could observe the fluctuations in the neutrino background, you'd have another shot at this. We also have some additional information potentially from polarization measurements, and some more information potentially from measurements of the large-scale distribution of galaxies. Galaxies do not probe out to the volume of the microwave background, but if you looked at, say, the large-scale distribution of quasars at redshift six, that would have the potential to probe these scales. So there are a handful of ways we can get a few more looks at these largest scales. Right now all those things are difficult to do
experimentally. Looking at these very largest scales with galaxy and quasar surveys, we're limited somewhat by the quality of our photometry and the effects of dust; we hope that the next generation of large-scale structure experiments, particularly the space-based ones like Euclid and WFIRST, might make some progress on that. Fluctuations in the neutrino background are, I think, very interesting in the long run, but we will not likely have neutrino-background fluctuation measurements in the next five years. Hopefully a detection comes in your careers; I'm not betting on mine, I'm 56. Yes: so there are a couple of different things to do here, and it's actually a hard question to ask without having alternative models. If you construct an alternative model, you can measure the relative likelihood between the two models; that's really what you'd like to do. But if I write down an a posteriori model that's designed to fit the anomaly and isn't very physically well motivated, of course it will come out more likely. The simplest way to ask the question is: I can ask, what's the probability of measuring a quadrupole smaller than 200 microkelvin squared? And that's a couple percent. So it's a number that's intriguing, but it's not one part in 10 to the 10; it's not so far out in the tail that you would say it rules out the model. You'll find that people sometimes quote slightly higher significances in the literature when they claim to find these discrepancies, but, for example with this hemispheric discrepancy, they'll tend to choose an angular scale that maximizes the signal and then ask how likely that is. Since you had no theoretical prior on what the scale is, I think that does tend to overestimate the significance, as effectively a look-elsewhere effect. So if you then
ask the question properly (let me look over all possible scales and all possible orientations and ask how often I see this), those things usually end up being less significant. Yes: the Lambda-CDM model is predictive; it tells you, for a Gaussian random field, what you expect, and when you've got something like that and you see one of these discrepancies, people tend to hunt for discrepancies, so you have to be careful in assessing the statistical significance. I think they're worth talking about, because what you'd like is for someone to come up with an alternative theoretical model that fits that data and makes other predictions; that's the way you'd like things to advance. An example of an alternative model that does a better job of fitting the correlation function is to assume that the universe is finite. If you assume that we live in a three-torus, and the size of the three-torus is comparable to the visible universe, then you're missing modes on large angular scales, because you don't have modes on scales larger than the three-torus, and that suppresses the quadrupole. So I can write down that model, and it will fit the correlation function better. But if we lived in a three-torus, then when I look at that part of the sky and that part of the sky, I should see similar patterns, and in fact you can quantify that pretty well. Remember, when we're looking at the microwave background we're looking at a sphere around us; if that sphere is embedded in a three-torus, then along the circle in the sky where the surface of last scattering intersects the three-torus, I would see the same patterns in both directions. We've looked for that, and we don't see it. In fact you can look at all possible patterns of circles, regardless of the underlying
topology: regardless of the shape of the fundamental domain, when the fundamental domain crosses the sphere of the surface of last scattering you get a circle, so for any topology you'd get pairs of matched circles. We've searched all possible circles, and we don't find any matches beyond what you expect from random. Based on that, we can actually say that if the universe is finite, the size of the fundamental domain is at least 80-some-odd gigaparsecs. So our universe is very big, basically as big as, if not bigger than, the surface of last scattering; and once you've said it's that big, it no longer explains the quadrupole being low. So that's an example (one I'm fond of, having worked on it) where you look at an oddity, you propose something that could explain it, and you test it and it fails. There are certainly other hypotheses. All right, so let me now turn from large-angular-scale temperature to measurements of the microwave background polarization. Remember how Thomson scattering works: if a photon comes in and scatters off an electron with the E field pointing this way, it's more likely to scatter toward you like this. So in this polarization you tend to see photons that come in this way and scatter toward you, and in that polarization you see photons that come in the other way and scatter toward you. So when we look at the surface of last scattering, we're measuring the photon distribution at each point on the sphere. (Do we have an eraser?) If I'm looking at the surface of last scattering, with a hot region here, a cold one here, and a cold one here, I'll have more photons that come in this way and scatter to me with the polarization vector pointing this way. So the polarization vector I tend to measure on the sky is
related to the pattern of temperature fluctuations. So I can look at the sky and see polarized emission at each point, and the microwave background on small scales is actually quite polarized, with polarization fractions of about 17%; as our experiments become more sensitive, we can make pretty accurate polarization measurements. What we do is take this pattern of polarization and decompose it into patterns that are symmetric under mirror reflection (these are gradient-like, and we call them E-mode polarization fluctuations) and patterns that are anti-symmetric under mirror reflection (we call these B-mode fluctuations). Variations in density, scalar fluctuations, produce only E-mode fluctuations at the surface of last scattering; vector and tensor modes produce both E and B modes. We'll come back to this in the final lecture, because one of the very intriguing things we're looking for right now in observations of the microwave background is the B modes produced by gravitational waves in the early universe. Those gravitational waves produce both modes, but since scalar fluctuations, which are the dominant form, produce only E's, if we see B modes and they're cosmological, that would potentially be the signature of gravitational waves; that's something we'll talk about in lecture three. We can also look at the patterns this way: here's a pattern of hot and cold spots, a single wave, and you can see the E-mode pattern is symmetric under mirror reflection and the B-mode pattern is anti-symmetric. When we look at the E-mode fluctuations from scalar fluctuations, we tend to be picking up two different things. On the very largest angular scales, scales of say 10 degrees or bigger, we're seeing fluctuations from photons that were produced at the surface of last scattering at redshift a thousand and traveled to us until, at redshift around ten, star formation reionized the universe. So remember: the universe is
ionized beyond a redshift of a thousand; then electrons and protons combine to make hydrogen, and the universe is neutral for a long time. A couple hundred million years after the Big Bang, star formation starts up as the universe becomes dense enough to form galaxies, and once star formation proceeds, it reionizes the universe. Photons can then scatter off these electrons and produce a large-scale polarization signal. In fact, we turn this around: one of the ways we study the universe at redshift ten is by measuring the large-scale polarization signal, which gives us an estimate of when the universe was reionized. On smaller scales, what we're seeing is something more like what I described here: the polarization signal is generated by variations in temperature, and the dominant source of those variations is actually gradients in the velocity field. These photons are hotter here because the electrons are moving in this direction, and that produces a Doppler-like effect that boosts the photons; I already talked about that. So now we can go back to these patterns. If we've got a cold spot surrounded by a hot ring, and I'm sitting at this point here, things will tend to be hotter in this direction and in this direction, and that lines up the polarization. So I get this pattern, with the polarization pointing around in a circle on these scales and outward on these scales, around each hot and cold spot: here's the pattern around the cold spot, and the reverse pattern around the hot spot. Yes: the scale here is always about one degree, and that's the angular scale set by these acoustic fluctuations. What sets that scale? We'll talk a little more about this later, but remember we have the sound wave moving out for 380,000 years at about half the speed of light, so the size of that circle is about 380,000 light-years. You then have to
correct for the expansion of the universe, which makes it bigger by a factor of a thousand, so that corresponds to about 100 megaparsecs. That ruler is the acoustic scale; we can measure the angle it subtends, and that gives us the distance to the surface of last scattering. So that's what we see when we look at the degree scale. The fact that we measure both temperature and polarization is actually very useful for checking the nature of the fluctuations. (This is what I went through over there, just the expansion; I wanted to do that because we're going to look at the nature of the fluctuations.) So what do we learn from studying the fluctuations? This is something people asked me about after the lecture, so I want to say a few words about it here, and here I think the polarization fluctuations are very important. We learn what the nature of the fluctuations is: are they fluctuations in the density of the universe from place to place, or fluctuations in the composition of the universe from place to place? Adiabatic density fluctuations mean we have the same ratio of, say, dark matter to photons, or baryons to photons, everywhere, but what varies is the overall density; the universe has uniform composition. That makes a particular prediction for the pattern of temperature and polarization fluctuations, and again, you should think of these plots as nothing more than harmonic transforms of those pictures I showed you. You've got those peaks in the temperature spectrum that correspond to the features from the acoustic fluctuations, and you can see, when you Fourier transform the polarization fluctuations, that the peak positions are shifted relative to the temperature fluctuations, as those rings line up differently. And here's the cross-correlation between temperature and polarization: a distinctive feature of any adiabatic fluctuation, as long as you're doing things causally, is that the temperature and polarization are anti-correlated on large angular scales.
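A quick numerical aside on the acoustic ruler from a moment ago. This is an order-of-magnitude sketch using the lecture's round numbers plus one assumption of mine (a comoving distance to last scattering of roughly 14,000 Mpc); it is not a real sound-horizon integral:

```python
import math

LY_PER_MPC = 3.26e6        # light-years per megaparsec
D_LS_MPC = 14_000          # assumed comoving distance to last scattering, Mpc

size_ly = 380_000          # acoustic-scale size at last scattering, light-years
stretch = 1_000            # rough expansion factor since last scattering

size_comoving_mpc = size_ly * stretch / LY_PER_MPC
theta_deg = math.degrees(size_comoving_mpc / D_LS_MPC)

print(f"comoving acoustic scale ~ {size_comoving_mpc:.0f} Mpc")  # about 100 Mpc
print(f"angle on the sky ~ {theta_deg:.1f} deg")                 # a fraction of a degree
```

The rough numbers land on a scale of order 100 Mpc subtending something under a degree, in line with the degree-scale acoustic features described in the lecture.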
A lot of that represents the fact that if I'm setting up a pressure wave, creating it by gathering a higher density of material here, the velocity vectors point outward from the density peak, and velocity vectors pointing outward produce an anti-correlation between temperature and polarization. The other possibility that nature could have chosen (oh no, I copied the same slide twice, okay): the photon and dark matter densities could have combined to be constant. You could have had a situation where the total density of the universe was constant, but regions that had more photons had less dark matter, and regions that had less dark matter had more photons. This would be an entropy fluctuation, and it could have arisen if you start out with a uniform universe but the process that determines the abundance of dark matter varies spatially. This can happen, for example, in phase transitions: if you do something like generate defects or cosmic strings at the phase transition, then this region has more of one kind of defect than that one, and that would produce entropy fluctuations. Entropy fluctuations produce a different pattern of microwave background fluctuations: the peaks in the temperature spectrum are shifted by about pi over 2 in phase (the peaks all shift over in the top plot), and the temperature and polarization fluctuations are correlated on large angular scales rather than anti-correlated. If I start out with things uniform and I want to create a density fluctuation, I have to gather material inward, and then the velocity fluctuations point toward the density peaks rather than away from them. So one of the first things we learn when we look at the pattern of temperature and polarization fluctuations, by measuring their phases, is that the fluctuations are primarily adiabatic.
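The pi-over-2 phase shift between adiabatic and entropy modes can be made concrete with the standard tight-coupling toy model (this limit is textbook material, not the lecture's own calculation): adiabatic modes oscillate like cos(k r_s) and entropy modes like sin(k r_s), so their power-spectrum peaks interleave.

```python
import math

R_S = 150.0   # comoving sound horizon in Mpc (round illustrative number)

# peaks of cos^2(k r_s) sit at k r_s = n * pi          (adiabatic)
# peaks of sin^2(k r_s) sit at k r_s = (n - 1/2) * pi  (entropy / isocurvature)
adiabatic_peaks = [n * math.pi / R_S for n in (1, 2, 3)]
entropy_peaks = [(n - 0.5) * math.pi / R_S for n in (1, 2, 3)]

for ka, ke in zip(adiabatic_peaks, entropy_peaks):
    # each entropy peak sits exactly halfway between adjacent adiabatic peaks
    print(f"adiabatic k = {ka:.4f}/Mpc   entropy k = {ke:.4f}/Mpc")
```

Measuring where the observed peaks fall in this comb is exactly the phase measurement that tells us the fluctuations are primarily adiabatic.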
Now we're actually looking at the data, the TE and EE fluctuations, and there's a nice self-consistency test here. This comes from the Planck 2015 paper, where what they did was take their temperature measurements, fit the best-fit cosmological model, and then predict, knowing what you see in temperature, what the pattern should be in polarization. So the red line here is not fit to this data: it's fit to the temperature-temperature data, and then you predict what the temperature-polarization data should do. This, again, is not fit to the EE data: it's fit to the temperature data, and then you predict what the polarization should do. And finally, these are the galaxy baryon acoustic oscillation measurements, the large-scale distribution of galaxies. So what we're doing is looking at what the temperature fluctuations look like at a redshift of 1100, 380,000 years after the Big Bang, and predicting what the galaxies should be doing in the nearby universe, and it matches very nicely; all the things line up and fit well. To me this is just a great triumph, certainly of experiment (there's a lot of work by people to make these accurate measurements), but also of theory. Go back to a classic paper by Sunyaev and Zel'dovich in 1970, where they noted that a detailed investigation of the spectrum of fluctuations may in principle lead to an understanding of the nature of initial density fluctuations, since the distinctive periodic dependence of the spectral density of perturbations on wavelength is peculiar to adiabatic perturbations; and we're seeing exactly this nice periodic dependence they predicted. Something I mentioned earlier in this lecture that I think is important to stress, in terms of understanding where we are in this field, is the very nice consistency between the microwave background experiments. You can actually look at this multipole by multipole: this compares, in red, the cross-correlation between WMAP and Planck, and in black
the Planck-Planck auto-power (sorry, the other way around: black is the cross-correlation, red is the auto-correlation of Planck), and you can see, multipole by multipole, they agree very well. When you get to higher multipoles, smaller angular scales, the WMAP maps are noisier than the Planck maps, so the error bars are larger, but the WMAP values fluctuate around the better-measured Planck values, and on these large scales the two experiments agree remarkably well. This consistency holds not only when you compare the two space missions but also when you compare ground-based experiments. Here, zooming in on a patch about five degrees by three degrees on the sky, this shows the pattern of fluctuations seen by Planck, and above it the pattern seen by ACT at 143 gigahertz and 217 gigahertz, and you can see that, up to the level of the detector noise, the two experiments see a consistent picture of what's going on in the sky. Once you've seen this very consistent picture, you have some confidence in looking at this data and trying to extract the basic cosmological parameters. The first parameter we can sort of read off by eye from the fluctuations is the density of baryons. Remember that when we look at the microwave background fluctuations, we're seeing sound waves propagating in the early universe, and the rate at which those sound waves propagate depends on the composition of the universe. So as we vary the density of baryons (each of these curves is a ten percent variation in the baryon density), they start to deviate from the current best-fit model: the more baryons you have, the higher the first peak and the lower the second peak in the microwave sky. It's these measurements that tell us the universe is about 5% atoms. What's very nice is that we can measure the same number another way. We like to express the baryon density as Omega_baryon h squared, where we've defined Omega_baryon to be the density in baryons over the critical density, what it takes to make the universe flat.
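The definition of Omega_baryon can be turned into a physical density using the critical density, rho_crit = 3 H0^2 / (8 pi G). A quick sketch (the Omega_b h^2 = 0.022 value is a round number of the right size, used here purely for illustration):

```python
import math

G = 6.674e-11            # Newton's constant, m^3 kg^-1 s^-2
MPC_M = 3.0857e22        # metres per megaparsec

def critical_density(h=1.0):
    """rho_crit = 3 H0^2 / (8 pi G), with H0 = 100 h km/s/Mpc."""
    H0 = 100e3 * h / MPC_M          # Hubble constant in s^-1
    return 3 * H0 ** 2 / (8 * math.pi * G)

omega_b_h2 = 0.022                              # illustrative round value
rho_b = omega_b_h2 * critical_density(h=1.0)    # the h^2 factors cancel here

print(f"rho_crit(h=1) = {critical_density():.3e} kg/m^3")  # about 1.88e-26
print(f"baryon density ~ {rho_b:.1e} kg/m^3")              # a few times 1e-28
```

This is why quoting Omega_b h^2 amounts to quoting the physical baryon density in kilograms per cubic meter, up to a fixed constant.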
The standard convention is to define little h as the Hubble constant in units of 100 kilometers per second per megaparsec. While we often express our results this way, it turns out you can just multiply through: we're actually measuring the density of baryons in the universe in kilograms per cubic meter, or whatever unit you prefer, and we've measured that to about 5%. What's very nice is that we can measure the same quantity to about 10% accuracy by measuring the abundance of deuterium left over from the early universe. The early universe starts out very hot and very dense, then cools; about a minute after the Big Bang the universe is a sea of protons and neutrons, and the protons and neutrons combine to make deuterium. Once the cosmic background is cool enough, about three minutes after the Big Bang, the background no longer dissociates deuterium into protons and neutrons; this reaction freezes out, and then pretty quickly the deuterium combines to form helium-4. That reaction proceeds until the number density of deuterium times the cross section times the age of the universe gets less than one, at which point the deuterium nuclei can't find each other, so the critical deuterium abundance is just one over that combination. The age of the universe when this decoupling takes place depends on the photon temperature, which depends on the number density of photons, so the deuterium abundance turns out to depend on exactly the same number, the ratio of baryons to photons, and we get a very nice consistency check. One of the great moments when I was working on the WMAP experiment came the night before we announced our first results: I got a call from David Tytler, who was at UC San Diego working on the deuterium abundance, and he had much-improved measurements that he was going to put out in the following months. His results were far enough along that he knew the answer, but he hadn't gotten the
paper out. He called me up and said: I know you guys aren't going to change your results, because you have an announcement tomorrow, and I want you to know that we're not changing our results even though we'll see your measurements tomorrow; I want to give you a copy of our best-fit parameters so you have the numbers. He emailed me his measurements, and they agreed within one sigma with our measurements of the baryon abundance. For me this is one of the great tests we have of the Big Bang theory: when we look at the microwave background, we're measuring the baryon abundance from physical conditions 300,000-odd years after the Big Bang, by looking at how sound waves behave; when we look at the deuterium abundance, we're looking at nuclear physics three minutes after the Big Bang; and they agree. The fact that they agree actually tells us a lot about the laws of physics not changing over time. If the strength of gravity were ten percent stronger three minutes after the Big Bang, that would change the relationship between time and temperature, and things would be off. The deuterium cross-section depends on the strength of the strong interaction, and it depends on the strength of electromagnetism, so if you changed alpha by a few percent, or changed the strength of the strong interaction, or changed the proton mass by a few percent, all of that would lead to a discrepancy here. So this agreement is one of the nice tests we have that the basic equations of physics are constant over space and time. The next parameter we can read off is the matter density, and we can look at that either in terms of the shape of the power spectrum or, as I showed before and will come back to, in the pattern we see in the microwave sky. And finally we can read off the angular diameter distance. Zhiqi Huang at CITA, on the ACT team, made a nice plot of this, where we looked at the acoustic fluctuations in the ACT data. This is
the same kind of picture we showed before; these are plotted in radians rather than degrees, but again it's the pattern of the hot ring around the cold spot, now in the ACT data, where before we showed it in the Planck data. This is what we expect from Lambda-CDM; if we try to get the best fit without dark matter, or without dark energy, it looks nothing like the pattern you see. Usually we express this in terms of fitting parameters to the CMB spectrum, but you can really see by eye that we get consistent sets of parameters. In terms of the actual numbers we measure, these are the current best measurements from Planck and from the combination of WMAP and the ACT experiment, which are completely independent datasets on the microwave sky, and you can see a number of interesting things. First, in terms of the primordial fluctuations: if the spectrum of fluctuations were scale-invariant, we would get n equals 1. We see a spectrum that is a little bit redder than n equals 1, about a 5-sigma deviation from n equals 1, and this is actually what you expect in most simple inflationary models: the inflaton was higher up on the potential at earlier times, rolling down the hill, and when we look at larger angular scales we're looking at earlier moments of inflation, when we expect the universe to have been expanding more rapidly. So what we see in the spectral index is certainly consistent with inflationary models. We now have very accurate measurements of the dark matter density and the baryon density, with about five times as much dark matter as baryons. And it's been intriguing to me to realize that there are many different components of the universe that all have comparable density today. If we look at the density in dark energy today, it's about five times the density in dark matter; the density in dark matter is about five times the density in baryons; and the density in baryons is, let's get the exact
number, but it's about five to ten times the density in neutrinos in the most massive neutrino species; there are three neutrino species, with roughly an order of magnitude between each of their contributions to the energy density. And the energy density in photons today is about 1/1000 of the energy density of dark matter and about 1/400 of the energy density in baryons, comparable to the energy density in the lightest neutrino species. So we've got arguably seven components, all of which have the same energy density to within a factor of a thousand, and those densities vary with time in different ways; why they all end up with comparable energy densities today, we don't know. We think the physics that sets the dark matter abundance, the baryon abundance, and the neutrino masses are not obviously connected to each other. So maybe this is telling us something profound, or maybe it's telling us there are many different stable particles that contribute to the universe, and if you have enough of them the energy densities all end up in the same ballpark. I don't know; I've been intrigued by those numbers. [Question about whether the baryon and dark matter densities are set by thermalization.] The baryon abundance is not set by thermalization. The dark matter abundance, well, we don't know, but if it's a WIMP-like dark matter its abundance is set not by thermalization but by freeze-out. The neutrino abundance and the photon abundance being similar is set by thermalization, but the energy density of neutrinos is set effectively by the neutrino masses, and those are set again by different pieces of physics. And the baryons: if the baryons were just thermalized and there were no baryogenesis, then we would have only tiny amounts of baryons. The final parameter I want to talk about is the Hubble constant, and that's because there's a lot of interesting discussion happening right now about what
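The factor-of-a-few hierarchy of densities described above can be tabulated in a few lines; the density parameters below are rough, illustrative values (and the massive neutrino species is omitted), not the fitted numbers quoted in the talk:

```python
# Rough present-day density parameters Omega_i = rho_i / rho_critical.
# All values are illustrative assumptions, roughly Planck-era numbers,
# not the exact fit results from the talk.
components = {
    "dark energy": 0.69,
    "dark matter": 0.26,
    "baryons":     0.049,
    "photons":     5.4e-5,
}

# Print the ratio of each neighboring pair of components.
names = list(components)
for a, b in zip(names, names[1:]):
    print(f"{a} / {b} ~ {components[a] / components[b]:.1f}")
```

The first two ratios come out near the factor of five quoted in the talk; the photon density sits a few orders of magnitude further down.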
the best value of the Hubble constant is and what's going on with it. It doesn't really matter whether the Hubble constant is 67 or 73, but what we would like is for experiments to give consistent measurements, and right now we have three or four different ways of accurately measuring the Hubble constant. One, which we've talked a lot about, is the microwave background; it gives us a value around 67. The second, which we'll come back to and talk some about, is using the large-scale structure in galaxies or in the gas, the baryon acoustic oscillations; that also gives us a value of about 67. And the final techniques are classical astronomical techniques, which we'll talk about in a few minutes; these classical techniques give us values like 73. In some ways this looks like pretty good agreement: when I was a graduate student, people argued about whether the Hubble constant was 50 or 100, so seeing it converge to 67 or 73 is significant progress. But if we believe these error bars, this is not consistent. So what's going on here? Well, first let's talk about what people actually measure. We've seen how we measure the microwave background fluctuations: we're basically using the angular scale of the hot and cold spots to measure the distance to the surface of last scattering, and that gives us an integral measurement of the Hubble constant. Let me remind you that the angular diameter distance involves the integral of dz over H(z), so the microwave background is giving us this integral measurement. We see something similar when we measure the fluctuations in the galaxy distribution. This is a plot of the galaxy correlation function weighted by distance squared, plotted against distance; this is the baryon acoustic peak, and by measuring its location I have a ruler, the ratio of the sound horizon to the angular diameter distance. Using different surveys as a function of redshift, you can measure these distances. The line here is the best-fit Planck model, and you
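The integral measurement just described can be sketched numerically. The cosmological parameters and the redshift of last scattering below are assumed, illustrative values for a flat Lambda-CDM model, not the talk's fitted numbers:

```python
import numpy as np

# The CMB constrains an integral quantity: the comoving distance to last
# scattering, chi = c * Integral_0^z* dz / H(z).  Minimal flat-LCDM sketch.
c = 299792.458            # speed of light, km/s
H0 = 67.0                 # Hubble constant, km/s/Mpc (assumed)
Om, Og = 0.315, 9.2e-5    # matter and radiation density parameters (assumed)
OL = 1.0 - Om - Og        # dark energy density, from flatness

def H(z):
    return H0 * np.sqrt(Om * (1 + z)**3 + Og * (1 + z)**4 + OL)

def comoving_distance(z, n=100_000):
    zs = np.linspace(0.0, z, n)
    f = 1.0 / H(zs)
    dz = zs[1] - zs[0]
    return c * dz * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoid rule, Mpc

chi_star = comoving_distance(1090.0)   # distance to last scattering
print(f"chi(z=1090) ~ {chi_star:.0f} Mpc")
```

The point of the sketch is the degeneracy: only the integral is pinned down, so trading H0 against the energy content can leave chi unchanged.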
can see that the baryon acoustic oscillation observations are consistent. What they're measuring, and in many ways what goes into the microwave background positions, is basically a ratio between two numbers. The first is how far a sound wave can move: that depends on the sound speed, and we have to include the expansion, so we integrate c_s dz over H(z), from very early times, whenever we generated the sound wave, to when decoupling took place. This gives us how far the sound wave could move. The other thing we measure is the angular diameter distance between redshift zero and the redshift where we observe. When we look at the microwave background fluctuations we're measuring this at the redshift of decoupling; when we look at the galaxy surveys, we're measuring it at much lower redshifts. It's useful to look at these equations, because if we're seeing a discrepancy between the Hubble constant we measure at redshift zero and what we infer from observations of the microwave background or baryon acoustic oscillations, what it says is that we've got to change the behavior of H(z), and that means changing our assumptions about the evolution of the density of the universe. This again shows the current state of play with the data: a bunch of different microwave background experiments all show the same pattern. If you want to be consistent with the CMB observations, you want a value of the Hubble constant close to 68; the local measurements are giving us values like 72, and they're not really consistent with that. How do those local measurements work? Well, there's a whole bunch of different approaches to measuring the Hubble constant, using a lot of different astronomical techniques, and I put this plot
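The "how far a sound wave can move" half of the ruler, the comoving sound horizon, can be sketched the same way; the parameters below are again assumptions rather than fitted values:

```python
import numpy as np

# Comoving sound horizon r_s = Integral_{z*}^{inf} c_s(z) dz / H(z),
# with c_s = c / sqrt(3 (1 + R)) and baryon loading R = 3 rho_b / (4 rho_g).
c, h = 299792.458, 0.67
H0 = 100.0 * h
Om, Ob = 0.315, 0.049                 # matter and baryon densities (assumed)
Og = 2.47e-5 / h**2                   # photons
Or = Og * (1.0 + 0.2271 * 3.046)      # photons + massless neutrinos
OL = 1.0 - Om - Or

def H(z):
    return H0 * np.sqrt(Om * (1 + z)**3 + Or * (1 + z)**4 + OL)

def sound_horizon(z_star=1090.0, z_max=1e7, n=200_000):
    # integrate c_s / H from decoupling back to very early times
    zs = np.logspace(np.log10(z_star), np.log10(z_max), n)
    R = 0.75 * (Ob / Og) / (1.0 + zs)
    f = (c / np.sqrt(3.0 * (1.0 + R))) / H(zs)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zs))   # trapezoid, Mpc

r_s = sound_horizon()
print(f"r_s ~ {r_s:.0f} Mpc")
```

This is the ~150 Mpc standard ruler; the observed angular scale is then the ratio of r_s to the angular diameter distance.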
out to show you that, although I'll focus on one particular approach, there are actually a bunch of different ways in which people measure this. The one that I want to focus on here is the one that I think is in many ways the cleanest. We begin by measuring parallax distances to stars. Let me remind you how parallax works; we can all do this experiment right now. Hold up your hand with your finger up (come on, this is our one chance to do experiments here), close your left eye, then close your right eye, and you'll notice your finger moves relative to the background. This is the classic way astronomers measure distance: we know the radius of the Earth's orbit, we observe the position of a star in the sky in March and in September, and we watch the star move against the background. With the European Gaia experiment, which is now reporting its results and put out its first results in November, we're now measuring these angular displacements with accuracies of order 10 microarcseconds, and that gives us accurate distances out to scales of about 10 kiloparsecs, so we can now measure distances to stars directly throughout much of our galaxy. That lets us measure the distance to a class of stars called Cepheid variables. Cepheids have been studied for over a hundred years, beginning with classic work by Henrietta Leavitt, who showed that these variable stars, whose brightness oscillates with time on periods of days to a couple of months, have a very straightforward relation between their period and their luminosity: if you measure a Cepheid's period, you know its intrinsic luminosity. We can calibrate this relation using our astrometric measurements: we use our parallax measurements to get the distances to nearby Cepheids and determine their luminosity, then observe Cepheids in nearby galaxies (this has been done a lot with the Hubble Space Telescope), and that gives us the distance to those nearby galaxies. That gets us beyond the local group. To measure the Hubble constant, we then observe
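The parallax arithmetic from the first rung of the ladder is simple enough to write down; the 10-microarcsecond figure is the one quoted in the talk, and everything else is just the standard unit convention:

```python
# Parallax in a nutshell: a star at distance d parsecs shifts by
# p = 1/d arcseconds as the Earth crosses its orbit.  With an assumed
# 10-microarcsecond uncertainty, 10%-accurate distances reach ~10 kpc.
def parallax_distance_pc(p_arcsec: float) -> float:
    return 1.0 / p_arcsec

sigma_p = 10e-6                               # 10 microarcsec, in arcsec
d_limit = parallax_distance_pc(10 * sigma_p)  # where sigma_p / p = 10%
print(f"10% distances out to ~{d_limit / 1e3:.0f} kpc")
```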
supernova explosions in these nearby galaxies, and for these supernovae, once you measure the light curve, you know the intrinsic luminosity. So we can now calibrate the intrinsic luminosity of supernovae, take the next step in the distance ladder, and measure the distance to distant galaxies. That's the approach that gives us a value of about 73. So what's going on here? Either there's some systematic in the classical astronomical techniques, which have had a hard time achieving the precision of the microwave background measurements, so that's certainly possible; or there's some systematic in the microwave background experiments, and that's something we best check by making more measurements, which is what we're doing right now. We are making temperature and polarization measurements with higher angular precision with the ACT experiment, and we actually have the data in hand now. What we hope to achieve (we're hoping to finish our analysis by the end of the year, but things tend to take a little longer) is an independent measurement of the Hubble constant using the polarization data. That will either be 68 or 73 or something else; we're blinding ourselves to it, so I don't know what the number will be. So we'll either check the CMB experiments or find new physics. [Question about BBN.] BBN doesn't really probe H_0; it probes the baryon density. BBN gives us a measurement of Omega_b h^2. Big Bang nucleosynthesis is sensitive to the Hubble constant at a redshift of 10^10, and we know how to evolve that to today if we know the composition of the universe. So let me remind you; I'll switch to the whiteboard at this point. The Friedmann equation says H^2 = (8 pi G / 3) rho, where rho is the sum of the densities of all the different components. One piece is the density in photons, which goes as a^-4; another is the density in all the relativistic species,
anything that's relativistic, so that goes as the number of neutrino species times T_neutrino to the fourth power, and it also scales as a^-4. If the standard model were all there is, N_nu is 3, but the early universe was a wonderful accelerator, so one possibility is that other light particles we've not yet identified were produced in the early universe; if they decoupled at an early time, they would contribute to N_nu, so theoretically N_nu could be greater than 3. Then there is the baryon density, which goes as a^-3, as does the matter density. These two abundances we've actually measured pretty well, so we know how those pieces behave, and we've measured the microwave background temperature, so we know the photon piece well. So N_nu is uncertain, and what's also uncertain is the behavior of the dark energy. We often talk about the dark energy's behavior in terms of its equation of state w, where its density goes as a^(-3(1+w)): if w = 0 it behaves like matter, and if w = -1 it's a vacuum energy, a cosmological constant. So within the context of the things people like to play with, N_nu is not known and w is not known, and the games people have been playing as potential ways of explaining what's going on are to suggest that maybe there are more light neutrino species, or maybe w differs from minus one. Changing the number of light neutrino species also has the effect of changing when we undergo the transition from being dominated by radiation to being dominated by matter, and if you recall, when we look at the transfer function for the dark matter, it falls off roughly as (k/k_eq)^(-2) at large k. If I change the ratio between the relativistic pieces and the matter pieces, that changes k_equality, and that changes the amplitude of large-scale structure. And if I push N_nu up to say 3.4 or 3.5, which does help reduce the
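The whiteboard Friedmann equation, with the extra-relativistic-species knob N_nu, can be sketched like this; the density parameters are illustrative assumptions:

```python
import numpy as np

# Friedmann equation H^2 = (8 pi G / 3) * sum_i rho_i, written in terms of
# density parameters.  Extra light species (N_nu > 3) raise H(z) in the
# radiation era, which is what shifts matter-radiation equality.
h = 0.67
Og = 2.47e-5 / h**2        # photon density parameter (assumed)
Om = 0.315                 # baryons + dark matter (assumed)

def E(z, N_nu=3.046):      # E(z) = H(z) / H0
    Or = Og * (1.0 + 0.2271 * N_nu)   # 0.2271 = (7/8)(4/11)^(4/3)
    OL = 1.0 - Om - Or                # flatness fixes the dark energy
    return np.sqrt(Om * (1 + z)**3 + Or * (1 + z)**4 + OL)

# An extra species noticeably raises H around recombination:
for N in (3.046, 4.0):
    print(f"N_nu = {N}: H(z=1100)/H0 = {E(1100.0, N):,.0f}")
```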
tension, it changes the way I interpret the microwave background and large-scale structure data, because r_d is the integral of c_s dz over H(z): if I change H(z), that changes r_d and brings the CMB and BAO into line with the H_0 measurements, but if I fix things there, that changes the amplitude of large-scale structure and makes the amplitude of galaxy fluctuations no longer consistent with the amplitude we infer from the microwave background. So changing things that way doesn't seem to help. The other possibility is to change the angular diameter distance, and you need to do that at relatively late times: the angular diameter distance, say for the baryon acoustic oscillations, is just the distance from here to, say, redshift 0.2, and that means changing things when the universe is dark energy dominated. If you want to fit the data, you actually end up with w less than minus one. That's a very funny set of properties for the dark energy: it implies that the energy in the vacuum is growing with time, and it implies that the universe's rate of acceleration is growing. If this continues, the universe accelerates ever faster, and eventually it accelerates so rapidly that it tears apart galaxies, and then every atom, and then every nucleus gets torn apart. This is called the big rip, and it's pretty awful, both in its theoretical implications for the underlying physics and for our ultimate fate. I was interviewed about this by New Scientist around November 1st, and what I told them about the big rip was that I viewed it somewhat like the Trump presidency as seen from November 1st: physically possible, frightening to contemplate, but not likely to happen. Based on my predictive abilities, I fear that the dark energy may have w less than minus one. However, here we actually have some observational data that seems to contradict this, because we can make measurements of angular diameter distance or luminosity distance using supernovae, and when you include that data, the
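The late-time effect of w on the low-redshift distance can be sketched as well; the parameter values here are again assumptions:

```python
import numpy as np

# Dark energy with equation of state w has density scaling as
# rho_DE ~ (1+z)^(3(1+w)); "phantom" w < -1 means the density grows with
# time.  Sketch of the effect on the low-redshift comoving distance.
c, H0, Om = 299792.458, 67.0, 0.315   # assumed values

def H(z, w):
    return H0 * np.sqrt(Om * (1 + z)**3 + (1 - Om) * (1 + z)**(3 * (1 + w)))

def chi(z, w, n=20_000):              # comoving distance in Mpc
    zs = np.linspace(0.0, z, n)
    f = 1.0 / H(zs, w)
    return c * (zs[1] - zs[0]) * (f.sum() - 0.5 * (f[0] + f[-1]))

for w in (-1.0, -1.2):
    print(f"w = {w}: chi(z=0.2) = {chi(0.2, w):.1f} Mpc")
```

With w below minus one, H(z) is smaller at low redshift for the same H0, so the distance to redshift 0.2 comes out longer; that is the knob being turned to reconcile the two H0 values.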
values of w you need to resolve the discrepancy between the observations and theory don't seem to fit the data. So we're in an intriguing place with these H_0 measurements: we don't seem to have an obvious way of changing the theory, an obvious tweak to the model, an epicycle we could add, that would make these two measurements consistent. I think there are next steps for everyone involved. For theorists, we need to think about whether there are other ways we could relax our assumptions about how we interpret the data, things we assume about the evolution of the universe, that might make these two sets of observations consistent. The experimentalists working on the microwave background need to measure it yet again, in different ways. And the people doing the classical astronomical tests need to check those in various ways for self-consistency. The Gaia results are being released in a series of data releases; the next one will happen in April, and when that happens we'll be able to get direct astrometric measurements to a much larger number of stars, which will hone those measurements. If I had to bet, I would bet at this point that the classical astronomical distance ladder is off by a few percent and that, when it's properly calibrated, everything will fit together nicely; but given my previous success at prediction, I wouldn't guarantee that's right. [Question about error estimates.] When we quote the error estimates for a given cosmological parameter, we do that in the context of a particular theory. So we say: let's assume three neutrino species and w = -1, and then predict what happens; the values you saw carry those error bars. The game that people then like to play is to relax that and make models where you add extra neutrino species, and there are dozens of papers in the literature where people take the data, all the data is public and
all the codes are public, so it's actually pretty easy to play this game at home: allow extra neutrino species, allow the Hubble constant to vary, and you'll fit an error ellipse to the data that might look like this. The microwave background data alone, if you add extra neutrino species, allows a bigger value, so you can either quote the error here or here. The problem, or the good thing, is that things are connected. So this might be the combination of CMB and Hubble constant measurements; or, it's easier to just walk over here and look at the amplitude of galaxy fluctuations versus the number of neutrino species, where sigma_8 is a conventional way to describe the amplitude of galaxy fluctuations. It's actually one of these old historical definitions: you draw a sphere of eight megaparsecs in radius and ask what the RMS fluctuation is in the amount of matter within that sphere; it turns out to be about the right size to give the amplitude of fluctuations on the scale of clusters. So here's our CMB prediction, these are the 1 and 2 sigma contours, and my cluster observations sit around here: they want the amplitude to be lower. Given the number of clusters we measure and observe, they're kind of consistent with the standard value, but they're already a little low, and if you add more neutrinos you make things inconsistent. So this is how people relax the model uncertainties. All this assumes that the error analyses in the experiments are done correctly, and that's why you want to keep checking them with multiple experiments. [Question: where does N_nu = 3.5 sit?] Well, 3.5 is about here. Yeah, I'd be worried. I usually have a strong prior, based on the null energy condition, that w is greater than minus one; that's my personal theoretical prior. I'm willing to consider other models, but you're violating it, and you
don't like to violate the null energy condition all right I mean you or I feel one of my jobs as a cosmologists is to work to make sure that these deviations either go away or turn out to be five or six sigma so you know it was you know a number of years ago I think you know cosmologists were able to say with some confidence that we think there's dark energy we think the universe is accelerating we have pretty convincing evidence for that took some work to get there our goal now is that I can you know walk over to Noddy and say that no energy condition forget about it the universe that seems to violate that that be if something that you you know I don't want to say that until the data is really compelling and data isn't compelling yet I don't think I don't want anyone to take away that this W is less than minus one but we do have an interesting moment where you know it could usually turn out at this point that these discrepancies go away is the data improves that's usually what happens but every now and then discrepancies get worse and point to new physics and we don't know which way things will go yep yeah yes so that explanation I think can be observation we ruled out so the idea here is were at a special place in the universe that were surround that the density say going out to maybe read should point to around us the mean density here is significantly less than the average density the universe so that local measurements of the Hubble constant differ from the large-scale measurement you know there are two reasons to worry about this one partially theoretical and that you can say what's the amplitude of fluctuations is this required on these scales and it's really large right it's you know order unity or order can a couple tenths of fluctuations on scales of a few hundred megaparsecs and we know what the amplitude of fluctuations are on those scales from measurements the microwave background and large-scale structure and we would have to be living in a very 
unusual place in the universe. But you can actually test this more directly using what's called the kinetic Sunyaev-Zel'dovich effect. If I have a cloud of electrons, and microwave background photons are moving through that cloud toward the observer, and the cloud of electrons is itself moving, it will scatter those photons: if it's moving toward me it produces a hot spot, and if it's moving away from me it produces a cold spot, and the amplitude of the effect goes as the optical depth through the cloud times the velocity. Nature provides us with lots of these clouds of gas; they're called clusters of galaxies. This effect is called the kinematic Sunyaev-Zel'dovich effect, often written kSZ. There's a second effect, called the thermal Sunyaev-Zel'dovich effect: here a photon scatters off hot electrons and gets scattered up in energy, with a fractional change in energy of order (k T_e / m_e c^2) times tau. Since photons get scattered up in energy, it takes the Planck distribution and produces a cold spot at low frequencies and a hot spot at high frequencies. At low frequencies, below the peak, which is where most microwave background observations are made, a cluster actually casts a shadow against the microwave sky, proportional to the optical depth. So we can measure the optical depth this way; we also know the electron temperature from X-ray observations, so we can measure the signal from clusters. We can then turn this around: if the void model were right, all the clusters of galaxies in the nearby universe would be moving relative to the microwave background, so we should see a very significant kSZ effect if we're living in a local void. We've looked for it, and we don't see it. So based on that, and on these normalizations, we can rule out the possibility that we live in a local void significant enough to
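The sizes of the two Sunyaev-Zel'dovich effects follow from the scalings quoted above; the cluster numbers plugged in here are typical illustrative assumptions, not measured values for any particular cluster:

```python
# Order-of-magnitude amplitudes of the two SZ effects for a massive
# cluster (all input numbers are illustrative assumptions).
tau   = 5e-3       # optical depth through the cluster
v     = 300e3      # peculiar velocity along the line of sight, m/s
c     = 3.0e8      # speed of light, m/s
kT_e  = 5.0        # electron temperature, keV
me_c2 = 511.0      # electron rest energy, keV

dT_kSZ = tau * v / c           # kinetic SZ: Delta T / T ~ tau * v / c
y_tSZ  = tau * kT_e / me_c2    # thermal SZ Compton-y ~ tau * kT_e / (m_e c^2)
print(f"kSZ Delta T/T ~ {dT_kSZ:.1e},  tSZ y ~ {y_tSZ:.1e}")
```

Both come out at the few-times-10^-6 to 10^-5 level, which is why a coherent bulk flow of all nearby clusters, as a local void would require, would have stood out in the kSZ searches.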
explain the Hubble constant discrepancy. [Question about supernovae.] Supernovae are being used in two somewhat related ways in cosmology. One is using supernovae to calibrate what people call the Hubble diagram, the relationship between luminosity distance and redshift. The supernovae first became famous for the fact that measurements out at redshifts of one half and greater showed deviations from what you expect in an Einstein-de Sitter, Omega_matter = 1 universe: the supernovae were systematically dimmer than you'd expect. These are the measurements, back in 1997, that led to the observation of cosmic acceleration and the 2011 Nobel Prize. When we're measuring the Hubble constant in the local universe, we're actually relying on observations of supernovae within the nearby universe, at redshifts less than 0.2; this is very recent times, so it's not likely for supernova evolution to play a significant role. [Question about a time-varying G.] I'm actually not familiar with the claim of a variation in G, a Gdot over G, but let's think it through. It looks like I erased the board, but remember that what we're measuring is H(z), and quantities like these integrals over it, so if you change G, you'll change H(z). What we'd like, to fit the data, is to make the dark energy density of the universe larger today than in the past; that's the direction w less than minus one goes. So equivalently, if you made G larger today than in the past, that would go the right way; so potentially, yes, that could be the explanation. I'm not at all familiar with any experiment showing it, but it would be worth thinking through some of the other things it does. If you change G, one of the things I would check, off the top of my head, is this: remember the Poisson equation, that the Laplacian of Phi equals 4
pi G Delta rho, where Delta rho is the density fluctuation. If G were changing with time, that would change the gravitational potential with time, and microwave background photons moving through a time-dependent gravitational potential experience a change in temperature. The thing I would check for, though I have no idea whether it gives an interesting constraint, is that this would predict a correlation between temperature fluctuations and density fluctuations due to the time-variable G; you could stack on galaxy positions and look for that effect. So that's one way I would think about checking for it. A time-variable G does all sorts of fun things: it changes the properties of stars, because a star is basically in a pressure equilibrium between gravity making it collapse and pressure trying to make it expand, so you would change the central temperatures of stars, which changes the way stars evolve. So it's tricky to vary G. I'll close, actually, by mentioning that if you haven't seen the movie Bull Durham, I strongly recommend it. It's about American baseball, a minor-league team in Durham, in which the older baseball player, the catcher, explains to a young pitcher breaking into the major leagues that he needs to know his basic quotes for interviews, for when he's being interviewed as an athlete by a reporter. He's taught phrases like "I'm just happy to be here" and "I just want to help the team," all these quotes people say. The quote that I always think of when I hear questions like this, particularly from the press, when people talk about results on a claim like time-variable G, and the response to give when being interviewed by the press, is: that's important if true. All right, so I'll see you on Friday. [Applause]
Info
Channel: Institute for Advanced Study
Views: 3,115
Rating: 4.9166665 out of 5
Id: puIIAAGEwVM
Length: 92min 45sec (5565 seconds)
Published: Wed Jul 26 2017