I'm excited to be able to come here.
I really wanted to come here first after the announcement, you know, as my new home.
I'll be here soon, so I'm really excited, and I'm excited that you all
are here too! Before I start, I wanted to emphasize that
this was a huge team effort and I know right now in
the media there's a lot of stuff going around like I single-handedly completed the
project, but that's as far from the truth as possible. So I just want to
make sure that everyone knows from the beginning that this is the effort of
lots and lots of people for many years. Wow -
(lights adjust) - maybe that's good. Oh, I also want to say I've been busier
than I thought these last two days, so this might be a little bit more casual.
Okay, so you know, if you go out tonight you might see
the constellation Virgo. And if you zoom in towards the head of Virgo,
there's actually this giant elliptical galaxy called M87, 55 million
light-years away. And if we could zoom in very far towards the center of M87 with
a radio telescope, we would see these jets, these flailing arms of a jet.
And what this jet tells us is that at the heart of it there is
a supermassive black hole - a place where nothing can
escape, not even light. And although we had never been able to see this
black hole before, we see its effects in the jet. What we are trying to
do with the Event Horizon Telescope is see something as small as that little
dot there. We've been trying to image the core, the
immediate area surrounding that black hole. We believe that if we
were to zoom in, we would see light that was dipping around and bending
due to the immense gravitational pull of the black hole.
So, if Einstein was right with general relativity, this light would bend
itself into a ring, in which case you would have a dark spot in the center.
The brightest area of the ring is called the photon ring, where you have
photons that are basically orbiting continually, or in near-continuous orbits.
And this dark area here is called the black hole shadow. That black hole shadow
tells us about general relativity through its size and its shape: a given spin
and mass define what that black hole shadow should look like.
Simulations of the turbulent plasma in the accretion disk and jet around the
black hole predict that, at infinite resolution, we would see this kind of image.
You can see here how the gas is just flowing around, but you have this
bright ring toward the center. So, you know, in 2017 we hooked up an earth-sized
telescope, and two years later we produced this image of the black hole in M87.
We were really excited to be able to show these results on Wednesday and
today I want to tell you more about the experience of making that
first image of the black hole. What makes it so hard? What did we do - how did we reconstruct it?
How did we verify what we reconstructed? And also, what did we learn?
Okay, so the question of what makes it so hard. Maybe if you think about Hubble,
you think of it as producing really high-resolution images. But M87 is a
galaxy 55 million light-years away from us. It's so small that even Hubble
can barely see that jet, that big booming jet - and that's at a galactic scale.
People have been trying to zoom in to M87 for many, many years, but seeing the
shadow requires a really particular kind of telescope, with the
right size and the right observing wavelength. At most radio wavelengths
you can't see down to the event horizon: if the observing wavelength is too long,
the gas around the black hole is optically thick.
For instance, here is a simulation of what you would see at three millimeters,
and as you continue to reduce the wavelength down to around one millimeter,
the gas around it sheds off and you are left
with this ring around the event horizon that we would expect to see.
This ring is really, really small: it's about 40 micro-arcseconds in size, which
is about the same as if you were trying to take a picture of an orange on the Moon.
And just to put it in Hubble terms, here's a one-pixel square of Hubble,
and here you can see the size of the image that we created - it fits inside
a single pixel of the Hubble telescope. So if we plug this
wavelength and the required angular resolution into our equations of
diffraction, you can easily see that the
telescope size we would need is the size of the entire Earth.
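To make that concrete, here is a minimal back-of-the-envelope sketch of the calculation (illustrative only, not the collaboration's analysis; it just plugs the numbers above into the standard diffraction-limit relation theta ~ 1.22 lambda / D):

```python
# Rough diffraction-limit estimate: what dish diameter D resolves a ~40
# micro-arcsecond ring at a ~1.3 mm observing wavelength?
wavelength = 1.3e-3                   # meters (~1.3 mm)
ring_size_rad = 40e-6 / 206265        # 40 micro-arcseconds, converted to radians

required_diameter = 1.22 * wavelength / ring_size_rad
print(f"required dish diameter ~ {required_diameter / 1e3:,.0f} km")   # ~8,000 km
print("Earth's diameter        ~ 12,742 km")                           # same order
```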
And so if we could build an earth-sized telescope, we could just start to make
out this really distinctive ring of light that's indicative of the
black hole's event horizon. But building a single dish telescope the size of the
earth, you know, is impossible. So I've been working as part of this
international collaboration called the Event Horizon Telescope which, by joining
telescopes from around the world, has built a computational telescope the size of
the Earth. It's the first one capable of resolving structure on the scale of a
black hole's event horizon. Joining telescopes in this manner is a
technique called very long baseline interferometry, or VLBI. In VLBI,
all the telescopes in the worldwide network
work together. They're linked through the precise timing of atomic
clocks, and teams of researchers at each of the sites basically freeze
light by recording petabytes of data. Then we ship all this data to one place,
and computers process it together to act like a big earth-sized lens
to make the picture. But, you know, how do we actually make a picture from
disjointed telescopes like this? Well, unlike with a regular camera, in VLBI
we don't actually capture the picture in pixel space, but instead in
frequency space. So we essentially take measurements of the Fourier transform
of the black hole image. And if we put telescopes everywhere all over the globe,
we would sample every point of this Fourier transform, and then it would
be very easy to make an image. But since we only have telescopes at a few
locations we only get a sparse number of measurements. And it turns out that for
every two telescopes in our telescope array, we get a single measurement that's
related to the 2D spatial frequency set by the baseline between the telescopes.
The closer that two telescopes are together, the smaller the spatial frequency is,
and we're measuring large spatial structures. So to measure the fine detail that
you need to see that precise ring, we need to put our telescopes really far apart.
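As a rough illustration of that baseline-to-resolution relation (the baseline lengths here are made-up stand-ins, not the real array geometry):

```python
from itertools import combinations

wavelength = 1.3e-3  # observing wavelength in meters (~1.3 mm)

# hypothetical projected baseline lengths, in meters
baselines = {
    "two dishes on one mountain":   2.0e2,
    "continental baseline":         4.0e6,
    "near Earth-diameter baseline": 1.0e7,
}

for name, B in baselines.items():
    # a baseline B samples spatial frequency ~ B / wavelength, i.e. it is
    # sensitive to angular structure of size ~ wavelength / B
    resolvable_uas = (wavelength / B) * 206265 * 1e6
    print(f"{name:30s} -> ~{resolvable_uas:12,.0f} micro-arcsec structure")

# with N telescope sites you get N*(N-1)/2 simultaneous baselines
n_sites = 6
print("simultaneous baselines:", len(list(combinations(range(n_sites), 2))))  # 15
```

Only the near-Earth-diameter baselines reach the tens-of-micro-arcsecond scales of the ring, which is why the telescopes have to be spread across the planet.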
But the EHT only actually had eight telescopes that we
observed with in 2017, at six different locations. So that's
only six-choose-two, or 15, distinct spatial frequencies that we can measure at
any one time, and that's a pretty small number. But fortunately, as the Earth
rotates we obtain new measurements. Since the baselines between those
telescopes change as the Earth rotates, this amounts to carving out different
elliptical paths in the frequency plane. And this is the UV coverage
of M87 that we had for one of the nights in the 2017 observation.
Okay, well, how do we even get these measurements? I mean, basically we have this tiny
little signal riding on a huge amount of noise. And so we get it first by
recording hundreds of terabytes of data at each of the telescope sites.
So much data that we actually have to fly it back to a common location -
which is very hard when we're collecting data at the South Pole:
we have to wait for their winter to be over. Then, at that common location,
we use this special-purpose supercomputer called a correlator, which combines the data
using the precise timing from those atomic clocks. We make sure of this because we
really need to know that time delay between the signals. And once this is done
this is then passed on to a calibration stage, which tries to find a
weak signal hidden in that correlated output by solving for things like the
absolute phase of a single telescope over time. And this is able to turn
a weak signal into a stronger signal. Developing this calibration pipeline
was a unique challenge: although these ideas have been around for a while,
developing them for the short millimeter wavelengths that we have to work with at the
EHT was a huge project. And I just want to call out Lindy Blackburn, who really spearheaded this.
And also Maciek, Sara, and Michael who also were really instrumental in
getting this part working. If it wasn't for this we would have no data to make images from.
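To give a feel for what that correlation step is doing, here is a toy illustration (a deliberately simplified sketch, nothing like the real correlator): a weak common signal, buried in independent noise at two sites, pops out when the two recordings are cross-correlated at the right relative delay.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_delay = 137                               # relative delay, in samples (made up)

common = rng.normal(size=n + true_delay)       # the weak signal both sites record
site_a = common[true_delay:] + 5 * rng.normal(size=n)   # per-sample SNR is tiny:
site_b = common[:n] + 5 * rng.normal(size=n)            # noise power is 25x the signal

# Try a range of relative delays and keep the one where the two recordings line
# up; correlating many samples pulls the weak common signal out of the noise.
lags = np.arange(300)
xcorr = [np.dot(site_a[: n - lag], site_b[lag:]) for lag in lags]
print("estimated delay:", int(np.argmax(xcorr)), "samples; true:", true_delay)
```

In the real system, the atomic-clock timestamps provide the starting guess for this alignment, and the residual delays and phases are what the calibration stage then has to solve for.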
Okay, so at this point, we have the data, and then we can
abstract away basically all the astrophysics of the problem, and kind of
just think of it as a purely computational imaging problem.
We have sparse noisy data and our challenge is to find the image that actually caused it.
As I said if we had measurements everywhere, if we had telescopes all over the globe, we would
sample every point on that frequency plane and this problem would be really trivial:
you would just simply need to apply the inverse Fourier transform, at least in
the case that the data weren't noisy. But because we only have a few samples,
there's actually an infinite number of possible images that are
perfectly consistent with the data that we do measure.
And so, how do we actually deal with this? Well, the traditional method that has been
around since the '70s is a method called CLEAN. CLEAN kind of works by
assuming that the data is really sparse, and it puts a zero everywhere we haven't observed data.
Then, by simply applying the inverse transform to these measurements,
the method obtains a very noisy, artifact-heavy reconstruction.
It doesn't really look like the original image at all, but it has somewhat the same shape.
At this point the method says, "Okay, how do I clean up this image?"
It does that by assuming that the underlying source is just a bunch of point sources.
So it iteratively searches for the brightest point in the image, records a point
source there, and then removes the artifacts that such a point source would
produce due to the incomplete sampling in the frequency domain.
And then this image, after you've found all these point sources, is
blurred to merge the points into an extended source. So, as I mentioned, this is
kind of the default method used to solve these problems.
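For the curious, here is a minimal Hogbom-style CLEAN loop, just to make the verbal description concrete. It is an illustrative sketch, not the DIFMAP implementation, which adds weighting, clean boxes, and much more:

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, n_iter=1000, threshold=1e-3):
    """dirty: inverse FT of the zero-filled visibilities (the "dirty image").
    psf:   inverse FT of the sampling pattern (the "dirty beam"),
           peak-normalized, with its peak at the array center."""
    residual = dirty.astype(float)
    model = np.zeros_like(residual)
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
    for _ in range(n_iter):
        # 1. find the brightest remaining point (CLEAN assumes point sources)
        y, x = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[y, x]
        if abs(peak) < threshold:
            break
        # 2. record a fraction of it as a point-source component ...
        model[y, x] += gain * peak
        # 3. ... and subtract the sidelobe pattern this point source would
        #    imprint on the image through the incomplete frequency sampling
        for dy in range(psf.shape[0]):
            for dx in range(psf.shape[1]):
                iy, ix = y + dy - cy, x + dx - cx
                if 0 <= iy < residual.shape[0] and 0 <= ix < residual.shape[1]:
                    residual[iy, ix] -= gain * peak * psf[dy, dx]
    # 4. in a real pipeline the point-source model is then convolved with a
    #    smooth "clean beam" to merge the components into an extended source
    return model, residual
```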
And this method actually works pretty well out of the box when there are a lot of
telescopes and when you're observing at longer wavelengths, where you can really
calibrate your data. But for the short wavelengths that
the Event Horizon Telescope operates at, and for a small number of telescopes,
this method starts breaking down.
Why? There are a couple of reasons.
One of the primary ones is due to the atmosphere.
The reason VLBI is able to work in the first place is that light from
the black hole travels for 55 million years and then
reaches the Earth as a plane wave. It reaches one of the
telescopes slightly before the other, and this time delay is a really key measurement that we use for imaging,
for extracting that 2D spatial frequency. However, the atmosphere causes random extra delays
in each of the signals, which leads to a completely random phase in our measurements. In addition to that, the atmosphere also causes
different attenuation factors in the signal. I mean, you're going to have
different kinds of cloud cover above Hawaii than over Chile, so you're going to
have different attenuation, and you're also going to have a different
absolute gain term. And on top of all of that, the measurements
can have additional problems with these gains due to things
such as pointing errors, being out of focus, having astigmatism, or just
problems with the electronics. And it turns out that for the EHT, these were
actually particularly a problem at the LMT - which, unfortunately, is where I was.
This telescope was observing while it was still being commissioned.
It wasn't completed yet, so a lot of things you take for granted on a
regular telescope - for instance, being able to point your telescope -
we kind of had to come up with on the spot. So what we did instead is,
every time we wanted to try to point, we just raster-scanned the telescope,
and we got these terrible total-power signals, and then we had to figure out:
"okay, in this image, where is the source?" (laughter) Yeah, pretty bad! Then we came up with this
matched-filtering algorithm that allowed us to do this. We did it, and we were able to point
fairly well at a lot of the sources. But for a source as weak as M87,
it was really a problem, and so we had pretty terrible pointing and as we found out
later on the gains of our telescopes were really bad. So the gains should be
pretty close to one - just jittering around one - and for the LMT they were
just off by like almost 100 percent sometimes. So it was a big challenge to
deal with this data. So, you know, with all of these problems,
the errors turned out to be pretty bad. If you look at it,
it's essentially as if we have no absolute phase information and no absolute amplitude information.
So what are you supposed to do? If you try to do something with that data
and just take the inverse Fourier transform - remember, before, in the simulated example,
it looked somewhat like the image on top -
here it's all scrambled, so it's very hard to figure out what to do.
But if you notice, these individual terms - the phis and the gains "g" here -
are actually station-based, while our measurements are pair-based.
So, for instance, if we added a third telescope, then it would share
some of those same gain and phi terms with the second telescope,
and that allows us to solve for a smaller set of
calibration terms during our imaging. And so we had to develop two
classes of algorithms, which we then explored in our work, in order to deal
with these particularly bad calibration errors. I'm not going to
go into the details of these, but I'm going to give you the flavor of them.
So the first is inverse modeling, which is based upon the CLEAN algorithm
that I talked about. Basically, CLEAN works as I said,
except you can't run it normally when you have all these crazy phases and gains.
So what we do instead is solve for an image, then fix that image and solve for the
gains and calibration terms that would best fit the current image.
Then you iterate back and forth. This was really a good algorithm for us
to have used for a couple of reasons - mostly, I think, because
it is the traditional algorithm used in radio interferometry, and we needed to
make sure that, even though we came up with new methods, older methods would still
be able to get the same thing. But a disadvantage of this method is that,
because it is really solving for an image and then fixing that, it can get
stuck in local minima, and so it needs a lot of guidance from
knowledgeable users. And I think I skipped over it, but you usually have users
put down little boxes, called clean boxes, of where you're going to put the light.
So it's guided a lot by the human, the user who is running the algorithm.
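Here is a toy numerical sketch of that back-and-forth (illustrative only: the real pipelines use CLEAN with human guidance for the imaging step, whereas this sketch swaps in a bare least-squares fit so it stays short):

```python
import numpy as np

rng = np.random.default_rng(1)
npix, nstations = 8, 6
pairs = [(i, j) for i in range(nstations) for j in range(i + 1, nstations)]  # 15 baselines

# toy "true" 1D sky and unknown station gains (amplitude and phase errors)
x_true = np.array([0.0, 0.0, 1.0, 3.0, 2.0, 0.5, 0.0, 0.0])
g_true = np.exp(0.4 * rng.normal(size=nstations)
                + 1j * rng.uniform(0, 2 * np.pi, nstations))

# each baseline samples one spatial frequency of the image (repeats are fine)
freqs = np.resize(np.arange(npix // 2 + 1), len(pairs))
F = np.exp(-2j * np.pi * np.outer(freqs, np.arange(npix)) / npix)
vis = np.array([g_true[i] * np.conj(g_true[j]) for i, j in pairs]) * (F @ x_true)

def chi2(x, g):
    G = np.array([g[i] * np.conj(g[j]) for i, j in pairs])
    return float(np.sum(np.abs(vis - G * (F @ x)) ** 2))

g = np.ones(nstations, dtype=complex)
for it in range(31):
    # (a) imaging step: best-fit image given the current gain estimates
    G = np.array([g[i] * np.conj(g[j]) for i, j in pairs])
    x = np.linalg.lstsq(G[:, None] * F, vis, rcond=None)[0].real
    # (b) calibration step: update each station's gain to best fit that image
    model = F @ x
    for s in range(nstations):
        num, den = 0j, 0.0
        for k, (i, j) in enumerate(pairs):
            if i == s:        # vis_k ~ g_s * conj(g_j) * model_k
                num += vis[k] * g[j] * np.conj(model[k])
                den += abs(g[j] * model[k]) ** 2
            elif j == s:      # vis_k ~ g_i * conj(g_s) * model_k
                num += np.conj(vis[k]) * g[i] * model[k]
                den += abs(g[i] * model[k]) ** 2
        g[s] = num / den
    if it % 10 == 0:
        print(f"iteration {it:2d}: chi^2 = {chi2(x, g):.3e}")
# This alternation can stall in local minima, which is exactly why the real
# pipelines lean on guidance from experienced users.
```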
And so a second class of methods is something we've been
developing more recently. I've been doing this primarily with - there's a
number of people - but primarily with Michael Johnson, Andrew Chael, and Kazu Akiyama.
We've been developing methods that take a more Bayesian
kind of approach to the optimization problem. Here we're not
trying to find some sort of inverse function that takes us directly from
the measurements to an image; instead, we try to find a picture that both fits the
measurements and is likely under some kind of prescribed function
of what makes a likely image. And then we kind of use some sort of
gradient descent approach to solve for the image. So the disadvantage here is
that we have to define what is a likely image. You know, we have to impose some
sort of information that can bias our image, just like how the human could bias it
in the clean method. But the really big advantage of this method is
that we can incorporate the different types of errors that we'd
expect into our likelihood term. And we do this in a couple of ways, but the main way
that we do it is by incorporating what are called closure quantities.
These are quantities that are actually invariant to these calibration terms, so
for a closure phase, if you take three telescopes in a closed loop
and you multiply their visibilities - they're complex visibilities, so you're
adding their phases - the additional phase terms due to the atmosphere
cancel out completely, and you're left with a term that is the same as if you
didn't have any atmosphere. Similarly, in something called a closure amplitude -
if we multiply and divide the measurements of four telescopes in a
certain order, we obtain a term where the gains cancel out completely and we're
left with the term as if the gains were all one.
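Here is a tiny numerical sketch of why these closure quantities are immune to station-based corruption (illustrative only; in the newer methods, terms like these are what go into the data-fit part of the optimization described above):

```python
import numpy as np

rng = np.random.default_rng(42)

# an arbitrary Hermitian set of "true" visibilities V[i, j] between 4 stations
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
V_true = A + A.conj().T

# corrupt them with unknown station gains/phases: V_obs[i,j] = g_i * conj(g_j) * V[i,j]
g = np.exp(0.5 * rng.normal(size=4) + 1j * rng.uniform(0, 2 * np.pi, 4))
V_obs = np.array([[g[i] * np.conj(g[j]) * V_true[i, j] for j in range(4)]
                  for i in range(4)])

def closure_phase(V, i=0, j=1, k=2):
    # phases add around a closed triangle, so the station phase terms cancel in pairs
    return np.angle(V[i, j] * V[j, k] * V[k, i])

def closure_amplitude(V, i=0, j=1, k=2, l=3):
    # in this ratio of four baselines every station amplitude appears once on
    # top and once on the bottom, so the gain amplitudes cancel
    return np.abs(V[i, j]) * np.abs(V[k, l]) / (np.abs(V[i, k]) * np.abs(V[j, l]))

print(closure_phase(V_true), closure_phase(V_obs))          # identical
print(closure_amplitude(V_true), closure_amplitude(V_obs))  # identical
```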
Both of these closure quantities are not things that were developed in the
last couple of years; they've been around for a while. But they were traditionally
used mostly for calibration purposes, when you're calibrating the data before
imaging. Here we tried to put them directly into the imaging process,
so we do calibration at the same time as imaging. What you can do then
is have methods where you don't need any calibration whatsoever, and you
can still get pretty good results. Here on the bottom...
At the top is the truth image and this is simulated data as we're increasing
the amount of amplitude error. You can see here - it's hard to see but - it breaks down
once you add too much gain error. But if we just use closure quantities,
we're invariant to that. This has actually been a really huge step for the project
because we have such bad gains. I'm kind of trying to give
a glimpse of what we do for methods, but I think one of the most
interesting parts of this project is how we make sure we're not biasing our
images too much. We have this really sparse, really noisy data. We have to
inject something into the problem. We don't want to inject something that's just
getting us back to what we expect to see. So how have we gone about
verifying the images that we have? An old technique that I talked about
a while ago is that you could take many different types of images, and every
different kind of image has its own statistical properties. And one idea is:
okay, well, we don't know what a black hole necessarily looks like, so
let's impose the image properties of many different kinds of images,
and see if changing the type of images changes
the image that we reconstruct. And this was done by
splitting up images into little patches and imposing the statistics of those
patches on the reconstructions. And what we found is that if we had enough data
it didn't matter really what kind of image - you could have all images of dogs,
or all images of buildings, or all images of things from Hubble,
and it didn't matter - you could get the same image if you had enough data.
So, this was an idea that we kind of pushed forward, not through having the patches,
but by saying "okay, let's try to figure out - can we impose lots of different
kinds of image assumptions and pose lots of different kinds of users and
make sure that when they're all independently done we still get the same
images in the end?" And so we did this through a four-step process. First is
through synthetic data tests, then blind imaging, then objectively choosing imaging parameters
without humans in the loop, and then the additional validation
of the images. So the first step was synthetic data tests.
For this we actually had to develop simulation software
to realistically simulate what measurements from the EHT would look like,
with all our different kinds of crazy noise in them -
something that typically hadn't been done before.
So doing this actually helped us improve our methods a lot. We did
this in a number of different ways, but one of the ways that we found
incredibly helpful was through the Event Horizon Telescope imaging challenges.
These imaging challenges allowed a way for methods to
blindly test themselves on synthetic data, to make sure that they could
reconstruct things even though they didn't know what the true image was.
This was an organized effort in the collaboration. We had
a set of people choose some sort of truth image and generate measurements
from our software, and then we passed those off to a set of
imaging teams. Each of these teams would produce whatever they
thought was their best image and they could use whatever software they wanted.
And then these would be passed off to a set of experts that would look at
the images and try to decide "Okay, what are the common features?"
"What do I believe what don't I believe?" "And do we trust the images that we're getting?"
And then the advantage of also having the simulated software is that we
could also look at the true image in the end too.
And we were able to, really as I said, improve our methods a lot.
I just want to show you though an example of how we found this was very useful,
not just for improving our methods, but for understanding our results. So here was a
result of one of the imaging challenges: at the top is the truth image which,
at the time people were reconstructing the data, they didn't know
anything about. And here are five methods.
Basically, from looking at this you could try
to figure out which features you believed in and which features
you didn't believe. For instance, you kind of had this crescent
feature in all of these images, but this tail was not in all of them so maybe you
are less confident about a feature like that. And this is how we would find
artifacts. Another thing we did is we also tested
on random things. (laughter) Some people got mad at me, thinking it's a binary black hole or something.
And the reason we wanted to do this is we wanted to make
sure we could see something that's completely unexpected. You know,
in the last one people were kind of expecting you're going to get this ring structure,
but you throw something crazy at them and see what happens! It was really
nice to see that all the methods - although some are better than others -
all kind of recovered the structure, and didn't just recover a black hole shadow. So, based upon these synthetic data tests,
we kind of developed how we were going to approach the M87 data.
We wanted to avoid shared human bias, kind of like how we had done in these imaging challenges,
in order to assess common features among independent reconstructions.
The way we did this is that we took our big group of
people who develop methods or who are knowledgeable users of methods -
there were about 40 of us who either develop or are good at
using these methods - and we split them up into four teams: two teams with more of a focus on regularized maximum likelihood methods,
and two on the more traditional methods, although any team could use whatever methods
they wanted. And then we also made sure
we had people from different parts of the globe interacting
with one another. So this was really truly an international effort.
And what we decided is: okay, we don't want to just
image one time and then all compare our images. We want to be able to make sure
that we're ready to compare - that before we show an image, we know
that we've actually fit the data. We developed a website that allowed
people to submit their images, and then it would provide a set of diagnostics
that we could then compare without actually seeing the images.
This proved quite helpful. We actually tested this all out before we even got to M87.
We kept the M87 data separated from us and we worked on AGN
- active galactic nuclei - making sure that this procedure worked.
These are sources that don't look anything like that shadow - you can see the
image here a little bit - and we tested that this procedure would work
and that we would all converge on the same image. So we worked
and practiced for a while, to make sure we were pretty good at this.
In June of 2018, the M87 data was released - I remember it because
it was my birthday. At this point we were actually working with
an engineering release of the data, but it was really nice. It was amazing.
I just remember seeing this dip - which, if you know what a
circle looks like in the Fourier transform, is a Bessel function - and
that was pretty amazing to see. Anyway, we got this data, and then we said,
"okay, everyone go into your separate rooms, don't speak to each other," and
for seven weeks we worked in teams where we weren't allowed
to speak to anybody else. And this is the result. I remember just running into the room, and we all pressed "go" on our laptops
at the same time. We had prepared our scripts so that no one person would
get the image first. As we watched the images appear on our screens
it was really amazing. This is what we produced on Team One -
the team I was in - at the end of the first day. And then, you know, that wasn't enough.
We wanted to make sure: where are those bright spots appearing?
All this kind of stuff. So we worked on it for those seven weeks.
Then, after that amount of time, we all got together at a workshop in Cambridge, Mass.,
and once we felt like we were confident to show each other our images,
we all showed them at the same time. It was a really fun moment,
and this is what they all looked like. So, this was, I think,
the happiest moment I've had in the collaboration so far, because when you're reconstructing there are so many things
that can go wrong - and we were also working with the engineering release of the data at the time.
Just because you get something yourself, you want to make sure
that everyone else is going to get the exact same feature.
Although all the images look different, they all have this common feature.
It's a little hard to see on this projector, but they all show roughly
a 40-micro-arcsecond ring that's brighter on the bottom than at the top.
That was really exciting to see. This is, from that first day we saw them all,
the average of all those images together. So even though we had done this
and we had done this whole blind imaging procedure,
that didn't mean that we still didn't have some sort of human bias in it. Just because we try to avoid shared human bias
doesn't mean that we weren't all thinking "Oh, we want to see a ring.
Let's make a ring out of this data." So then we showed those images
at the end of July, and then we spent the next couple of months basically trying to break our images. The first thing we did is try to objectively choose imaging parameters - in a sense, a very weak kind of machine learning, but we wanted to do it in a way that we could apply to things like CLEAN,
which sit completely outside the traditional machine learning
framework that we have today. So we developed three different imaging pipelines
based upon three different software packages. DIFMAP is a very old package
that was developed primarily around this CLEAN imaging.
EHT-imaging and SMILI were two libraries written in Python that were
developed recently specifically to handle the challenges for the EHT.
In each of these, we chose a set of parameters that we basically wanted to
solve for. Like, "what is the best regularizer weight?" or "what is the best
initial gaussian size?" This is what we try to do, and we did it by
starting with a very small toy data set. But we chose this data set in a certain way.
We wanted to make sure that if you had, for instance, a disc
that if you trained on something like a disc, it wouldn't result in a disc.
you would still get that whole back, and stuff like that.
Oh, and we also added large-scale structure, like that jet,
to deal with other kinds of error we have to handle -
the fact that it's not actually a perfectly compact source. We chose these models
because they all reproduce that dip in the visibility amplitudes
(not in the phase domain) at the same point. They all look kind of
visually the same, and you can see it looks similar to the true data on the top.
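As an aside, the reason that dip is such a tell is a standard Fourier fact: an idealized thin ring of angular diameter d has visibility amplitude |J0(pi d u)|, a Bessel function, where u is the baseline length in wavelengths; this is also what the earlier Bessel-function remark about the engineering data was about. A quick sketch (the real source is a thick, asymmetric ring, so real data only roughly follow this):

```python
import numpy as np
from scipy.special import j0

d = 42e-6 / 206265                     # ring angular diameter: 42 micro-arcsec, in radians
u = np.linspace(1e8, 8e9, 4000)        # baseline lengths, in units of the wavelength

amplitude = np.abs(j0(np.pi * d * u))  # visibility amplitude of an ideal thin ring
first_null = u[np.argmax(amplitude < 1e-3)]
print(f"first 'dip' (null) at ~{first_null / 1e9:.1f} giga-wavelengths")  # ~3.8
```

The location of that first null is set almost entirely by the ring diameter, which is why matching the dip was a sensible way to choose the toy training models.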
And so once we had these, we came up with a way of training on them: we
tried to train on this data in order to find the best parameters to make those images.
So, for instance - this is not the full procedure,
but an example - we took a disc, we generated synthetic data from it
with all those different types of noise, and then we passed that through the
imaging method and saw what came out the other side. We tried to choose the
imaging parameters such that you would reproduce that image - you know, your
normal training setup - and then we transferred those parameters onto the actual M87 data. And what we saw in this case
is that even though we had trained on a disc, and had tried to choose
parameters such that it would best reproduce a disc, we still got that
hole in the center. So it was a nice first test that the data itself required
that hole. But in general we didn't want to just do it on one data set:
we did it on a number of these toy data sets, choosing the parameters
that would best reproduce all of them, and then applied those parameters to the M87 data.
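Schematically, that parameter survey looks something like the sketch below. Everything here is hypothetical scaffolding - the helper names, the grid values, and the stand-in "reconstruction" - while the real surveys ran the full DIFMAP, EHT-imaging, and SMILI pipelines for every combination:

```python
import itertools
import numpy as np
from scipy.ndimage import gaussian_filter

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# hypothetical parameter grid: (regularizer weight, prior Gaussian size in uas)
param_grid = list(itertools.product([0.1, 1.0, 10.0], [20, 40, 60]))

def choose_parameters(synthetic_sets, reconstruct):
    """Pick the single combination that best reproduces *all* the truth images.
    synthetic_sets: list of (data, truth_image) pairs;
    reconstruct(data, params) stands in for a full imaging-pipeline run."""
    score = {p: np.mean([ncc(reconstruct(data, p), truth)
                         for data, truth in synthetic_sets])
             for p in param_grid}
    return max(score, key=score.get)

# toy stand-in demo: the "reconstruction" is just the truth blurred by an amount
# controlled by the second parameter, so the smallest blur should (and does) win
rng = np.random.default_rng(0)
truths = [rng.random((32, 32)) for _ in range(3)]
synthetic_sets = [(t, t) for t in truths]
def fake_reconstruct(data, p):
    return gaussian_filter(data, sigma=p[1] / 20)
print("selected parameters:", choose_parameters(synthetic_sets, fake_reconstruct))
```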
This is, for example, what one algorithm would get on one day. We ended up with
one set of imaging parameters for each method, which was then applied to all
days of data - we had observed M87 on four different nights. Every row is
a different pipeline and the columns are the days, so you can see how the results look different. It's a little hard to see in here, but basically all of these look different, right? They all have different
assumptions underlying them, but we wanted to say: what is consistent among them? What do we really believe?
If you're not familiar with imaging, you might start to over-interpret some
features - things like these, which we know are artifacts that commonly appear.
So we wanted to figure out what we could show everybody that's believable,
without people over-interpreting things.
And so basically we blurred the images to a level such that
they were all consistent in terms of normalized cross-correlation, and afterwards we could average
them all together. And that was the image... I don't know if I have it here - okay, I guess I don't.
But it's the image that we showed. [laughter]
The one at the beginning! [laughter] Once we had these images, then the goal
was to try to validate them even further. Remember, in the first step of imaging -
the blind imaging - we allowed humans to play their role in making the image,
because a lot of times with VLBI data it's very hard to get something
that works right off the bat. We have a lot of problems - ones I didn't even
discuss, like bad data - so it's hard off the bat. But later on, once the data had been improved,
we did this completely automatically, even for things like DIFMAP -
I mean CLEAN - which normally has a human specifically picking locations to put light.
We did it completely automatically, and I think
that's one of my proudest moments, that I got CLEAN to be automatic.
Anyway, we had a number of validation tests. One is that we have four days of data.
So if you look at them independently, using the same parameters on each day,
you can see this ring appears in all of them. So it didn't just appear on one day,
it appeared pretty consistently across all of them.
Okay, so that was a simple one, the easiest one to understand I guess.
But then we also wanted to test something else. Before, we were just choosing one set of
parameters to show an image - we called this the fiducial image -
but that choice was kind of arbitrary. Really, there's a whole set of parameters
that we think are reasonable, not just one.
So what we did is, instead of just solving for one parameter set per method,
we tried to solve for a whole set of acceptable parameters. We did that as follows:
when we ran our imaging method, if the normalized cross-correlation
between the true synthetic image and the image that we reconstruct is larger than
that between the true image and a version of itself blurred to the resolution of our
interferometer - blurred to the resolution of our telescope - then we said, "okay, these are acceptable
imaging parameters." But if the normalized cross-correlation was worse than that,
then it was just a bad reconstruction. This allowed us to go from a huge number of
parameter combinations - hundreds of thousands of them were searched over -
down to tens of thousands of combinations in this final "top set". So here is a slice through the parameter space for each of the libraries: here it is on the synthetic crescent data, and here on the true M87 data.
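In code, that acceptance rule is roughly the following sketch, with a Gaussian blur standing in for "blurred to the resolution of our interferometer":

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ncc(a, b):
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def in_top_set(reconstruction, truth, beam_sigma_pix):
    """Keep a parameter combination if its reconstruction of the synthetic data
    matches the truth at least as well as the truth matches a copy of itself
    blurred to the nominal resolution of the array."""
    threshold = ncc(gaussian_filter(truth, beam_sigma_pix), truth)
    return ncc(reconstruction, truth) >= threshold
```

Every combination that passes this test on the synthetic data sets goes into the "top set" that the later validation steps work with.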
You can see the green boxes are showing the combinations that we had determined
to be good enough to consider. And you can notice, like at the bottom,
we get really terrible reconstructions: just because something fits the data
doesn't mean it actually reproduces the synthetic truth image very well -
maybe it wants to smooth out the flux as much as possible.
And we don't select things like that on the true data. And another thing
that I want to highlight, which I think is cool - it's kind of hard
to see those labels here - is that at the top-left corner, this image has no
regularization apart from positivity and a field-of-view
constraint. So it was amazing that with this data you could get a ring
with just a positivity constraint - saying that
light is positive and can't be negative - and some kind of compact field of view.
For the SMILI pipeline you can see the top-set parameters highlighted
in green, and likewise for the DIFMAP pipeline.
Once we had these we could do things like look at the fractional standard deviation
of our results and see "are we having such a crazy deviation
that this ring ever disappears?" And we found that the fractional
standard deviation was usually small. We would sometimes find a significant
deviation around these knot regions, which we found had to do with
aliasing artifacts from our beam - from the point spread function, basically.
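That check is essentially a one-liner once you have the stack of top-set images (a sketch, assuming the reconstructions are already aligned on a common grid):

```python
import numpy as np

def fractional_std(image_stack):
    """image_stack: array of shape (n_images, ny, nx) of co-registered top-set images."""
    mean = image_stack.mean(axis=0)
    return image_stack.std(axis=0) / (np.abs(mean) + 1e-12)  # large where images disagree
```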
We did a lot of tests on these top-set searches,
and you can look at the paper for more of them. But another thing we did
is validate those gains. Remember, I told you that the gains were really bad,
especially for the LMT. In general, with the phase
calibration you effectively have an absolutely random phase from 0 to 2pi. But for the absolute gains, typically your values are around one,
if you've calibrated reasonably well. As I said, we hadn't for the LMT,
which is a telescope in Mexico. So normally you would still weakly constrain
the absolute amplitudes, so that you don't have to rely only on those closure amplitudes that
I talked about earlier. And if you reconstruct the images...
So here are the reconstructions of M87 from the three pipelines. After we
reconstruct an image, we can then solve for the gains that would best fit those images.
We plot them here, and we did it for both M87 and
a different source called 3C 279, which is an AGN.
You can see that the gains from the different pipelines roughly follow each other.
So that gives us confidence that we are recovering the calibration correctly.
Another thing, though: I told you that we could use these closure
quantities to do the imaging, so rather than
constraining the absolute amplitudes you can just constrain these closure
quantities. If you do this, then you're completely calibration-free -
you don't care at all what the calibration is. And we did this for the
data as well. So here on the left is the image that we get where we
use closure phases, closure amplitudes, and some absolute amplitudes in the
process of imaging. And here's when we absolutely did not let it ever use any
calibrated data and we still got a ring. It's not as pretty of a ring, but it still got that ring out.
So that's really nice, that we weren't reliant on the calibration.
People were afraid of the calibration because, if it was wrong, maybe it was
leading us into a wrong local minimum or something. And then a final thing that we did,
and this is a very brief synopsis, is model fitting. So here we've
made an image - that was the goal so far. But we also want to extract some
parameters from these images, and we also want to check what happens if we fit,
for instance, a very constrained model. With imaging, we allow every pixel to be different;
but let's say we only allowed crescents or rings - you know, "what ring best
fits the data?" and things like that. That was the point of model fitting.
But first we just took the reconstructions from our top-set parameters -
all those tens of thousands of them - and for each one of them we found,
through really simple algorithms, the best-fit circle. Then from this
we plotted a histogram of the best-fit diameter of the ring. We could do
this for lots of different parameters - for instance the asymmetry,
the contrast, all these different things - but here I'm plotting it for the
diameter, and you can see that across all the methods - DIFMAP, SMILI, and EHT-imaging -
and across all the days, they were really quite consistent with one another,
even though they were all done independently. So I think we were recovering this parameter
pretty well. We also did model fitting directly in the
visibility domain. Here we don't have the intermediate step of making an image and then
finding a ring; instead we directly fit for the best-fit crescent.
We did it through MCMC-inspired methods developed by Dom Pesce.
You can see here what it looks like once it's converged.
You can find the diameter from this, and it's about the same: 40-42 micro-arcseconds.
So it was a different approach - basically a more
constrained kind of imaging - where you're really looking directly at
those model-fitting parameters. So, the question is:
What did we learn? It's really nice to make an image,
and it's really beautiful and amazing to see the first image of a black hole,
but we also wanted to extract some science from it. So what are the simplest things
you can extract from this image? One is the mass of the black hole. The diameter of
the lensed photon ring obeys a very simple equation: its angular size goes as GM
divided by (the distance times the speed of light squared), where M is the mass of the black hole,
and there's a factor of about 5.2 relating it to the Schwarzschild radius, because gravitational lensing makes the ring appear bigger. But the thing is, this is only true if
you are measuring that photon ring itself. But as I showed earlier, there's lots of stuff moving around the black hole, depending on what the accretion disc looks like,
so the ring you measure could appear farther out if there's a lot of gas flowing around.
So we needed to figure out what that calibration factor should be.
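Before that calibration, the idealized relation on its own already gives a back-of-the-envelope number. A minimal sketch, assuming the measured ~42 micro-arcsecond ring were exactly the lensed photon ring (diameter of about 5.2 Schwarzschild radii) at the 55-million-light-year distance quoted earlier:

```python
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

theta = 42e-6 / 206265                        # ring angular diameter, in radians
D = 55e6 * 9.461e15                           # 55 million light-years, in meters

# theta ~ 5.2 * R_s / D, with R_s = 2 G M / c^2  =>  solve for M
M = theta * D * c**2 / (5.2 * 2 * G)
print(f"M ~ {M / M_sun:.1e} solar masses")    # ~7e9: same ballpark as the published
                                              # 6.5e9, which used the full simulation
                                              # calibration described next
```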
We did this by taking a huge simulation library. People from all around the world
collected their simulations and then we took a subset of that
and we generated synthetic data from it. And we did all our feature extraction methods
both in the imaging domain and directly in the
frequency measurement domain. Then, once we had extracted
a diameter, we could compare it with the true mass-over-distance value
and calibrate between the two. What we found when we did this is that no matter
whether we did image-domain feature extraction, GRMHD model-fitting - so fitting
directly to those simulations - or crescent model-fitting like I showed you
with that MCMC-style method, they all recovered the same
mass for the black hole, which is about six and a half
billion solar masses. I also just want to call out -
there are a number of people at all of these stages, but some of the key people
who worked on this model fitting and this analysis: Avery Broderick and
a lot of people, a lot of students, but I'll point out Paul, Dom, Feryal, and Jason. Maybe a question you have now is:
Did we prove Einstein was right? The short answer is no, but
we didn't prove he was wrong. He passed another test.
[laughter] I want to give you a sense of what we did,
what we could rule out. So, if you have a non-spinning black hole, then you would expect a photon ring of
five-point-two times the Schwarzschild radius. Okay, backing up: for M87,
there were two prior measurements of what people thought its mass was. It could have been anywhere between three
and six or seven billion solar masses. So there was a huge range:
the roughly seven billion was from stellar orbits, and the three billion was from looking at gas.
Basically, the stellar orbits meant there has to be this much mass inside this
region, but the black hole itself could have been smaller than that. So here we see... For this I'm showing the size for a
six-point-six billion solar mass black hole - the value from the stellar dynamics -
if it's non-spinning. If it's spinning, then that ring
actually shrinks a little bit; this is the region of photon rings that
would be consistent with a Kerr black hole. And this is if you believed it was a
smaller-mass black hole, 3.5 billion solar masses. If you had a 6.6 billion solar mass
wormhole, you would expect a much smaller ring. And if you had a naked
singularity - a super-spinning black hole - then the ring would be about the size of the event horizon radius. And we found that the black hole we imaged fit almost exactly on this 6.6 billion
solar mass measurement. So that means it is very consistent with the previous measurement from the stellar dynamics. It can't possibly be a bigger one
- a bigger mass black hole - because that would require that you had measured something totally different
in the stellar orbits. So from this we find agreement
with the stellar dynamics measurement, which I guess is our best scale for measuring
the mass of the black hole. Another question you might have is:
How is this different than "Interstellar"? I do have one piece of trivia that my friend told me,
although I haven't confirmed it. I've heard that "Interstellar" actually cost
a lot more to make than this picture. [laughter] Anyway, they mostly got it right,
although they did take a few artistic liberties.
They removed Doppler boosting. Doppler boosting means that when the gas is moving
towards you, it's going to look brighter than when it's moving away from you.
I guess they didn't find it visually pleasing, but we can learn something from it:
in our image, the bottom is brighter because the gas at the bottom is moving towards us.
Well, there are a couple of different explanations of where the emission is
coming from - an accretion disk or a jet -
but basically we believe that this stuff on the bottom is moving towards us.
And so from that we could get the direction of the spin of the black hole.
Another really interesting thing is that we noticed, if you stack together
all the images from the different days that were independently reconstructed,
you can see that there's some evolution over the week. We didn't want to emphasize
this too much in our results - because we wanted to be very confident, very conservative,
in what we said - and we don't know exactly where this evolution is appearing
in the image or what's causing it. We really don't know. But we know that it exists,
because if you look directly at the data you can see that from April 6 to April 11
(and two other days around there) there is a large evolution in the closure phases,
which tell you about the structure. So we know that there is evolution, although
we're not confident enough about what it is. You can see in our reconstructions
that we are recovering that change in the structure, but
we might be recovering it in a bad way. So we've done a lot with M87.
We have the 6.5 billion solar mass black hole that we got an image of.
But a lot of people ask: how about Sgr A*? Sagittarius A*, or Sgr A*, is the black hole
in the center of our Milky Way galaxy, and it's also another target for the
Event Horizon Telescope. M87 is great - we got really lucky with M87.
It could have been three billion solar masses and barely a pixel that we could resolve.
We got incredibly lucky. But Sgr A* and M87 tell us very different things.
M87, although I showed that evolution, is so big that it's actually evolving very slowly,
on a period of 4 to 30 days, whereas Sgr A*
has an orbital period of 4 to 30 minutes. That means that over a night
you have a massive amount of evolution, and you can no longer make the assumption
that a single image can describe all the measurements that you see in a day.
So we've been developing methods to deal with this. I want to mention briefly
that seeing this kind of evolution is really important for testing the no-hair theorem.
The no-hair theorem basically says that the space-time around a black hole can be
fully described by three numbers: the mass, the angular momentum,
and the charge. Charge, we don't believe, will be significant for these
astrophysical black holes. Mass shows up very clearly. Angular momentum is really
hard to tell from a single snapshot. So we've been working on recovering videos:
how can we get videos from Sgr A*, rather than just still images, to recover this?
We are also looking towards the future of adding telescopes - actually talking
with JPL about how we can add dishes in space - to fill up our UV coverage,
our measurement space, so that we can recover really nice videos of an evolving
black hole that's changing on the scale of just minutes. As you're reconstructing
the black hole over time, as you see more measurements, this is what the
reconstructed image looks like. With that, I can answer questions.
Question: You talked about the sparseness of the data. How many data points
were these images reconstructed from? It depended on the day, actually.
We usually observed in scans, and a scan is like a five-minute
period of time that you're observing. Sorry, it's a complicated answer.
We record data and then this data is correlated, so initially the measurements are
less than a second long, and we have those throughout the whole night.
But by doing the additional data processing that I mentioned earlier, you can
coherently average them. New methods were
developed in order to do this, and we could average over an entire scan, which
would be a few minutes. So on different days we had different numbers of scans. I don't have the UV coverage here for
everything, but one thing that we found quite amazing was this: the 11th and the 6th had the most scans - like 25
or 26 over the night. So that's 6-choose-2 = 15 frequency samples,
times 26 or so. But April 10th actually had only seven scans - it was super sparse - and we thought there's no way you can get an image with
seven scans. But we got really lucky, I think, that night.
So there was a variety of different data, and
in the regularized maximum likelihood methods we usually used scan-averaged data, but the
CLEAN method used 10-second-averaged data, so it depends on the method.
Oh, you mean outside of that? What we would do is observe over the entire night, or however long we
can observe M87, but we also observed other AGN in between - like
that 3C 279 that I showed you earlier, interleaved with it - so that
we can compare things like gains and calibration parameters. Does that answer
your question?... When I observed, we would have
16-hour observing runs. We were working at 15,000 feet,
so we would actually switch back and forth. So it was 16 hours of continuous observing, but that doesn't
mean that all of that was good data - especially since some sites were not able to
observe very well in daylight. So usually our observing window was like
12 to 16 hours each night. Question: Imaging at these kinds of angular scales seems like it could be really powerful. Are there other classes of sources that you can talk about where this kind of angular resolution might be interesting? Yes, so we are looking at others...
Like that 3C 279: we're learning a huge amount
from it. I don't want to say too much, because they're still in the
process of publishing that. Not only is this the first black hole image,
it's the first image at this kind of wavelength, and different wavelengths tell
you different things about the sources. So we are seeing these other AGN -
active galactic nuclei where we can't see the event horizon or a
photon ring, but we can still learn about the jet and everything. Then we're
also interested in going into space: if you go a little bit farther out than an Earth-diameter orbit,
you will maybe be able to see other black holes. Right now, from what we can do on Earth, we can only see Sgr A* and M87. But if we go to baselines longer
than the diameter of the Earth, then we can start to see
some of these other black holes. Potentially. [Applause]