Threats of global catastrophe
are now routine news - climate change, pandemics, nuclear war,
asteroid impacts. "End-of-the-world"
talk is common discourse - it's no longer limited
to fringe fanatics and street-corner preachers. I've been like most -
recognizing the real issues. Skeptical of the
inflamed rhetoric. But, in private moments, I have
thought it odd that global catastrophe is now threatened
after only a few thousand years of human history -
an eye blink in an almost 14-billion-year-old universe. "Why so soon?" I've wondered. That's when I heard about
an odd scientific argument that claims to justify
end-of-the-world worries. It's called "The
Doomsday Argument" - and I cannot ignore it. What is the "Doomsday Argument"? I'm Robert Lawrence Kuhn
and Closer to Truth is my journey to find out. The Doomsday Argument seems
to rely on pure statistics, assigning grim probabilities
to human extinction - but it makes no reference
to real-world threats. The argument goes something like
this - "because we should be average human beings,
the extinction of human beings could be soon." "Silly," I'd think. But, leading scientists and
philosophers are discussing the argument - I'm taken
aback and I decide to pursue it. I begin with one of the
champions of The Doomsday Argument, author of "The End
of the World: The Science and Ethics of Human Extinction" -
Philosopher, John Leslie. John, you've written
about something called "the doomsday argument," which
is a new way of thinking about humanity and our
likely survival. This "doomsday argument" was
dreamed up by Brandon Carter. It's a very
controversial argument. Let me creep up on it therefore. There are, in the universe, more
suns than there are grains of sand on all the world's beaches. What would you think of somebody
who said life is going to exist all over the universe, scattered
all among these suns, but that the absence of extraterrestrials,
the fact that we haven't noticed any, should be explained as follows:
We are the very first intelligent species to evolve. Later, there will be
trillions and trillions of intelligent species colonizing
the planets, going on to colonize the galaxies. But, we are the very first. This would seem to me
a fantastic hypothesis. It would put us in such
an extraordinary position. I would prefer the hypothesis
that the reason we don't see extraterrestrials is that
they've almost always destroyed themselves as soon as they've
developed advanced technologies. Now, let's look at a
little variant on that same way of thinking. Suppose that we manage
to get off the earth and colonize our galaxy. The galaxy itself has
an enormously large number of stars. If we spread through the galaxy,
then you and I will have died before the spreading
takes place. If the human race spreads
through the galaxy, you and I will have been
enormously early humans in a statistically
extraordinary position. Can we believe in this? Isn't this like believing that
we're the very first intelligent species to evolve in
the entire universe? Which seems such
a stupid hypothesis. If the human race managed to
destroy itself very quickly, then something like one in ten
humans would have been alive at the same time as you and me. Because of the recent
population explosion, that makes us pretty average,
one in ten people. If we manage to colonize the
galaxy, that makes us in the one in a billion, billion class,
something like that. An extraordinary position. This gives a new
reason for pessimism. So, let's put that on the table. The statistical likelihood that we are not this extraordinarily small, early sample of humanity puts a spotlight on our current situation, asking: are we now vulnerable to species-destroying events, either of our own making or imposed upon us, that perhaps we are not aware of? Yeah, I think that what this
doomsday argument does is to magnify any risks which we see. If we're tending to take these
risks not very seriously, having looked into this argument, we
should take them much more seriously than before. And we also ought to take
seriously some risks which we simply haven't thought of. We ought to keep our
eyes open very carefully. The Doomsday Argument is
based on a simple premise: In any situation, unless there
is evidence to the contrary, we should always consider
ourselves average. If the human race would last
millions of years, and spawn trillions of human beings,
perhaps throughout the galaxy, then we would be living
in the very earliest stages of human history. But statistically, that would
seem unexpected - why would we be "not average"? One reason, of course,
is that somebody has to be 'not average'. But, the more likely reason,
goes the Doomsday Argument, is that we are average. Which would mean that we are
not living so early in human history, which would mean
that human history will not be lasting very much longer. My reaction, as with almost
everyone who hears the argument is that there must be something
wrong with it - how could such potent conclusions be deduced
from such flimsy premises? I saw that The Doomsday Argument
was developed independently in different ways. Brandon Carter was the first,
enriched by John Leslie. A second was a professor
of Astrophysical Sciences at Princeton, J. Richard Gott. How did Richard frame
the Doomsday Argument? Well, this started with a trip I
made to the Berlin Wall in 1969. People wondered how
long it would last. Was it a permanent fixture of
modern Europe or would it disappear in the near future? So, I was there with a
friend of mine, and I made the following prediction. I said, "Look, I'll use the
Copernican principle." It's the idea we use in
astronomy that your location isn't likely to be special. As we've discovered, we do not
live in a special place at the center of the universe; we're
going around an ordinary star in an ordinary galaxy in
an ordinary super cluster. The Copernican principle works
because out of all the places for you to be, there are many
non-special places and only a few special places, so you're
likely to be at one of the many non-special places. So, I made the
following argument. I said, "I'll here looking
at the Berlin Wall. I'm somewhere between the
beginning and the end of the Berlin Wall, and if my location
isn't special, there's a fifty percent chance I'm in
the middle two-quarters. And if I'm at the beginning of
that middle two-quarters, then there's one-quarter that's past
already and three-quarters in the future. So, the future is three
times as long as the past. On the other hand, if I'm at the
end of that middle two-quarters, I've got three-quarters of the
past and one-quarter in the future, so the future is
one-third as long as the past." So, there's a fifty percent
chance that you're within the middle two-quarters, you're
between those two limits, and that the future of the wall will
be between one-third and three times as long as its past. So, the wall was eight
years old at the time. So, I said to my friend, "Look,
there's a fifty percent chance it'll last at least
two and two-thirds years, that's eight divided by three,
but less than 24 years, which is eight times three." So, 20 years later, (laugh),
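Gott's fifty percent rule lends itself to a short calculation. The following Python sketch is mine, not Gott's (the function name is invented), but the interval formula is the one his argument implies:

```python
def gott_interval(past_age, confidence=0.5):
    """Gott's 'Copernican' interval: if the moment of observation
    is not special, then with the given confidence the future
    duration lies between past_age * (1 - c) / (1 + c) and
    past_age * (1 + c) / (1 - c)."""
    c = confidence
    return past_age * (1 - c) / (1 + c), past_age * (1 + c) / (1 - c)

# The Berlin Wall was eight years old in 1969:
low, high = gott_interval(8)
# low = 8/3, about 2.67 years; high = 8 * 3 = 24 years
```

At fifty percent confidence the factors are (1 - 0.5)/(1 + 0.5) = 1/3 and its reciprocal, 3, which are exactly the one-third and three-times limits in the prediction.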
in 1989, I called him up. I said, "Chuck, turn on the TV. Tom Brokaw is at the Wall. They're bringing it down today. You remember those
predictions that I made?" So, it came 20 years later,
within the two limits. So, when this turned out,
I thought, well, I should write this up. So, when scientists write up
predictions, though, we like to be 95 percent correct, not
just fifty percent correct. So, how does this
argument change? Well, it says that if you're
looking somewhere between the beginning and the end,
there's a 95 percent chance you're in the middle 95 percent. In other words, not in the
first two and a half percent, not in the last two and a half
percent, but somewhere in this big 95 percent region. So, two and a half percent is one-fortieth of the total. So, if you're over here at this
limit, then this one-fortieth has passed and
thirty-nine-fortieths are still in the future, so the
future's 39 times as long as the past. On the other hand, if you're
way over here, still within the middle 95 percent,
there is one-fortieth in the future and thirty-nine-fortieths
in the past. So, in that case, the future
is one-thirty-ninth as long as the past. So, the 95 percent confidence
prediction is the future of the thing you're looking at will
be between one-thirty-ninth and 39 times as long as you've
been observing it in the past. Okay. So, I thought I'd apply this to
something important - the future of the human race. We've been around
for 200,000 years. That's back to Mitochondrial Eve; that's our species, Homo sapiens. Well, one-thirty-ninth of that is 5,100 years. So, this says, 95 percent sure, we'll last at least 5,100 years, but
less than 39 times that, or 7.8 million years. So, we'll last somewhere
between another 5,100 years and another 7.8 million years. Now, that's very interesting
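The ninety-five percent version of the same rule can be applied to our species in a few lines (again my own sketch, using the round figures from the interview):

```python
# Gott's rule at 95 percent confidence, applied to Homo sapiens.
past_age = 200_000  # rough age of our species in years, per the interview
c = 0.95            # confidence level

# With 95 percent confidence, the future duration lies between:
low = past_age * (1 - c) / (1 + c)   # past_age / 39, about 5,100 years
high = past_age * (1 + c) / (1 - c)  # past_age * 39, 7.8 million years
```

At 95 percent confidence the factors work out to one-thirty-ninth and thirty-nine times the past, matching the 5,100-year and 7.8-million-year figures in the interview.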
because it's calculated solely on our past lifetime
as an intelligent species, but interestingly, that gives
us a predicted longevity of between 205,000 years at the
short end, and eight million years at the long end; it's
quite similar to other mammal species that are here on earth. Their mean longevity
is two million years. And Homo erectus, our
previous species, lasted about 1.8 million years. And the Neanderthals, they
lasted about 300,000 years. So, these numbers
are quite similar. And yet, the calculation's only
based on our past longevity. So, it should give us pause that
our past longevity suggests that we may be in as much danger
as these other species. ...perhaps we should take some
steps to propagate the human race away from earth. Well, yes. One of the things that you
notice is that we're having this conversation on
the earth (laugh). This is not good because if we
don't colonize off the earth, you and I are entirely typical. Everyone born would
be born on the earth. If we colonize the whole galaxy
in the future, you and I would be very lucky to live on the
very first planet when there were like a billion planets
that we colonized. So, it warns us that if our
observations are typical, there is a significant chance that we
would get stranded on the earth, and it's better to
have more locations. So, it's a great life insurance
policy for us to plan a colony on Mars because that
would give us two chances. And so, right now, I mean,
there are lots of threats. So, we're kind of like
on the Titanic and we've got no lifeboats. So, we should have some
lifeboats, it's smart for us to spread out and we don't
have that much time. I follow the argument and see
the logic - but, still, I feel I'm being, well, "tricked"
- with "good spirits", of course - by a
"statistical magician". When arguments seem tenuous,
check their assumptions. The Doomsday Argument is based
on several deep assumptions; one is called the "Observation
Selection Effect" - the position of the observer affects the
results of the observation. In the Doomsday Argument,
human beings living now are the ones assessing the likely
longevity of all human beings at all times. I go to Oxford, England,
to visit a leading expert - on these assumptions,
the Director of the Future of Humanity Institute
- Nick Bostrom. The doomsday argument is strange
because it seems to rely on very weak empirical premises. So, one way we could become
convinced that the world is dangerous and might end soon is
if you studied particular risks and you think about nuclear
weapons and designer pathogens. And you studied the
details of that and you think this looks really scary. But, the doomsday argument is
much more general and it says that whatever the prior
probability of human extinction that you come up with after
you have studied all of these individual disaster scenarios -
you should revise that upward after reflecting on your
position in the human species - conditional on two
different hypotheses. So, let me explain this
by means of an analogy. Suppose you have an urn with
balls in it; and the balls in this urn are numbered from one,
two, three upwards to the total number of balls. And you are not sure whether
the urn contains ten balls or a million balls. It's a big urn and it could
be almost empty or it could be full with balls. So, you've got to guess which
one it is, but you get one clue - you get to pick one ball
from this urn and pick it up and look at it. And so, you do that and you
find that it's number seven. Now, in this urn example,
picking ball number seven gives you very strong evidence that
it's the ten-ball urn rather than the million-ball urn
because it's much more likely you would get number seven if
there are only ten balls than if there are a million. And so, that's uncontroversial,
but here is the analogy: think of the two different urns as two
different hypotheses about what will happen to
the human species. So, one hypothesis is that we'll go extinct soon - maybe in a few decades - and there will have been a total of, I don't know, a hundred billion humans who will ever have lived, from the rise of our species to its end. And another hypothesis is
that we'll survive, you know, colonize the galaxy and maybe
we'll live for millions and millions of years; and there
might've been a total of a hundred trillion humans in total
- just pick two numbers. And suppose that after having
studied this specific risk, you think it's 50/50. And then corresponding to
picking this ball number seven, you think of yourself as a
random sample from all humans that will ever have lived; and
the number is your birth rank - your place in the
sequence of all humans. So, your number would be
around about 70 billion or so - that's how many people
have come before you. And so, the idea is that the
probability that you should find yourself with a birth rank of 70
billion is much greater if there will only be 100 billion humans
in total than if there will be 100 trillion for the same
reason as in the urn case. And so, the conclusion of the
doomsday argument is that whatever the prior probability
of doom soon versus doom late, after reflecting on this,
you should sort of up the probability of doom soon to take account of your low birth rank. So, whatever probabilities may
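Bostrom's urn analogy is a Bayesian update, and it can be sketched in a few lines of Python (my own sketch; the 100 billion and 100 trillion totals are the illustrative figures from the interview):

```python
def update_on_rank(prior_soon, n_soon, n_late):
    """Bayes update on your birth rank, under the contested assumption
    that you are a random sample from all humans who will ever live.
    Any particular rank has likelihood 1/N under a hypothesis with N
    total humans, so the specific rank cancels out of the ratio."""
    like_soon = prior_soon / n_soon
    like_late = (1 - prior_soon) / n_late
    return like_soon / (like_soon + like_late)

# 50/50 prior; 100 billion total humans if doom comes soon,
# 100 trillion if we survive and colonize the galaxy:
p_doom_soon = update_on_rank(0.5, 100e9, 100e12)
# p_doom_soon is about 0.999: the birth-rank update swamps the 50/50 prior
```

The same function reproduces the urn case: update_on_rank(0.5, 10, 1_000_000) comes out very nearly one, the probability that the ten-ball urn was the one you drew from.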
cause human extinction based on nuclear weapons or asteroids
or biological warfare - whatever those probabilities
are, you're saying that the doomsday argument increases
those by some factor. That's right, which depends
on the exact numbers involved. So, that's the idea behind the
doomsday argument; and intuitively it seems it must be wrong
because you seem to get a lot of your information from no
sort of evidence, as it were. And there have been a lot of
attempts to explain why the doomsday argument fails and when
you look more carefully at these attempts to explain why
it's wrong, they tend to fail. Now, my view is that ultimately
the doomsday argument is inconclusive; but not for any
simple trivial reason, but for deep methodological reasons
that have to do with observation selection theory
and the way we should reason about these things. But, the idea in the doomsday
argument that does the work is that in some sense you should
think of yourself as if you were a randomly selected
observer or a random human. And I think there is an
element of truth to that. It's not the silly idea that it might appear to be. But, these arguments that
support that assumption - if you look carefully - don't
strictly support the assumption that you should think of yourself as a random human from all humans that will ever have lived. They support a weaker assumption
that you should think of yourself as a random human from
some suitable set of humans - not necessarily the
whole human species. And if you pick a more narrow
reference class, you can avoid the doomsday argument. It sounds, though, that based
upon peoples' concern already about the possibility of human
extinctions, the doomsday argument at least puts an
additional caution into how humanity should behave. I would put it
slightly differently. If the doomsday argument
were sound, then it would put a huge extra caution. If I'm correct in believing it's
not sound, then it might not actually carry
any weight at all. Now, if we are uncertain
about whether I'm right or not, then we might give it a little
credence and it might affect our probability
estimate slightly. Well, based on what it's talking
about - it's not talking about whether we go on vacation
next month, it's talking about human extinction. We probably should have
a little bit extra caution. My view is that we should
have exactly the degree of caution that the evidence
warrants; and at least if we were rational, there would be
no need to sort of hype up the probability of the disaster,
because even a tiny probability of humanity going extinct ought
to be enough to motivate us - to take whatever action
is needed to reduce it within reasonable limits. I agree with Nick on both counts: The Doomsday Argument is likely not sound. And we must study the
survival of our species. One of the most eloquent
voices warning humanity about existential threats is the UK
Astronomer Royal, the author of "Our Final Hour" - Martin Rees. We sometimes think the
cosmologists, because they think about billions of years, are
somehow serene and relaxed about short-term problems. But, I worry as much as anyone
about what's going to happen next year, next week,
or tomorrow. And I think actually that being
cosmologists gives one a special perspective on these issues
because although most educated people are aware of the billions
of years of the past, leading from simple life on earth to
humanity, they tend to think that we humans are in
some sense the culmination. Whereas one thing which we
learned from astronomy and cosmology, is that the time line
ahead is at least as long as the time that's elapsed up 'til now. The universe may go on forever,
but, even our earth and sun is less than halfway
through its life. Even in the cosmic perspective
of billions of years, this century is very special
and let me explain that by a sort of cosmic vignette. Suppose you were an alien
who'd been watching the earth for its entire four-and-a-half-billion-year history. Over most of that time, change
would have been very slow. The continents would have
gradually shifted, species would have formed, evolved
and become extinct. Ice ages came and went. But then things started
to change more rapidly, about 10,000 years ago
vegetation started to change because of human agriculture. But then, change speeded up
immensely within just 50 or 100 years, 100th of one millionth
of the lifetime of the earth. The carbon dioxide in the
earth's atmosphere started to rise enormously fast, the earth
became a source of radio signals, the integrated
effect of all TVs, radars, mobile phones and the rest. And something else remarkable
happened, for the very first time, projectiles lifted from
the earth's surface, went in orbit around it, some even went
to the moon and the planets. If the aliens had been watching
this, they'd have seen something remarkable happening in
this tiny stretch of time. So, what's going to
happen in the next century? If the aliens keep watching,
what will they see? Will this spasm, less than
halfway through the earth's life be followed by silence,
or will it lead to some new stable situation? And will some of these
projectiles leaving the earth eventually spawn new
oases of life elsewhere? So, that's the challenge for us. So, these are the options,
which we can't predict and these aliens, if they understood
astronomy, could have predicted that the earth was going to
die in a few billion years when the sun flared up and
engulfed the inner planets and vaporized life on earth. But, could they have predicted
this sudden spasm, happening less than halfway through its life, with runaway speed? And I think if you look at
things that way, you realize that what we do here on earth
has an impact that will resonate not just through the life of
our children or grandchildren, but into the far future, here on
earth and perhaps far beyond, because this is a crucial
century even in the perspective of billions of years that
cosmologists talk about. I think if people are aware
that if we were to destroy our civilization, it would foreclose
potentialities of even a post-human era far beyond
the earth as well on it. Then that gives an extra motive
for concern; we are the stewards of this planet at
a special period. The Doomsday Argument warns that
we have underestimated the risk of human extinction. While the argument's simple
statistics seem to extract too much conclusion
from too little evidence, something is going on here. The Doomsday Argument
combines four big ideas - 1. The Copernican Principle
that human beings do not occupy a special location
in the Universe. 2. Our surprising temporal
position in humankind's short or long history. 3. The importance of the
observer in observations. 4. The survival of our species. John Leslie advises taking
existential risk seriously. Nick Bostrom advocates
studying existential risk. Martin Rees puts existential
threats into cosmological perspective and he stresses
the critical nature of our current century. I take another view. I still marvel at the vast
difference between the long ages of the universe and the
short span of human history. After billions of years in
preparation, could humanity's future be decided when we,
and our children, are alive? Could there be "something
special" about our generations, so we'd not be living
in "average times"? Could violating the Copernican
Principle bring us... closer to truth? For complete interviews
and for further information, please visit
www.closertotruth.com