[music] <i> Is this time our time,</i> <i> an axial period
in human history</i> <i> when, one way or the other,
for better or worse,</i> <i> the future of humankind
will be determined?</i> <i> I think it may be.</i> <i> I know that almost
every generation</i> <i> thinks itself special;</i> <i> however, this time, our time,
it may be true.</i> <i> I feel privileged
to be living now</i> <i> when science is making
astounding progress</i> <i>answering foundational questions</i> <i> that were once
sheer speculation.</i> <i> I marvel how much, how
quickly humanity has learned.</i> <i> What will science achieve</i> <i> in another 100 years,
1,000 years, a million years?</i> <i> Colonize the cosmos?</i> <i> Is that our destiny?</i> <i> But will humanity
have that future?</i> <i> What are the risks
if we don't make it?</i> <i> We're right to worry about
nuclear weapons, climate change,</i> <i> and the dangers of
artificial intelligence.</i> <i> Here's what interests me:</i> <i> as science continues
to progress,</i> <i> what happens to
foundational questions?</i> <i> The mysteries of existence
and human sentience.</i> <i> For the future of humanity,
what will science bring?</i> <i> I'm Robert Lawrence Kuhn</i> <i> and Closer to Truth
is my journey to find out.</i> <i> To foresee the
future of humanity</i> <i> through the lens of science,</i> <i> I seek scientists and
philosophers of science</i> <i> who question current belief,</i> <i> who challenge
conventional wisdom.</i> <i> That's why I begin in Banff,</i> <i>in the Canadian Rocky Mountains,</i> <i> at a biennial conference</i> <i> of the Foundational
Questions Institute, FQXi,</i> <i> where physicists
and philosophers gather</i> <i> to discuss the deepest,</i> <i> most fundamental
issues in science.</i> <i> FQXi is no ordinary
scientific society.</i> <i> Here are polymaths,
and paradigm shifters.</i> <i> I speak with the
Scientific Director</i> <i> and co-founder of FQXi,</i> <i> a science visionary and
theoretical cosmologist,</i> <i> Max Tegmark.</i> Max, we're here
at FQXi Conference, your fifth and my fourth... I'll never catch up to you, and we have a
diversity of topics. Our core is quantum
physics, quantum cosmology, but we're expanding. We're talking a
little consciousness, the difference between life and
not life, social implications. As you see science,
from your perspective in terms of humanity's future, what are the kinds of
questions we should be asking? We should be asking what ultimately we want
our future to be like because, you know,
13.8 billion years, entire universe,
it's woken up. It has its conscious entities,
which is wonderful, and we manage to
understand more and more about how our world works which has, in turn, given us
great power through technology. And I'm optimistic that we
can use this technology to create a really
awesome future, if we can win this race between the growing power
of the tech and the wisdom with which we manage it. In the past, we've managed
to have the wisdom to keep up basically
with trial and error. You know, we invented fire,
messed up a few times, so we invented the
fire extinguisher. But with more
powerful technology like nuclear weapons,
synthetic biology, and future of very fast
artificial intelligence, we don't want to learn
from mistakes. We want to get it
right the first time because it might be
the only time we have. And so I think it's <i> very much the responsibility
of our scientists</i> to both engage with the public and talk more about what
sort of future you want, and then figure out
what the pitfalls are and help our fellow
humans figure out how to navigate around them. What are the issues with AI and how are you addressing them? I think artificial
intelligence will be the most powerful
technology ever. First, science helped us replace
our muscle power by machines that could lift heavier
things and move faster, and now, if AI succeeds, we're going to have machines that also replace all
our mental efforts and ultimately do
everything we humans can and even better. Of course, everything
I love about society is the product of intelligence, so if we can amplify
our own intelligence with machine intelligence, there's a great
potential for good, but, needless to say, there is also a lot of things
that can go wrong. We control this planet now, not because we're stronger than
tigers or have sharper claws, but because we're
smarter than them, and if we create machines
that are smarter than us, it's not guaranteed that
we can stay in control. <i> In the shorter term,</i> <i> there are also all
these questions,</i> <i> if we replace ourselves
in the job market,</i> how will we make sure that everybody still
has enough resources to live a reasonable life rather than having some sort
of horrible income inequality? How do we ensure that, even if we can distribute the
wealth from the machines around, people can find meaning
and purpose in their lives? I think we've never
had a technology that poses more basic questions about what we want it
to mean to be human. We've had a poor history, speaking as a member
of the human race, in controlling technologies <i> and we can say things
shouldn't be done,</i> <i>whether it's genetic engineering
or nuclear weapons,</i> and some groups of humans do it and then others feel compelled to at least not
let them take over, and that creates an arms race
in all of these categories. We're seeing the
beginnings of it with AI, with robots that
can kill people, that are under control now. So how can you assure that even though you
have the right ideas it's not a majority vote. You have to get almost
everybody on board, because if there are
outliers or exceptions, that messes up the
whole strategy. -<i> Yes.</i> There's something we
need to do first, namely, identify what are the
really tough questions we need to answer and then do research
to answer them. There's an encouraging
precedent from biotech where, in the '70s, people
were like, hey, wait a minute. Maybe we should think a
little bit about hard questions before we start messing
with the human genome, and a lot of
thought went into this and a lot of good
guidelines came out and we have not had any
terrible disasters since. Right now is precisely the time when we need <i>to start
thinking about this in AI</i> <i> because we're beginning to get
self-driving cars on the street</i> <i> and a lot of jobs
are automating away</i> and so we should start
researching it now, you know, not the
night before we need it. To look out 100 years,
1,000 years, and to force yourself to answer
unanswerable questions, you know, 50 years in the
future everything is like magic, but if you had to go out
there to that period of time, to envision what that world
could look like what would you say? I'm not so interested
in speculating about how fast it's
going to happen, I'm much more interested in
actually getting to work in trying to answer some
of these tough questions that we're going to need
whenever it happens because if you want to win
this wisdom race again, and have wisdom keep pace
with the power of technology, and there's this huge funding in
just making it more powerful, and no funding at all
for developing the wisdom, then rather than try to
slow down the juggernaut, instead, let's
invest in research. What are the tough questions? For example, if you create a very advanced
artificial intelligence system, how do you ensure that it's actually going
to do what you want? How do you make <i>machines
learn what we humans want?</i> Our children learn a lot
from observing our behavior, but we don't quite know
how they do it, how can you guarantee that,
as machines get smarter, they'll still want
the same things, and what do we want anyway? Whose values
should they have? This isn't just for
the nerds to study, this is for
everybody to discuss. [music] <i> Max worries about the
unintended consequences</i> <i> of scientific progress</i> <i> that, given the
exponential growth</i> <i>in artificial intelligence, AI,</i> <i> soon to far surpass
human intelligence,</i> <i> if something goes wrong</i> <i> humanity may not
get a second chance.</i> <i> Should we worry?</i> <i> Does super AI
loom over humanity</i> <i>as these mountains loom over us?</i> <i> Not everyone calls AI
an existential threat,</i> <i> nor overly frets about
the future of science.</i> <i> I speak with an expert</i> <i> on quantum computing
and information,</i> <i> a physicist trained
in philosophy,</i> <i> a unique combination,
Seth Lloyd.</i> Seth, we like talking
about these very abstract topics of existence at our
FQXi Conference, where we are here in Banff. If we look forward
into the future, not just 10 years or 50 years, but 100 years or 1,000 years, what could be the
importance and the meaning of what we're doing today in the seemingly abstract
intellectual exercise? Well, I think we'll find,
in the future, that most of it was
just a bunch of BS. But some of it will not be, and the some of it that is
not will be very important. One of the most
remarkable things that can happen to science is it becomes engineering. I mean, if you look
at quantum mechanics, you know, the quantum mechanics
of electrons and matter became the understanding
of semiconductors which became the creation
of semiconductor amplifiers, of digital computers, which gave rise to a
whole host of things that we would never
ever have believed could have happened
50 years ago. Your smart phone, which is more powerful than a
Cray computer in the 1970s. Yes, and in fact, capable
of even bigger stupidity. One of the main things about
artificial intelligence is it leads to real stupidity. So I think we are
going to see things that we would never have
thought would have application from this kind of pure science
that we're doing right now; it might actually have
engineering applications in the future. We don't know what
those are going to be, but they're likely
to transform the way that we live as human beings. Can you imagine
what those could be? I mean, are we talking
about cyborgs and integration with robots and non-biological intelligences that are merged with ourselves? I mean, what is there... are we disturbing what
it means to be human? Well, I can't predict the
things I have no notion about, but some things that you
can extrapolate about are, you know, as our computers
and smart phones become more powerful and as we get more sophisticated
in programming them using, for example, machine
learning techniques then they're going
to behave in ways that are much more human, and they already are behaving
ways that are very human. I think one of the most
human things a computer can do is really mess with your brain
in a truly unexpected fashion, and computers are doing
more and more of that. I think it's quite likely that, there's a long debate about whether machines
can be intelligent, whether they can be conscious. I should think this debate is,
in some sense, irrelevant because what's
likely to happen is that as they become more smart, better learning at
what's going on with us, better at interacting with us we will simply treat them
as conscious beings. You know, once somebody says, hey, don't turn
off my smart phone, it's got some thinking
and dreaming to do, you know, at that
point it doesn't matter whether it's conscious or not, we're treating them
as conscious. And what would be the
impact on humanity when that happens? Well, we'll have a new friend. Is that good or bad? It could be good,
it could be bad. There's currently a bunch of loose talk
from people like Stephen Hawking and Elon Musk,
saying that computers are going
to take over the world and malign
artificial intelligence will destroy humanity. I think this is just silly. First of all, we're a long way
away from that right now, and rather simple precautions
will prevent it from happening. What about the sense of human
existence in the great cosmos? Will we get to a point where we really understand
more than we do now, or will the awe and mystery
continue to get deeper? Human beings have not changed a
lot in the last 100,000 years from their genetic makeup and the way we
think about things. I don't think things are
going to change that much. And, actually, in some sense, you know, technology just
provides distractions. It's not clear that someone who walks
around with their phone like this is leading a
more spiritual existence than someone who walks around taking in the beautiful water
and the sun, the mountains and, in fact, I think
quite the opposite. [music] <i> Seth sees the human future
not unlike the human present,</i> <i> but with our species adapting
to the ubiquity of robots,</i> <i> super smart and
apparently conscious,</i> <i> will there be a
difference between,</i> <i> say, a super smart GPS</i> <i> and an apparently
conscious humanoid?</i> <i> How many such humanoids
will there be?</i> <i> Will every human own a few?</i> <i> Will humanoids demand their
rights, resist abuse, organize?</i> <i> Can we comprehend the
psychological and social impact?</i> <i> Encompassed by these
forests and rivers,</i> <i> all seems well with the world,
but I fear it is not so.</i> <i> I fear the struggles to come.</i> <i> But there's more to the
future than survival.</i> <i> In the long future</i> <i> will the profound questions
of existence still be pursued?</i> <i> What will be the big questions
of our distant progeny?</i> <i>I ache to know the big questions
of generations to come.</i> <i> I speak with the Associate
Scientific Director of FQXi,</i> <i> cosmologist Anthony Aguirre.</i> Anthony, it's great to
be here with you again. This is our fourth Conference
together, fourth FQXi. We've dealt with
multiverse and cosmology, nature of time,
nature of information, nature of the observer versus
events in quantum physics. Can you integrate
all this together from the FQXi viewpoint, how does this impact
the understanding that humanity has for science and the importance of science
as we look to the future? What is the significance
of what we're doing? One of the things
I love about FQX and being part of it is that people are thinking
on such a big scale, and when you think about society and sort of where
we are as humans and where we are in
our solar system and in the universe and in
the history of the universe, it's an amazing thing that
the universe has gone on for this 13.8 billion years, and a lot of it, you know,
at some level unobserved, or at least unobserved by things that enjoy the sort of
consciousness and appreciation and who get to think about
it like we do at FQX. And the question is,
is that a blip on the radar? Are we going to sort of
have this glorious time for a little while and then kill each other off
and that's it, and the universe
goes back to sleep for another billions
and billions of years, or, you know, is this
the great awakening that's going to go on and sort of fill up
billions of light years with sentient beings getting
to do all this awesome stuff? It's about not just
who we are now but who are we going to become, what are our descendants, are they going to be us, are they going to be
super intelligent AI's, you know, what are
they going to be? And what can we sort of do about
that now by understanding, you know, what is an observer,
what is consciousness, what is matter,
and those sorts of questions that are part and parcel of
what FQX likes to think about. So, FQXi foundational questions, when you deal with time, information,
multiverse, observers, why are those considered
foundational questions? You know, foundational
is a sort of word we've given to the questions we think are really
cool and interesting. That's the secret. That's the dirty secret of FQX, but they're, you know,
I think at some level, things that have to do with
the sort of big questions, the big nature of reality,
the hard problems. So it's hard to say exactly what makes something
foundational and FQX-y. But you sort of know
it when you see it, when you're like, yeah,
that's really cool and that, whoa,
that's confusing. That's the kind of
question we like. One way of defining
it in a funny way is that we talk
about the things that you can't get a
normal science grant for. That is certainly the case, that is certainly one way
to define it by exclusion. But that covers
a lot of things and it's tricky
to find the ground that is still science
and still rigorous and still, you know,
you can talk about it with real scientific methodology and yet is far enough out,
sort of on the edge, of what people can think about that it's hard to get
conventional funding. And you have a core
way of thinking, which is generally physics,
which underlies this approach. There are other ways of
knowing in the world, you know, arts and humanities
and biology, but the core of this is physics, which is one view of the world to view these kinds
of questions. That's right. [music] <i> We are the cosmos,
self-aware and self-reflective,</i> <i> a brief coalescing of the
scattered dust of dying stars</i> <i> generating somehow
conscious cognition,</i> <i> the cosmos
comprehending itself.</i> <i> To me, for human
consciousness to come about</i> <i> after almost 14 billion years
of cosmic history</i> <i> and then to self-extinguish
defies common sense,</i> <i> if not scientific logic.</i> <i> I cannot escape the
unfashionable sense</i> <i> that meaning and purpose
abound in the cosmos,</i> <i> that human beings
are somehow central.</i> <i> That's why the future, to me,</i> <i> is not so much about
amazing gadgets,</i> <i> however magical they may seem,</i> <i> but more about
foundational questions,</i> <i>how they will endure and change
or, perhaps, remain the same.</i> <i> What's the real value of
foundational questions?</i> <i> Can ultimate answers be found?</i> <i> I push to the far future.</i> <i> I speak with a
visionary physicist</i> <i>who connects consciousness with
what the universe is about,</i> <i> Paul Davies.</i> Paul, we're at the
FQXi Conference. Normally we deal with
very fundamental things in physics and cosmology: time,
information, multiverse. This time we're branching
out a little bit. We're talking about
science and society, what the future is. How do you see the relationship between fundamental questions
and physics and cosmology, even consciousness
with the real world and what humanity is becoming,
and will be becoming and take a long time horizon, you know, not just
a few decades, but 100 years, 1,000 years,
a million years? People like me are interested in these foundational
questions in science and, you might say, well why does society
pay for this, and I'd be quoting
John Wheeler a lot. I can remember him
once saying to me, "I don't know why
society is prepared to support people like us who investigate
these deep issues, but so long as they do
we've got to hang in there and get these results." And so it is amazing,
but in my view important, that society does allocate some small fraction of resources to addressing these
really fundamental questions because I place them
on the same footing as religious questions,
and they overlap a lot. You might say, well, you know, why does society
allocate all these resources to build churches
and temples and so on, what does that do for the GDP? You know, isn't this just using
valuable time and resources <i>for something that's completely
irrelevant to the economy</i> but, nevertheless,
it fulfills a human need, to try to understand what their place is in the
great scheme of things and sort of look beyond and to try to have some
appreciation of reality that is beyond just
the daily round. And these deep
questions in science, I think we do them
for the same reasons. We'd like to know how the
universe is put together and what our place is in that. I'm sure that quest
will continue because we haven't
got there yet. We haven't solved
all the problems. We solve one problem
then another pops up. During my career we've
made enormous progress and it's easy to enumerate
the sort of things that have been discovered, and just in the last
year, gravitational waves, the Higgs boson, <i> these well-known discoveries,</i> but we go on asking
more and more questions. Will we still be doing this
in 1,000 years, 10,000 years? I think that there are
traps ahead, and one is that if every
time we answer a question another one just pops up
to replace it, there will probably come a time
when society will think, well, these scientists,
you know, it's a bottomless pit and they're never going to
sort of finally get there and give us this ultimate
theory of everything. On the other hand,
if they do get there and we have an ultimate
theory of everything then the job is over. <i> And so there has
been a golden age.</i> Feynman commented on this, but we've had these
many, many discoveries which have changed
our world view but also changed society in, like, the technological
innovations. We can't go on piling up
those discoveries at the same pace
forever and ever. It's going to tail
off at some stage and I don't know when. You say 1,000 years
in the future, I don't know whether by then
we'd have run out of steam or run out of problems
or answered all the problems. But probably, this
thing we call science will be replaced by
something a bit different, a different mode of thought, and part of that is surely that we're going to be
accompanied in this quest by what some people call
artificial intelligence. I think we lack the term to describe designed
intellectual systems. And, at that final time, that asymptote of
knowledge and science, will we feel confident that we've described all
of the fundamental issues of why the universe exists and why is there
anything at all, or where do the laws
of physics come from, or what's the nature of
our own consciousness? There will be
different criteria. Different people will have different levels
of satisfaction. There will always
be philosophers. -That are unsatisfied.
-<i> That's right.</i> Who will pull apart our wonderful theories
and explanations and say, well, this wasn't defined and that doesn't mean
anything and so on, so I suspect that there
will always be people that will extol the mystery. And so do you think that
there is an ultimate mystery? I've written, over the years,
with great confidence that, in principle, we could come to understand
everything about existence. But I have to say that
probably in the last 10 years I have come around to think that
that is an unachievable goal, probably even in principle. There will always be
somehow a mystery at the end of the universe. [music] <i> Science will determine the
future in two big ways.</i> <i> The first is via
advanced technology,</i> <i> especially artificial
intelligence</i> <i> and genetic engineering,
changing human society</i> <i> and perhaps altering
the human species.</i> <i> Utopian benefits
and doomsday risks.</i> <i> The second way is via
foundational questions.</i> <i> Time and space,
energy and matter,</i> <i>universe and multiple universes,
brain mind consciousness...</i> <i> will these questions become,
over time,</i> <i> more central to humanity?</i> <i> I suspect so.</i> <i> Is the human species, perhaps
with other sentient creatures,</i> <i> if such exist,
the universe waking up</i> <i> and becoming aware of itself?</i> <i> A nice metaphor
but is it [inaudible].</i> <i> I am amazed how
much we've learned</i> <i> in such a short
sliver of cosmic time.</i> <i> Amidst almost 14 billion years
of cosmic history,</i> <i> only a few thousand years
of human history,</i> <i> only a few hundred years
of real science,</i> <i> and only about a century
of modern science.</i> <i> Humanity penetrates
the foundations</i> <i> of cosmos and consciousness.</i> <i> Is there deep meaning
and profound purpose</i> <i> in our stunning and
sudden understanding?</i> <i> That's the question.</i> <i> That's the probe to
get closer to truth.</i>