JOHN BRACAGLIA: Hello,
my name is John Bracaglia and I work at Verily, which is
Google's life sciences company. I also lead a group called
The Singularity Network, which is an internal
organization composed of more than 3,000
Googlers, focused on topics about the future
of artificial intelligence, which is why we are here today. And it's my pleasure to be here
today with Dr. Max Tegmark. As a brief introduction,
Max Tegmark is a renowned scientific
communicator and cosmologist, and has accepted
donations from Elon Musk to investigate the
existential risk of advanced artificial
intelligence. His research interests include
consciousness, the multiverse, advanced risk from
AI, and formulating an ultimate ensemble
theory of everything. Max was elected fellow of
the American Physical Society in 2012, won "Science"
magazine's Breakthrough of the Year in 2003,
and has written over 200 publications,
nine of which have been cited
more than 500 times. Max Tegmark, everyone. [APPLAUSE] MAX TEGMARK: Thank you so much. It's a really great honor
to be back here at Google and to get to talk in front
of so many old friends, and so much human-level
intelligence and idealism. Does anyone recognize this? NASA EMPLOYEE: 20
seconds and counting-- MAX TEGMARK: This
was, of course, the Apollo 11 moon
mission that sent Neil Armstrong, Buzz Aldrin, and Michael Collins to the moon. NASA EMPLOYEE: Ten, nine, MAX TEGMARK: This-- NASA EMPLOYEE: Ignition
sequence starts. MAX TEGMARK: --mission
was not only successful, but I think it was very
inspiring because it showed that when we humans
manage technology wisely, we can do things that our
ancestors could only dream of, right? Now there are some
important lessons I think we can learn from
this, as well, so I want to devote the rest of this talk
to another journey, powered by something much more
powerful than rocket engines, where the passengers are
not just three astronauts, but all of humanity. So let's talk about
our collective journey into the future with AI. My friend Jaan Tallinn
likes to emphasize that just as with
rocketry, it's not enough to just make our
technology powerful. We also have to focus on
figuring out how to control it and on figuring out where
we want to go with it. And that's what we're
going to talk about. I think the opportunities
are just so awesome, if we get this right. During the past
13.8 billion years, our universe has transformed
from dead and boring to complex and interesting,
and it has the opportunity to get dramatically more
interesting in the future if we don't screw up. About 4 billion years ago, life
first appeared here on Earth, but it was pretty dumb
stuff, like bacteria, that couldn't really learn
anything in their lifetime. I call that life 1.0. We are what I call life 2.0
because we can learn things. Which, of course,
in geek speak means we can upload new
software modules. If I want to learn Spanish,
I can study Spanish, and now I have all these new
skills uploaded in my mind. And it's precisely
this ability of humans to design their own software
rather than being stuck with the software that evolution gave us, that has enabled us to dominate this Earth and given us what we call cultural evolution. We seem to be gradually
heading towards life 3.0, which is life that can design not
just its software, but also its hardware. Maybe we're at 2.1
right now because we can get cochlear implants
and artificial knees, and a few minor
things like this. But if you were robots that were able to think as cleverly as you do right now, of course, there
would be no limits whatsoever to how you could
upgrade yourselves. So let's first talk about
the power of the technology. Obviously, the power of AI has
improved dramatically recently. I'm going to define
intelligence itself, just very broadly,
as the ability to accomplish complex goals. I'm giving such a
broad definition because I want to
be really inclusive and include both all forms
of biological intelligence and all forms of
artificial intelligence. And as you guys here
at Google all know, obviously, a subset of
artificial intelligence is machine-learning, where
systems can improve themselves by using data from
their environment, much like biological
organisms can. And another subset of
that is, of course, deep learning, in which we
use neural-net architectures. And if you look at older
breakthroughs in AI, like when Garry Kasparov got his
posterior kicked by IBM's Deep Blue, the intelligence here
was, of course, mainly just put in by human programmers. And Deep Blue beat Kasparov just
because it could think faster and remember better. Whereas, in contrast,
the recent stuff that you've done here at
Google, like this work by Ilya Sutskever's group, there's
almost no intelligence at all put in by
the humans, right? They just trained
a simple neural net with a bunch of data. And you put in the numbers that
represent the pixel colors, and it puts out this
caption-- "a group of young people playing
a game of Frisbee," even though the software
was never taught anything about what a Frisbee
is, or what a human is, or what a picture is. And the same stuff, if
you put in other images, it gives other captions, which
are often quite impressive.
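To make concrete what "putting in pixel numbers and getting out a caption" involves, here is a minimal sketch of the encoder-decoder recipe behind neural image captioning: a convolutional network compresses the image into a feature vector, and a recurrent network emits the caption one word at a time. The layer sizes, vocabulary size, and class names are illustrative assumptions, not the actual Google system.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionModel(nn.Module):
    """Toy image-captioning model: CNN encoder feeding an LSTM decoder."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18()                              # image encoder (untrained here)
        cnn.fc = nn.Linear(cnn.fc.in_features, embed_dim)    # project image to embedding size
        self.encoder = cnn
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        img_feat = self.encoder(images).unsqueeze(1)          # (batch, 1, embed)
        word_feats = self.embed(captions)                     # (batch, T, embed)
        inputs = torch.cat([img_feat, word_feats], dim=1)     # image acts as the first "word"
        hidden, _ = self.lstm(inputs)
        return self.to_vocab(hidden)                          # scores over the vocabulary

# Toy usage with random data, just to show the shapes involved.
model = CaptionModel(vocab_size=1000)
images = torch.randn(2, 3, 224, 224)                          # two fake RGB images
captions = torch.randint(0, 1000, (2, 12))                    # two fake 12-word captions
scores = model(images, captions)                              # shape (2, 13, 1000)
```

During training the word scores are compared against human-written captions; at test time the model emits one word at a time, feeding each predicted word back in as the next input.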
I find it even more striking how cool things can be done with video. So this is Google DeepMind, of course, learning to play Atari games. And for those few of you who
haven't seen this before, you need to remember that
this neural network, here, with simple
reinforcement-learning built in, had no idea what a
game was, what a paddle was, what a ball was, or
anything like that. And just by practicing,
gradually it started to miss the ball less often, got to a point where it hardly missed it at all, and now plays much better than I could. And the real kicker
is that, of course, the people at
DeepMind, they actually didn't know that there was
this clever trick you can do when you play Breakout, which
is to always aim for the corners and try to dig a little tunnel there. So once this little
deep-learning software figured that out, it's
just every single time the ball comes back-- look how
just uncannily precise it is. It's just putting it right
back there in the corner and playing. I can only dream of playing this well.
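The recipe being described here is, at its core, very simple: act, observe the score, and nudge value estimates toward whatever earned reward. Below is a minimal tabular Q-learning sketch of that loop; DeepMind's actual Atari agent used a deep network (DQN) in place of the lookup table, and the `env` interface is an assumed Gym-style stand-in rather than their code.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Learn action values from reward alone; env is an assumed Gym-style object."""
    Q = defaultdict(float)                   # Q[(state, action)] -> estimated value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Explore occasionally; otherwise pick the action that currently looks best.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)       # e.g. +1 when a brick breaks
            # Nudge the estimate toward the reward plus the value of the best next move.
            best_next = max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```

Nothing in that loop mentions paddles, balls, or tunnels; strategies like aiming for the corners can only emerge because they happen to produce more reward.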
Now this is, of course, a very, very simple environment, that little two-dimensional game world. But if you're a robot, you can
think of life as a game, just a more complex one. And you could ask
yourself to what extent these sort of techniques
might enable you to learn more interesting things. So more recently, DeepMind put three-dimensional robots in a simulated world and just asked to see if they could learn
to do things like walk. And this is what happened. [MUSIC PLAYING] This software had never, ever
seen any videos of walking. It knew nothing about
the concept of walking. All the software was doing was
sending random commands for how to bend the different
joints, and it got rewarded every time
this creature managed to move a little bit forward. And it looks a bit funky,
maybe a little awkward, but hey, it actually
learns interesting stuff. So this raises this very
interesting question of how far can AI go? How much of what we humans
can do will machines, ultimately, be able
to do, if we use not just the techniques that we
know of so far, but factor in all sorts of additional
progress that you people in the room and elsewhere
are going to do? I like to think about this
in terms of this landscape. I made this picture inspired
by a paragraph in one of my favorite books by Hans
Moravec from many years ago, where the height,
here, represents how difficult it is for a
computer to do a certain task. And the sea level represents
how good computers are at doing them right now. So what we see here
is that certain tasks, like chess-playing
and arithmetic have, of course, long been
submerged by this slowly rising tide of machine intelligence. And there are some
people who think that there are certain tasks,
like art and book-writing, or whatever, that machines
will never be able to do. And then there are
others who think that the old goal of
AI to really solve intelligence and do
everything that we do will mean that sea levels will
eventually submerge everything. So what's going to happen? There have been a lot of
interesting polls of AI researchers and the
conclusion is very clear-- we don't know. A little bit more specifically,
though, what you find is there are some people in
the techno-skeptic camps who think that AI research
is ultimately doomed. We're never going to get there. Or maybe we're only
going to get there hundreds of years from now. But actually, most
AI researchers think it's going to happen
more in a matter of decades. And some people
think that we don't have to worry so much
about steering this rocket, metaphorically speaking, because
AI is never going to get powerful enough that we have to worry about this-- but that's a minority. And then there are
people who think we don't have to
worry about steering because it's guaranteed that the
outcome is going to be awesome. I call such people
digital Utopians. And I respect that
point of view. And there are also people
who think it's guaranteed that things are going
to suck, so there's no point in worrying
about steering because we're screwed anyway. But most of the
people in surveys tend to land more here, in
the middle, in what I've called the beneficial AI
movement, where you're really motivated, actually,
to ask, what can we do right now to steer
things in a good direction? Because it could be awesome,
or it could be not so great, and it depends on
what we do now. I put this web page
up, AgeofAI.org. We did a survey there where people from the general public could answer these
same questions. You can go there and do it, too. And I was actually
very interested that the general public response
was almost exactly the same as what AI researchers have said in recent polls. This is from something
I analyzed this weekend with 14,866 respondents. And you see most people
think maybe we're decades away from human-level
AI, maybe it'll be good, maybe there'll be problems. So this is maximally
motivating to think about how we can steer this
technology in a good direction. So let's talk about steering. How can we control-- how can we learn to control AI
to do what we want it to do? NASA EMPLOYEE: Lift-off. MAX TEGMARK: To help with
this, my wife Meia, who's sitting there, and I,
and some other folks, founded The Future
of Life Institute. And you can see we actually
have the word 'steer' up here in our mission statement. Our goal is simply to do
what we can to help make sure that technology is
beneficial for humanity. And I'm quite optimistic that
we can create a really inspiring future with technology,
as long as we win this race
between the growing power of the technology
and the growing wisdom with which we manage it. But I think if we're
going to win this race, we actually have to
shift strategies, because technology is gradually
getting more powerful. And when we invented less
powerful tech, like fire, we very successfully
used the strategy of learning from mistakes. We invented fire-- oopsy-- and then invented the
fire extinguisher. We invented the car-- oopsy-- and then we invented
the seat belt, the airbag, the traffic light, and things
were more or less fine. But when you get beyond
a certain point in the power of the technology, this idea of learning from
mistakes is just really, really lousy, right? You don't want to make mistakes
when even a single mistake would be unacceptable. And when we talk about nuclear weapons, synthetic biology, and superhuman AI, I feel we're at the point
where we really don't want to make mistakes. We want to shift
strategy from being reactive to being
proactive, which is exactly the slogan you said you're
also using for your work here at Google, earlier. I'm optimistic that we can do
this if we really focus on it and work for it. Some people say, nah,
don't talk about this because it's just
Luddite scaremongering when you talk about things
that could go wrong. I don't think it's
Luddite scaremongering. I think it's safety engineering. We started by talking about
the Apollo moon mission. When NASA thought
through, very carefully, everything that could
possibly go wrong when you put three
astronauts on top of this 100-meter tall rocket
full of highly explosive fuel, that wasn't Luddite
scaremongering. What they were doing was
precisely what ultimately led to the success of the mission. And this is what I think we
want to be doing with the AI as well. I think so far, what we've
learned from other technologies here is that we need to
up our game a little bit because we haven't really
absorbed this idea that we have to switch to being proactive. Today is a very special day
in terms of nuclear weapons, because we came pretty
close to September 26 being the 34th anniversary
of World War III. In fact, it might
have ended up that way if this guy, Stanislav Petrov, hadn't, on gut instinct, ignored his early warning system, which said that there were five incoming US Minuteman missiles that should be retaliated against. So how can we do better?
the AI community has really started to engage with these
issues a lot in recent years. And thanks to a lot of
people who are in this room here, including Peter Norvig,
and with The Future of Life Institute, we organized a couple of conferences, one in Puerto Rico and then, earlier this year, one in Asilomar, California, where there was really
quite remarkable consensus around a number of very
constructive things we can do to try to
develop this wisdom and steer things in
the right direction. And I want to spend just a
little bit of time hitting some highlights of things here,
from this list of 23 Asilomar Principles, which has now
been signed by over 1,000 AI researchers around the world. First of all, it says
here on item one, that we should define
the goal of AI research not to be just making
undirected intelligence, but to make beneficial
intelligence. So in other words, the
steering of the rocket is part of the design specs. And then there was
also very strong consensus that, hey, if we have
a bunch of unanswered questions that we need to answer, we
shouldn't just say, oh yeah, we should answer them. Well, we should answer them
the way we scientifically know is the best way to
answer hard questions, namely, to research
them, to work on them. And we should fund
this kind of research as just an integral part of
computer science funding, both in companies
and in industry. And I'm actually very,
very proud of Google for being one of the founding
members of the Partnership on AI, which aims
very much to support this kind of AI research-- AI safety research. Another principle here that
had very broad agreement was the shared
prosperity principle, that the economic
prosperity created by AI should be shared broadly
to benefit all of humanity. What do I mean by that? Obviously, technology has
kept growing the economic pie. It's been growing our GDP
a lot in recent decades, as you can see if you look
at the top line in this plot, here. But as you're also
generally aware of, this pie hasn't been
divvied up quite equally. And in fact, if you look at the
bottom 90% of income earners, their income has stayed flat,
almost since I was born. Actually, maybe it's my fault.
And the 30% poorest in the US have actually gotten
significantly poorer in real terms, in
recent decades, which has created a great deal of
anger, which has given us the election of Donald Trump. It's given us Brexit. And it's given us a more
polarized society in general. And so there was a very strong
consensus among AI researchers that if we can create so much
more wealth and prosperity, and have machines help produce
all these wonderful goods and services, then
if we can't make sure everybody gets better off
from this, shame on us. Some people say, well,
this is just nonsense because something magical
is going to change in these statistics soon. And the jobs that
get automated away are going to be replaced by
much better, new jobs that don't exist yet. But actually, if you
look at this data, it doesn't support that. We could have made that
same argument 100 years ago, when many more people
worked in farming, that all those
jobs that were lost were going to be replaced by
new jobs that didn't exist yet. And this is what
actually happened. I made this little
pie chart, here, of all the jobs in the US by size. And you can start going down
the list-- managers, drivers, retail salespersons,
cashiers, et cetera. Only when you get
down to 21st place do you get to a job
category that didn't exist 100 years ago,
namely, software developers. Hi, guys. So clearly what happened is
not that most farmers became software developers. What instead happened was
people who, from the Industrial Revolution onward, lost jobs where they were using their muscles to do work went into other jobs
where they could use their brains to do work. And these jobs tended
to be better paid, so this was a net win. But they were jobs that
already existed before. Now what's happening
today, which is driving the growth
in income inequality, is similarly that
people are getting switched into other jobs
that had existed before. It's just that this
time, since the jobs that are being automated
away are largely jobs where they use their brains, they often switch to jobs that existed before but pay less, rather than more. And I think it's a really
interesting challenge for all of us to think about
how we can best make sure that this growing
pie makes everybody better off. Another item here on this
list is principle number 18-- the AI arms race. This was the one
that had the highest agreement of all among
the Asilomar participants. "An arms race in lethal
autonomous weapons should be avoided." Why is that? Well, first of all,
we're not talking about drones, which are
remote-control vehicles where a human is still
deciding who to kill. We're talking here about
systems where the machine itself, using
machine-learning or whatever, decides exactly who
is going to be killed, and then does the killing. And first, whatever
you think about them, the fact is that although there has been, of course, a huge amount of investment in civilian uses of AI recently, it's actually dwarfed by the talk of military spending recently. So if you look at the pie, there's a real risk that the status quo will just mean that most of the loud sucking noise trying to recruit AI graduates from MIT and Stanford and elsewhere will pull them toward military programs rather than places like Google. And most AI researchers
felt that that would be a great shame. Here's how I think about it. If you look at any
science, you can always use it to develop new
ways of helping people, or new ways of harming people. And biologists fought
really, really hard to make sure that
their science is now known for new ways of curing people, rather than for biological weapons. They fought very hard and
they got an international ban on biological weapons passed. Similarly, chemists managed to
get the chemical weapons banned by really speaking
up as a community and persuading politicians
around the world that this was good. And that's why you
associate chemistry now mainly with new materials. And having these weapons is now heavily stigmatized. So even if some countries cheat on the bans, it's so stigmatized
that Assad even gave up his chemical weapons
to not get invaded. And if you want to buy
some chemical weapons to do something silly, you're
going to find it really hard to find anyone who's going to
sell them to you because it's so stigmatized. What there is very widespread
support for in the AI community is exactly the same
thing here, to try to negotiate an
international treaty where the superpowers
get together and say, hey, the main winners of
having an out-of-control arms race in AI weapons are not going to be the superpowers. It's going to be ISIS and
everybody else who can't afford expensive weapons,
but would love to have little cheap things
that they can use to assassinate anybody with anonymously,
and basically drive the cost of anonymous
assassination down to zero. And this is something that if
you want to get involved in, the United Nations is going
to discuss this in November, actually. And I think the more vocal the
AI community is on this issue, the more likely it is
that that AI rocket here is going to veer in
the same direction as the biology and
chemistry rockets went. Finally, let me say a little
bit about the final Asilomar Principles here. I find it really
remarkable that even though a few years ago, if
you started talking about superintelligence
or existential risks, or whatever, many
people would dismiss you as some sort of
clueless person who didn't know anything about AI. These words are in
here, and yet this is signed by Demis Hassabis,
the CEO of DeepMind. It's signed by Peter Norvig,
who's just sitting over there, by your very own Jeff
Dean, and by, really, a who's-who of AI researchers,
over 1,000 of them. So there's been a much
greater acceptance of the fact that, hey, maybe AI is actually going to succeed, and maybe we need to take these sorts of things into account. Let me just unpack
a little bit what the deal is with all of this. So first of all, why
should we take it seriously at all, this idea of
recursive self-improvement and superintelligence? We saw that a lot
of people expect we can get to human-level
AI in a few decades, but why would that
mean that maybe we can get AI much smarter
than us, not just a little? The basic argument for this
is very eloquently summarized in just this
paragraph by IJ Good, from 1965, a mathematician
who worked with Alan Turing to crack codes
during World War II. I think you've mostly
heard this all before. He basically says that if we
have a computer, a machine, that can do everything
as well as we can, well, one of the things we
can do is design AI systems, so then it can, too. And then,
instead of hiring 20,000 Google employees to do
work for you, you can get 20 million little
AI things working for you, and they can work much faster. And that the speed
of AI development will no longer be set
by the typical R&D time scale of humans, or years,
but by how fast machines can help you do this, which
could be way, way faster. And if it turns out that we
have a hardware overhang, where we've compensated for
the fact that we really are kind of clueless
about how to do the software of
human-level AI by having massive amounts
of extra hardware, then it might be that you can get a lot of the improvement just by changing the software, which is something that can be done very, very quickly, without even having to build new stuff. And then from there
on, things could get-- you might be able to get
machines that are just dramatically smarter than us. We don't know that
this will happen, but basically,
what we see here is that many researchers view this as at least a possibility that we should take seriously. Another thing which you see
here is existential risk. So more specifically,
it says here, "risks posed by AI systems,
especially existential risks, must be subject to the
planning and mitigation efforts commensurate with
their expected impact." And existential
risk is a risk which basically can include
humanity just getting wiped out altogether. Why would you possibly
worry about that? There are so many absolutely
ridiculous Hollywood movies with terminator
robots or whatever, that you can't even
watch without cringing. So what are the serious
reasons that people like this sign on to something
that talks about that? Well, the common criticism
that you hear is that, well, machines-- there's no reason to think that
intelligent machines would have human goals if we built them. And after all, why
should they have sort of weird, alpha-male goals
of trying to get power, or even self-preservation? My laptop doesn't protest when
I try to switch it off, right? But there's a very
interesting argument here I just want to share
with you in the form of this silly
little fake computer game I drew for you here. Just imagine that you are this
little blue, friendly robot whose only goal is to save
as many sheep as possible from the big bad wolf. You have not put into
this-- this robot does not have the goal of surviving,
or getting resources, or any stuff like that. Just sheep-saving. It's all about these
cute little sheepies, OK? It's going to-- very
quickly, if it's smart-- figure out that if it walks
into the bomb here and blows up, then it's not going to
save any sheep at all. So a subgoal that it will
derive is-- well, actually, let's not get blown up. It's going to get a
self-preservation instinct. This is a very generic
conclusion. If you have a robot and you program it to
walk to the supermarket and buy you food and
cook you a nice dinner, it's going to, again,
develop the subgoal of self-preservation because
if it gets mugged and murdered on the way back with
your food, it's going to not give you your dinner. So it's going to want to
somehow avoid that, right? Self-preservation
is an emergent goal of almost any goal that
the machine might have, because goals are hard to
accomplish when you're broken. The robot might also have an incentive to get a better model of the world that's in here, and discover that
there is actually a shortcut it can take to get
to where the sheep are faster, then it can save more. So trying to understand more
about how the world works is a natural subgoal
you can get, no matter whatever fundamental goal you
program the machine to have. And then resource
acquisition, too, can emerge, because when
this little robot here discovers that when
it drinks the potion, it can run twice as fast,
then it can save more sheep. So it's going to
want the potion. It'll discover that
when it takes the gun, it can just shoot the wolf and
save all the sheep. Great. So it's going to want to have resources.
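Here is a tiny, purely hypothetical simulation of that argument (it is not from the talk's slides): the objective below counts only sheep saved, yet the plan that maximizes it avoids the bomb and grabs the potion and the gun first, so self-preservation and resource acquisition emerge on their own.

```python
from itertools import permutations

ACTIONS = {
    # action: (sheep saved directly, robot survives?, speed multiplier gained)
    "walk_into_bomb": (0, False, 1.0),
    "take_shortcut":  (1, True, 1.0),
    "drink_potion":   (0, True, 2.0),   # run twice as fast afterwards
    "grab_gun":       (3, True, 1.0),   # shoot the wolf, save the flock
    "herd_sheep":     (2, True, 1.0),
}

def sheep_saved(plan):
    """Score a plan; a destroyed robot saves no further sheep."""
    total, speed = 0.0, 1.0
    for action in plan:
        saved, survives, boost = ACTIONS[action]
        total += saved * speed          # being faster means saving more per step
        speed *= boost
        if not survives:
            break
    return total

best = max(permutations(ACTIONS, 3), key=sheep_saved)
print(best, sheep_saved(best))
# ('drink_potion', 'grab_gun', 'herd_sheep') 10.0 -- the bomb is avoided and the
# resources are grabbed, even though only sheep-saving was ever rewarded.
```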
As I've summarized in this pyramid here, this idea was first pointed out by Steve Omohundro, who lives here in the area, and is discussed at length in Nick Bostrom's book. The idea is just that
whatever fundamental goal you give a very intelligent machine,
if it's pretty open-ended, it's pretty natural to expect
that it might develop subgoals of not wanting to be switched
off, and try to get resources. And that can be fine. There's not
necessarily a problem, being in the presence of
more intelligent entities. We all did that as kids,
right, with our parents? The reason it was fine
was because their goals were aligned with our goals. So therein lies the rub. We want to make sure that if
we ever give a lot of power to machines of intelligence
comparable or greater to ours, that their goals
are aligned with ours. Otherwise, we can be in trouble. So to summarize, these
are all questions that we need to answer,
technical research questions. How can you have machines learn, adopt, and retain our
goals, for example? And let me just show you
a very short video talking about these issues in
superintelligence and then some. [CLICKS KEYBOARD] And let's see if we have better
luck with video this time. [VIDEO PLAYBACK] - "Will artificial intelligence
ever replace humans?" is a hotly-debated
question these days. Some people claim
computers will eventually gain superintelligence, be
able to outperform humans on any task, and
destroy humanity. Other people say, don't worry. AI will just be another
tool we can use and control, like our current computers. So we've got physicist and
AI researcher Max Tegmark back again to share with
us the collective takeaways from the recent Asilomar
conference on the Future of AI that he helped organize. And he's going to help separate
AI myths from AI facts. - Hello. - First off, Max, machines,
including computers, have long been better
than us at many tasks, like arithmetic, or weaving,
but those are often repetitive and mechanical operations. So why shouldn't I
believe that there are some things that are simply
impossible for machines to do as well as people, say,
making Minute Physics videos, or consoling a friend? - Well, we've traditionally
thought of intelligence as something mysterious
that can only exist in biological
organisms, especially humans. But from the perspective
of modern physical science, intelligence is simply
a particular kind of information processing
and reacting, performed by particular arrangements of
elementary particles moving around. And there's no law
in physics that says it's impossible to do that
kind of information processing better than humans already do. It's not a stretch to say
that earthworms process information better than
rocks and humans better than earthworms. And in many areas, machines
are already better than humans. This suggests that
we've likely only seen the tip of the
intelligence iceberg, and that we're on track to
unlock the full intelligence that's latent in nature and use
it to help humanity flourish, or flounder. - So how do we keep
ourselves on the right side of the flourish-or-flounder
balance? What, if anything,
should we really be concerned about with
superintelligent AI? - Here is what has many top
AI researchers concerned. Not machines or
computers turning evil, but something more subtle-- superintelligence that simply
doesn't share our goals. If a heat-seeking missile
is homing in on you, you probably wouldn't think, no
need to worry, it's not evil. It's just following
its programming. No, what matters to you is what
the heat-seeking missile does and how well it does it,
not what it's feeling, or whether it has
feelings at all. The real worry isn't
malevolence, but competence. Superintelligent AI
is, by definition, very good at
attaining its goals. So the most important
thing for us to do is to ensure that its goals
are aligned with ours. As an analogy, humans are
more intelligent and competent than ants, and if we want to
build a hydroelectric dam where there happens to be
an anthill, there may be no malevolence
involved, but, well, too bad for the ants. Cats and dogs, on
the other hand, have done a great job
of aligning their goals with the goals of humans. I mean, even though
I'm a physicist, I can't help thinking kittens
are the cutest particle arrangements in our universe. If we build
superintelligence, we'd be better off in the position
of cats and dogs than ants. Or better yet,
we'll figure out how to ensure that AI
adopts our goals, rather than the other way around. - And when exactly
is superintelligence going to arrive? When do we need to
start panicking? - First of all, Henry,
superintelligence doesn't have to be
something negative. In fact, if we get it right,
AI might become the best thing ever to happen to humanity. Everything I love
about civilization is the product of intelligence,
so if AI amplifies our collective
intelligence enough to solve today's and
tomorrow's greatest problems, humanity might flourish
like never before. Second, most AI researchers
think superintelligence is at least decades away. But the research
needed to ensure that it remains
beneficial to humanity rather than harmful
might also take decades, so we need to start right away. For example, we'll
need to figure out how to ensure machines learn the
collective goals of humanity, adopt these goals
for themselves, and retain the goals as
they get ever smarter. And what about when
our goals disagree? Should we vote on what the
machines' goals should be? Should we do whatever
the president wants, whatever the creator of the
superintelligence wants, or let the AI decide? In a very real way,
the question of how to live with superintelligence
is a question of what sort of future we want
to create for humanity, which obviously shouldn't just
be left to AI researchers, as caring and socially
skilled as we are. [END PLAYBACK] MAX TEGMARK: So that leads
to the very final point I want to make here today. To win this wisdom race,
creating an awesome future with AI, in addition to doing
these various things I've talked about, we really need to
think about what kind of future we want, what sort of
goal we want to have, where we want to
steer our technology. So just for fun, the survey
I mentioned that we did, we asked people also to say
what they wanted for the future. And I'll just share
with you here. These are from the analysis
I did last weekend. Most people out of
the 14,866 here, say they actually
want AI to go all the way to superintelligence. Although some are
saying no, here. A lot of people want
humans to be in control. Most people actually want
both humans and machines to be in control together. And a small fraction,
[INAUDIBLE], prefer the machines
to be in control. [LAUGHTER] And then, when asked about
consciousness, a lot of people said, yeah, if
they have machines that are behaving as if they
are as intelligent as humans, they would like
to have them have a subjective experience also,
so the machines can feel good. But some people said,
nah, they prefer having zombie robots that don't
feel conscious, that people don't have to feel guilty about
switching them off or giving them boring things to do. In terms of what a future
civilization should strive for, there was a large majority
who felt we should either try to maximize
positive experiences, or minimize suffering,
or something like that. Then there were more people who said we should let the future civilization pick whatever goals it wants, as long as they're reasonable. Some people said they didn't even care whether the goal the future civilization wanted was reasonable, even if it was pointlessly banal, like maybe turning our universe into paper clips; they were fine with just delegating it. But most people actually
felt that since we're creating this technology,
we have the right to have some say as to
where things should go. The broadest agreement of
all was on this question that, actually, maybe
we shouldn't just limit the future
of life to forever be stuck on this little planet,
but give it the potential to spread and flourish
throughout the cosmos. And to get people thinking
more about different futures, my wife, Meia,
likes to point out that even though
it's a good idea to visualize positive outcomes
when you plan your own career, and then try to figure
out how to get there, we kind of do the exact
opposite as a society. We just tend to think
about everything that could possibly go wrong
and then we freak out about it. When you watch
Hollywood movies, it's almost always dystopic
depictions of the future, right? So to get away from this
a little bit, in my book the whole of chapter 5 is a series of thought experiments with different future scenarios, trying to span the whole range of what people have talked about and more, so you yourselves can ask what you would actually prefer. And the most striking
thing from the survey was that people
disagree very strongly in what sort of society
they would like. And this is a
fascinating discussion that I would really encourage
you all to join. I'm just going to end by
saying that I think when we look to the
future, there's really a lot to be excited about. People sometimes ask me, Max,
are you for AI or against AI? And I respond by asking
them, what about fire? Are you for it or against it? And of course, they'll
concede that they're for fire to heat their
homes in the winter and against fire for arson. But it's the same
with all technology, it's always a
double-edged sword. The difference with AI is
just it's much more powerful, so we need to put even more
effort into how we steer it. If you want life to exist
beyond the next election cycle, and maybe, hopefully, for
billions of years on Earth and maybe beyond,
then just pressing pause on technology forever-- that's actually just
a really sucky idea. Because if we do
that, the question isn't whether humanity
is going to go extinct. The question is just,
what's going to wipe us out? Whether it's going to be the
next massive asteroid strike, like the one that took the dinos
out, or the next supervolcano, or another item on a long list of things that we know are going to happen to Earth that technology can prevent, but technology that
we don't have yet. It's going to require further
development of our tech. So I, for one, think that it
would be really foolish if we just run away from technology. I'm much more excited about,
in the Google spirit-- and I love your old
slogan, "Don't Be Evil"-- asking, what can we do
to steer, to develop [? theoretic ?]
technology in a direction so that life can
really flourish? Not just for the
next election cycle, but for a very, very
long time on Earth, and maybe even
throughout our cosmos. Thank you. [APPLAUSE] JOHN BRACAGLIA:
Thanks so much, Max. Now we have time for
questions from the audience. We have a mic over here, which
we can use for questions. And also, I can pass
this one around. And while we're doing that,
I'll pull up the Dory. MAX TEGMARK: Great. And since you mentioned there
were a lot of questions, make sure to keep
the questions brief, and make sure that they
actually are questions. AUDIENCE: AI risk seems to have
become a much more mainstream worry in the last few years. What changed to make that
happen and why didn't we do it earlier? MAX TEGMARK: I agree with you. I'm actually very, very happy
that it's changed in this way, and trying to help
make it change this way was the key reason
we founded the Future of Life Institute and organized
the Puerto Rico conference and the Asilomar
conference, and so on. Because we felt that up
until a few years ago, the debate was kind
of dysfunctional. And what I think has really,
really changed things for the better is that the
AI research community itself has really engaged, joined this
debate and started to own it. I think that's why it's become
more mainstream, and also much more sensible. JOHN BRACAGLIA: [INAUDIBLE] MAX TEGMARK: OK,
so you're the boss. Should we alternate with
online, offline questions? Do you want to
read the questions? JOHN BRACAGLIA: Oh, sure. "What would you most hope
to see a company like Google do to ensure safety as
we transition to a more AI-centric world?" MAX TEGMARK: So
as I said, I think Google already has the soul
to do exactly what's needed. This "Don't Be Evil" slogan
of Larry and Sergey-- I interpret it as though we
shouldn't just build technology because it's cool, but we
should think about its uses. For those of you who
know the old Tom Lehrer song about "Wernher von Braun,"
(SINGING IN A GERMAN ACCENT) Once the rockets go up, who
cares where they come down? That's not my department,
says Wernher von Braun. I view Google's
"Don't Be Evil" slogan as exactly the opposite
of that-- thinking mindfully about how to steer
the technology to be good. And I'm also really
excited again that Google is one of the founding partners
in the Partnership on AI, trying to make sure
that we-- that this happens not just in
what Google does, but throughout the community. And I also think
it's great if Google can pull all of its strings
to persuade politicians all around the world to
seriously fund AI safety research, because the sad
fact is, even though there's a great will for AI researchers
to do this stuff now, there's almost no
funding for it still. The 37 grants that Elon Musk helped us give out are just a drop in the bucket of what's needed. And it makes sense that Google
and other private companies want to own the IP on things
that make AI more powerful and build products out of it. But these same
private companies, it's better for them all if nobody patents the way to make it safe and keeps others from using it, right? That's something that's great
if it's developed openly by companies who share
it, or in universities, so that everybody can use
the same best practices and raise the quality
of safety everywhere. AUDIENCE: All right,
so Max, I actually talked to you last night
about a lot, like future-- really long, maybe 100 years-ish
what's going to happen. But if you look at AI
nowadays, not a lot of people are focusing on today,
just the imminent risks. So if you think about how
did Trump got elected, and how those things went
wrong in the last few years, you can't really deny
that AI has contributed a lot, especially in
the fake news, that AI's like suggested contents. So is that like focusing on
all energy into the future? So I felt there are
really few people that's looking into today. So do you think
that's a problem, or do you think we need
to do better on that? MAX TEGMARK: Yeah,
I think there's a really great opportunity for
us nerds in the tech community to educate the broader public
and politicians about the need to really engage with this. This is one of the reasons
I wanted to write this book. I think when I watched
the presidential debates for the last election, for
example, completely aside from the issues
they talked about, I thought it was just
absolutely astonishing what they didn't talk about. None of them talked
about AI at all. Hello? They're talking about jobs,
they're not mentioning AI. They're talking about
international security, they're not talking about AI,
like, the biggest technology out there. And I think in addition to
just telling politicians to pay attention, I think it's
incredibly valuable, also, if a bunch of people
from the tech community can actually go into
government positions to add more human-level
intelligence in government, to prevent the world governments
from being asleep at the wheel. AUDIENCE: I mean, actually-- MAX TEGMARK: Maybe
we should just-- JOHN BRACAGLIA: [INAUDIBLE] MAX TEGMARK: We can talk more
afterwards, but give everybody a chance to ask first. AUDIENCE: Hello? Hi. When you introduced
the concept-- when you introduced
the Asilomar Treaty, you mentioned the difference
between undirected intelligence and benevolent intelligence. Don't you think that
if humans succeeded in creating controllable,
benevolent intelligence, that they really have failed
in creating intelligence? Let me rephrase-- MAX TEGMARK: I'm not sure I
fully understood this question. Do you want to just
repeat the punch line? AUDIENCE: I'll rephrase. Do you think that benevolent
intelligence would be the intelligence that we
should strive towards, or should it be general
intelligence that perhaps cannot be controlled? MAX TEGMARK: So that's
a great question. You asked what I think. I am trying to be very
open-minded about what we actually want. And I wrote the book not-- really avoiding saying what
I think the future should be, because I think this is
such an important question, we just need everybody's
wisdom on it. And again, I talk about all
these different scenarios, some of them which correspond to
some of the different options you even listed there. And I'm incredibly
interested to hear what other people
think would actually be good with these things. One thing that Meia and
I found very striking, when we discussed this while I was writing the book, was that even though I tried quite
upsides of each scenario, there wasn't a single one there
that I didn't have at least some major misgivings about. JOHN BRACAGLIA: "Do you
think deep neural networks would be the way to get
to artificial general intelligence? If not, do you see fundamental
reasons why these do not have the potential for
recursive self-improvement that can speed up the
development of AGI or superintelligence?" MAX TEGMARK: All right,
that's a great question. Let me say two
things about this. First of all, our
brain seems to be, of course, some kind of a
recurrent neural network that's very, very complicated, and it
has human-level intelligence. But I think it
would be a mistake to think that that's
the only route there. I think it'd also be
a mistake to assume that that's the
fastest route there. Meia likes to
point out that a few years ago there was finally a beautiful TED Talk demonstrating the first-ever successful mechanical bird, and that came a hundred years after the Wright brothers built airplanes. And when I flew here yesterday-- you'll be very
surprised to hear this, but I didn't come in
a mechanical bird. It turned out there was a
much easier, simpler way to build flying machines. And I think we're going to
find exactly the same thing with human-level
intelligent machines. The brain is just optimized
for very different things than the machines that you build are. Darwinian evolution is obsessed with only building things that can self-assemble.
can self-assemble? Evolution is obsessed
about creating things that can self-repair. It would be nice if your
laptop could self-repair, but it can't and you're
still using it, so. And also evolution doesn't care
about simplicity for humans to understand how it works, but you care a lot about that. So maybe the brain is much more complicated than it needs to be, just
so it can self-assemble, and blah, blah, whatever. My guess is that the first
human-level AI will not be working exactly like the brain. That it will be something
much, much simpler, and maybe we'll use that to create later-- figure out how
human brains work. That said, the deep neural
networks are, of course, inspired by the brain and
are using some very clever
computational techniques that evolution came up with. My guess is that the fastest
route to human-level AI will actually use a combination
of deep neural networks with GOFAI-- various good old-fashioned
AI techniques, more logic-based
things, which have a lot of their own
strength for building, like for building a world
model and things like this. JOHN BRACAGLIA: Live question? MAX TEGMARK: Maybe I should just
add one more thing about this. The increasing success of neural networks also poses a really
interesting challenge. Because when we put AI in charge
of more and more infrastructure in our world, it's
really important that it be reliable and robust. Raise your hand if your computer
has ever crashed on you. That wouldn't have been so
fun if it was the machine that was controlling your
self-driving car, or your local
nuclear power plant, or your nation's nuclear
weapons system, right? And so we need to transform
today's buggy and hackable computers into robust AI systems
that we can really trust. What is trust? Where does trust come from? It comes from understanding
how things work. And neural networks, I think,
are a double-edged sword. They are very powerful, but
we understand them much less than traditional software. So in my group at
MIT, actually, we're working very hard
right now on a project that I call intelligible
intelligence, where we're trying to come up with algorithms that can transform neural networks into things where you can really understand better how they work. I think this is a challenge
that I would encourage you to all think about, too. How can you combine the
power of neural nets with stuff that you
can really understand better, and therefore trust?
to figure it out that its treatment by the
humans is, essentially, slavery with just extra steps? MAX TEGMARK: That's a
wonderful, wonderful question. I haven't talked at all
about consciousness here, but the whole chapter 8
in the book is about that. And a lot of people say things
like, well, machines can never have a subjective
experience and feel anything at all, because
to feel something, you have to be made of cells,
or carbon atoms, or whatever. As a scientist, I really hate
this kind of carbon chauvinism. I'm made of the same kind
of up-quarks, down-quarks, and electrons as all
the computers are. Mine are just arranged in
a slightly different way. And it's obviously something
about the information processing that's all
that matters, right? And moreover, these kinds of
self-justifying arguments have been used by people
throughout history to say, oh, it's OK
to torture slaves because they don't have souls. They don't feel anything. Oh, it's OK to torture chickens
today in giant factories because they don't
feel anything. And of course,
we're going to say that about our future
computers, too, because it's convenient for us. But that doesn't mean it's true. And I think it's actually a
really, really interesting question, to first figure
out, what is it exactly that makes an information
processing system have subjective experience? A lot of my colleagues,
whom I really respect, think this is just BS,
this whole question. This is what Daniel
Dennett says. I looked up consciousness in the "Macmillan Dictionary of Psychology" and it said that consciousness is something on which nothing worth reading has ever been written. But I really disagree with this. And actually, let me
just take one minute and explain why I think this
is actually a scientifically interesting question. So look at this. OK, and ask yourself,
why is it that when I show you 450
nanometer light on the left, and 650 nanometer
light on the right, why do you subjectively
experience it like this, [CLICK] and not like this? Why like this,
and not like this? I put to you that this is
a really fair-game science question that we simply don't
have an answer to right now. There's nothing to do
with wavelengths of light, or neurons, or anything
that explains this, but it's an observational fact. And I would like to understand,
why does it feel like anything, why do we have this experience? You might say,
well, look, we know that there are three
kinds of light sensors in the retina, the cones. The 450 nanometer light activates one kind, the longer wavelength activates another kind, and then you can see how they're
connected to various neurons in the back of your brain. But that just
sharpens the question, the mystery of
consciousness, because this proves that it had nothing
to do with light at all, because you can
experience colors even when you're dreaming, when
different neurons in your brain are active, when there is
no light involved, right? So my guess is that
consciousness-- by which I mean subjective
experience-- is simply the way information feels when
it's being processed in certain complex ways. And I think there
are some equations that we will one day
discover that specify what those complex ways are. And once we can
figure that out, it'll both be very useful because
we can put a consciousness detector in the
emergency room, and when an unresponsive patient
comes in, you can figure out if they have locked-in
syndrome, or not. And it will also
enable us to answer this really good question you
asked about whether machines should also be viewed
as moral entities that can have feelings. And above all, and I don't
see Ray Kurzweil here today, but if he can one day upload
himself into a Ray Kurzweil robot and live on for thousands
of years, and he talks like Ray and he looks like Ray
and he acts like Ray, you'll feel that
that's great for Ray. Now he's immortal. But suppose it turns out that
that machine is just a zombie and doesn't feel like
anything to be it, he would be pretty
bummed, wouldn't he? All right? And if in the future, life
spreads throughout our cosmos in some post-biological
form, and we're like, this is so exciting. Our descendants are doing
all these great things and we can die happy. If it turns out that they're all
just a bunch of zombies and all that cool stuff is just
a play for empty benches, wouldn't that suck? JOHN BRACAGLIA: I'll do
another question from the Dory. "What do you think is the most
effective way for individuals to embrace or promote a
security-engineering mentality, i.e., where not even one glitch
is tolerable, when working on AI-related projects?" MAX TEGMARK: Well,
first of all, I think we have a lot
to learn from existing successes in safety engineering,
that's why I started by showing the moon mission. It's not like this is
anything new to engineers. I think it's just that
we're so used to the idea that AI didn't
work that we felt we didn't need to worry about its impact. And now it is beginning
to have an impact, so we should think it through. And then there are
also a few challenges which are really unique
and specific to AI. Some of the Asilomar
Principles talk about them and this Research Agenda
for AI Safety Research is a really long list of
specifics of safety engineering challenges that we need smart
people like you to work on. And I hope we can support that. AUDIENCE: So also on the
topic of security engineering, a lot of rockets blew up
on the way to the moon. MAX TEGMARK: Yeah. AUDIENCE: And given the
intelligence explosion, it's like we're
only going to have one chance to be able to get
the alignment problem correct. And I think we
couldn't even align on a set of values in this
room, let alone a system that would govern the
world effectively, because there are certainly some drawbacks of capitalism. I am glad that Elon is hedging our bets by making a magic hat, but it seems like you and your group are focusing on the alignment problem, and I'm just curious what makes you optimistic that we're going to be able to get it right the first time? MAX TEGMARK: So first of all,
yeah, a lot of rockets blew up. But you will note that
most of the rockets that blew up, in fact all the rockets that blew up in the moon program, had no people in them, right? So that was safety engineering. The high-risk stuff, they did
it in a controlled environment where the failure
didn't matter so much. So if you make some
really advanced AI, you want to understand
it really well, maybe don't connect it to the
internet the first time, right? So the downsides are small. There's a lot of things
like this that you can do. And I'm not saying that there's
one thing that we should particularly focus on, either. I think the community
has brainstormed up a really nice, long
list of things, and we should really
try to work on them all, and then we'll figure out some
more challenges along the way. But the main thing we need to do is just acknowledge that yeah, this is valuable. Let's work on it. Then you asked also
why I'm optimistic. Let me just clarify. There are two kinds of optimism. There's naive optimism,
like my optimism that the sun is going to rise
over Mountain View tomorrow morning, regardless
of what we do. That's not the kind
of optimism I feel about the future of technology. Then there's the
kind of optimism that you're optimistic that
this can go well if we really, really plan and work for it. That's the kind of
optimism I feel here. We have it in our hands to create an awesome future, so let's roll up
our sleeves and do it. AUDIENCE: Hey, Max. In the paper that you
wrote entitled "Why Does Cheap and Deep
Learning Work So Well?" with Lin, and now Rolnick, as
well, you ask a key question and you draw a
lot of connections between a deep learning
and then core parts of what we know about physics. Low polynomial order,
hierarchical processes, things like that. I'm just curious, what
are the reactions you've received both from
the physics community, and then from the AI community
to that attempt to kind of draw some deep parallels? MAX TEGMARK: Generally
quite positive feedback. And then also people
who have pointed out a lot of additional research
questions related to that, which are really worth doing. And just to bring
everybody up to speed as to what we're talking about,
so we don't devolve into just discussing a nerdy
research paper here, we were very intrigued
by the question of why deep learning works
so well, because if you think about it naively, even if
I just want to classify, say, all the Google Images that have cats and dogs, and I want a neural network that will take in, say,
1 million pixels and output the probability
that it's a cat, right? If you think about
it just a little bit, you might convince
yourself that it's impossible because how
many such images are there? Even if they're just
black-and-white images, each pixel can be
black or white, there's 2 to the power of 1
million possible images, which is much more images than there
are atoms in our universe. There's only 10
to the 78, right? And for each image, you have
to output a probability. So to specify an arbitrary
function of images, how many parameters
do you need for that? Well, 2 to the power of 1 million, which you can't fit even if you store one parameter on each atom in our cosmos. So how can it work so well?
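A quick back-of-the-envelope check of that counting argument, under the assumptions stated above (1-megapixel black-and-white images, roughly 10 to the 78 atoms):

```python
import math

pixels = 10**6
# Number of distinct images is 2**pixels; count its decimal digits via log10.
digits = math.floor(pixels * math.log10(2)) + 1
print(f"2^(10^6) is a number with about {digits:,} digits")   # ~301,030 digits
print("versus roughly 10^78 atoms in the observable universe, a 79-digit number")
```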
So the basic conclusion we found there was that, of course, the class of functions that you can compute well with a neural network that you can actually run is an almost infinitesimally tiny fraction of all functions. But then physics tells
us that the fraction of all functions that
we actually care about, because they're
relevant to our world, is also an almost
infinitesimally small fraction. And conveniently,
they're almost the same. I don't think this was luck. I think Darwinian
evolution gave us this particular kind of
neural network-based computer precisely because it's
really well-tuned for tapping into the kind of
computational needs that our cosmos has
dished out to us. And I'll be delighted
to chat more with you later about loose
ends to this, because I think there's a lot
more interesting stuff to be done on that. JOHN BRACAGLIA: Take
a Dory question. "Being humans in the age of AI
seems like an egocentric effort that gives an undeserved
special status to our species. Why should we even
bother to remain humans when we could get to
push our boundaries and see where we get?" MAX TEGMARK: All right. [LAUGHING] "An egocentric effort that gives
an undeserved special status to our species." Well, first of
all, you know, I'm totally fine with
pushing our boundaries and I've been advocating
for doing this. I mean, I find it very
annoying human hubris when we go on a soapbox and
we're like, (DEEPLY) we are the pinnacle of
creation and nothing can ever be smarter
than us, and we try now to build our whole self-worth
on some notion of human exceptionalism. I think that's kind of lame. On the other hand-- we should probably make
this the last question. On the other hand,
egocentric efforts-- well, we are the only ones-- it's only us humans who are in
this conversation right now, and somebody needs to have it. So it's really up to us
to talk about it, right? We can't use this kind
of thinking as an excuse to just not talk
about it and just bumble into some completely
uncontrolled future. I think we should take a
firm grip on the rudder and steer in whatever direction
we decide to steer in. So let me thank you again
so much for coming out. It's a wonderful
pleasure to be here. [APPLAUSE] And if you have any more
questions you didn't get in, I'll be here signing books
and I'm happy to chat more. JOHN BRACAGLIA: Thank you all
for coming and thanks, Max, for talking at Google. MAX TEGMARK: And thank
you for having me.