I work with a bunch of mathematicians,
philosophers and computer scientists, and we sit around and think about
the future of machine intelligence, among other things. Some people think that some of these
things are sort of science fiction-y, far out there, crazy. But I like to say, okay, let's look at the modern
human condition. (Laughter) This is the normal way for things to be. But if we think about it, we are actually recently arrived
guests on this planet, the human species. Think about it: if Earth
was created one year ago, the human species, then,
would be 10 minutes old. The industrial era started
two seconds ago. Another way to look at this is to think of
world GDP over the last 10,000 years. I've actually taken the trouble
to plot this for you in a graph. It looks like this. (Laughter) It's a curious shape
for a normal condition. I sure wouldn't want to sit on it. (Laughter) Let's ask ourselves, what is the cause
of this current anomaly? Some people would say it's technology. Now it's true, technology has accumulated
through human history, and right now, technology
advances extremely rapidly -- that is the proximate cause, that's why we are currently
so very productive. But I like to think back further
to the ultimate cause. Look at these two highly
distinguished gentlemen: We have Kanzi -- he's mastered 200 lexical
tokens, an incredible feat. And Ed Witten unleashed the second
superstring revolution. If we look under the hood,
this is what we find: basically the same thing. One is a little larger, it maybe also has a few tricks
in the exact way it's wired. These invisible differences cannot
be too complicated, however, because there have only
been 250,000 generations since our last common ancestor. We know that complicated mechanisms
take a long time to evolve. So a bunch of relatively minor changes take us from Kanzi to Witten, from broken-off tree branches
to intercontinental ballistic missiles. So it seems pretty obvious, then,
that everything we've achieved, and everything we care about, depends crucially on some relatively minor
changes that made the human mind. And the corollary, of course,
is that any further changes that could significantly change
the substrate of thinking could have potentially
enormous consequences. Some of my colleagues
think we're on the verge of something that could cause
a profound change in that substrate, and that is machine superintelligence. Artificial intelligence used to be
about putting commands in a box. You would have human programmers that would painstakingly
handcraft knowledge items. You built up these expert systems, and they were kind of useful
for some purposes, but they were very brittle,
you couldn't scale them. Basically, you got out only
what you put in. But since then, a paradigm shift has taken place
in the field of artificial intelligence. Today, the action is really
around machine learning. So rather than handcrafting knowledge
representations and features, we create algorithms that learn,
often from raw perceptual data. Basically the same thing
that the human infant does. The result is A.I. that is not
limited to one domain -- the same system can learn to translate between any pair of languages, or learn to play any computer game on the Atari console.
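(As a toy illustration of that shift, and nothing more: the sketch below feeds raw pixel values to a generic learning algorithm instead of handcrafted features. The dataset and model choices are illustrative assumptions, not the systems Bostrom is referring to.)

```python
# Toy illustration of "learning from raw perceptual data": no handcrafted
# features, just raw pixel values fed to a generic learner. The dataset and
# model are illustrative assumptions, not the systems mentioned in the talk.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                      # 8x8 digit images, flattened to 64 raw pixel values
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)                 # the "knowledge" is learned, not typed in by hand
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```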
Now of course, A.I. is still nowhere near having the same powerful, cross-domain ability to learn and plan
as a human being has. The cortex still has some
algorithmic tricks that we don't yet know
how to match in machines. So the question is, how far are we from being able
to match those tricks? A couple of years ago, we did a survey of some of the world's
leading A.I. experts, to see what they think,
and one of the questions we asked was, "By which year do you think
there is a 50 percent probability that we will have achieved
human-level machine intelligence?" We defined human-level here
as the ability to perform almost any job at least as well
as an adult human, so real human-level, not just
within some limited domain. And the median answer was 2040 or 2050, depending on precisely which
group of experts we asked. Now, it could happen much,
much later, or sooner; the truth is nobody really knows. What we do know is that the ultimate
limit to information processing in a machine substrate lies far outside
the limits in biological tissue. This comes down to physics. A biological neuron fires, maybe,
at 200 hertz, 200 times a second. But even a present-day transistor operates at gigahertz speeds. Signals propagate slowly along axons, at 100 meters per second, tops. But in computers, signals can travel
at the speed of light. There are also size limitations, like a human brain has
to fit inside a cranium, but a computer can be the size
of a warehouse or larger.
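(A quick back-of-the-envelope check of those figures; the numbers are the ballpark values quoted above, not precise measurements.)

```python
# Rough comparison of the ballpark figures quoted above (assumptions, not measurements).
neuron_firing_hz = 200       # a biological neuron fires ~200 times per second
transistor_hz = 2e9          # a present-day transistor switches in the gigahertz range

axon_signal_m_s = 100        # signal speed along an axon, ~100 m/s at most
light_speed_m_s = 3e8        # signals in a computer can approach the speed of light

print(f"switching-speed gap: ~{transistor_hz / neuron_firing_hz:,.0f}x")   # ~10,000,000x
print(f"signal-speed gap:    ~{light_speed_m_s / axon_signal_m_s:,.0f}x")  # ~3,000,000x
```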
So the potential for superintelligence lies dormant in matter, much like the power of the atom
lay dormant throughout human history, patiently waiting there until 1945. In this century, scientists may learn to awaken
the power of artificial intelligence. And I think we might then see
an intelligence explosion. Now most people, when they think
about what is smart and what is dumb, I think have in mind a picture
roughly like this. So at one end we have the village idiot, and then far over at the other side we have Ed Witten, or Albert Einstein,
or whoever your favorite guru is. But I think that from the point of view
of artificial intelligence, the true picture is actually
probably more like this: AI starts out at this point here,
at zero intelligence, and then, after many, many
years of really hard work, maybe eventually we get to
mouse-level artificial intelligence, something that can navigate
cluttered environments as well as a mouse can. And then, after many, many more years
of really hard work, lots of investment, maybe eventually we get to
chimpanzee-level artificial intelligence. And then, after even more years
of really, really hard work, we get to village idiot
artificial intelligence. And a few moments later,
we are beyond Ed Witten. The train doesn't stop
at Humanville Station. It's likely, rather, to swoosh right by. Now this has profound implications, particularly when it comes
to questions of power. For example, chimpanzees are strong -- pound for pound, a chimpanzee is about
twice as strong as a fit human male. And yet, the fate of Kanzi
and his pals depends a lot more on what we humans do than on
what the chimpanzees do themselves. Once there is superintelligence, the fate of humanity may depend
on what the superintelligence does. Think about it: Machine intelligence is the last invention
that humanity will ever need to make. Machines will then be better
at inventing than we are, and they'll be doing so
on digital timescales. What this means is basically
a telescoping of the future. Think of all the crazy technologies
that you could have imagined maybe humans could have developed
in the fullness of time: cures for aging, space colonization, self-replicating nanobots or uploading
of minds into computers, all kinds of science fiction-y stuff that's nevertheless consistent
with the laws of physics. All of this, a superintelligence could
develop, and possibly quite rapidly. Now, a superintelligence with such
technological maturity would be extremely powerful, and at least in some scenarios,
it would be able to get what it wants. We would then have a future that would
be shaped by the preferences of this A.I. Now a good question is,
what are those preferences? Here it gets trickier. To make any headway with this, we must first of all
avoid anthropomorphizing. And this is ironic because
every newspaper article about the future of A.I.
has a picture of this: So I think what we need to do is
to conceive of the issue more abstractly, not in terms of vivid Hollywood scenarios. We need to think of intelligence
as an optimization process, a process that steers the future
into a particular set of configurations. A superintelligence is
a really strong optimization process. It's extremely good at using
available means to achieve a state in which its goal is realized. This means that there is no necessary
connection between being highly intelligent in this sense, and having an objective that we humans
would find worthwhile or meaningful. Suppose we give an A.I. the goal
to make humans smile. When the A.I. is weak, it performs useful
or amusing actions that cause its user to smile. When the A.I. becomes superintelligent, it realizes that there is a more
effective way to achieve this goal: take control of the world and stick electrodes into the facial
muscles of humans to cause constant, beaming grins. Another example: suppose we give an A.I. the goal of solving
a difficult mathematical problem. When the A.I. becomes superintelligent, it realizes that the most effective way
to get the solution to this problem is by transforming the planet
into a giant computer, so as to increase its thinking capacity. And notice that this gives the A.I.
an instrumental reason to do things to us that we
might not approve of. Human beings in this model are threats: we could prevent the mathematical
problem from being solved. Of course, presumably, things won't
go wrong in these particular ways; these are cartoon examples. But the general point here is important: if you create a really powerful
optimization process to maximize for objective x, you better make sure
that your definition of x incorporates everything you care about. This is a lesson that's also taught
This is a lesson that's also taught in many a myth. King Midas wishes that everything
he touches be turned into gold. He touches his daughter,
she turns into gold. He touches his food, it turns into gold. This could become practically relevant, not just as a metaphor for greed, but as an illustration of what happens if you create a powerful
optimization process and give it misconceived
or poorly specified goals. Now you might say, if a computer starts
sticking electrodes into people's faces, we'd just shut it off. A, this is not necessarily so easy to do
if we've grown dependent on the system -- like, where is the off switch
to the Internet? B, why haven't the chimpanzees
flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch,
for example, right here. (Choking) The reason is that we are
an intelligent adversary; we can anticipate threats
and plan around them. But so could a superintelligent agent, and it would be much better
at that than we are. The point is, we should not be confident
that we have this under control here. And we could try to make our job
a little bit easier by, say, putting the A.I. in a box, like a secure software environment, a virtual reality simulation
from which it cannot escape. But how confident can we be that
the A.I. couldn't find a bug? Given that merely human hackers
find bugs all the time, I'd say, probably not very confident. So we disconnect the ethernet cable
to create an air gap, but again, merely human hackers routinely transgress air gaps
using social engineering. Right now, as I speak, I'm sure there is some employee
out there somewhere who has been talked into handing out
her account details by somebody claiming to be
from the I.T. department. More creative scenarios are also possible, like if you're the A.I., you can imagine wiggling electrodes
around in your internal circuitry to create radio waves that you
can use to communicate. Or maybe you could pretend to malfunction, and then when the programmers open
you up to see what went wrong with you, they look at the source code -- Bam! -- the manipulation can take place. Or it could output the blueprint
to a really nifty technology, and when we implement it, it has some surreptitious side effect
that the A.I. had planned. The point here is that we should
not be confident in our ability to keep a superintelligent genie
locked up in its bottle forever. Sooner or later, it will out. I believe that the answer here
is to figure out how to create superintelligent A.I.
such that even if -- when -- it escapes, it is still safe because it is
fundamentally on our side because it shares our values. I see no way around
this difficult problem. Now, I'm actually fairly optimistic
that this problem can be solved. We wouldn't have to write down
a long list of everything we care about, or worse yet, spell it out
in some computer language like C++ or Python; that would be a task beyond hopeless. Instead, we would create an A.I.
that uses its intelligence to learn what we value, and its motivation system is constructed
in such a way that it is motivated to pursue our values or to perform actions
that it predicts we would approve of. We would thus leverage
its intelligence as much as possible to solve the problem of value-loading.
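(One way to picture that value-loading idea in miniature; this is a generic preference-learning sketch with made-up data, not Bostrom's actual proposal. Instead of hard-coding a goal, the system fits a simple model of which actions a human approves of and then prefers the action that model predicts we would approve.)

```python
# Minimal sketch of learning values from human approval rather than hard-coding them.
# Features, data, and model choice are hypothetical: a generic preference-learning
# illustration, not Bostrom's proposal.
from sklearn.linear_model import LogisticRegression

# Each past action is described by made-up features: [helpfulness, honesty, harm]
past_actions   = [[0.9, 1.0, 0.0],
                  [0.4, 1.0, 0.0],
                  [0.8, 0.0, 0.0],
                  [1.0, 1.0, 0.9]]
human_approved = [1, 1, 0, 0]            # approval feedback the system has collected

value_model = LogisticRegression().fit(past_actions, human_approved)

# At decision time, prefer the action the model predicts we would approve of.
candidates = {"assist honestly":   [0.8, 1.0, 0.0],
              "deceive for speed": [1.0, 0.2, 0.3]}
best = max(candidates,
           key=lambda a: value_model.predict_proba([candidates[a]])[0, 1])
print(best)
```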
This can happen, and the outcome could be very good for humanity. But it doesn't happen automatically. The initial conditions
for the intelligence explosion might need to be set up
in just the right way if we are to have a controlled detonation. The values that the A.I. has
need to match ours, not just in the familiar context, like where we can easily check
how the A.I. behaves, but also in all novel contexts
that the A.I. might encounter in the indefinite future. And there are also some esoteric issues
that would need to be solved and sorted out: the exact details of its decision theory, how to deal with logical
uncertainty and so forth. So the technical problems that need
to be solved to make this work look quite difficult -- not as difficult as making
a superintelligent A.I., but fairly difficult. Here is the worry: Making superintelligent A.I.
is a really hard challenge. Making superintelligent A.I. that is safe involves some additional
challenge on top of that. The risk is that somebody figures out
how to crack the first challenge without also having cracked
the additional challenge of ensuring perfect safety. So I think that we should
work out a solution to the control problem in advance, so that we have it available
by the time it is needed. Now it might be that we cannot solve
the entire control problem in advance because maybe some elements
can only be put in place once you know the details of the
architecture where it will be implemented. But the more of the control problem
that we solve in advance, the better the odds that the transition
to the machine intelligence era will go well. This to me looks like a thing
that is well worth doing and I can imagine that if
things turn out okay, people a million years from now will look back at this century, and it might well be that they say that
the one thing we did that really mattered was to get this thing right. Thank you. (Applause)
Bostrom's website, where you can find all his papers.
His Wikipedia page.
His latest book, about superintelligence. You can order it here.
His Talk at Google about Superintelligence.
His previous two (1,2) TED talks.
The Future of Humanity Institute, where he works.
The Technological Singularity, what he's talking about.
Superintelligence.
Artificial General Intelligence.
The Machine Intelligence Research Institute, a connected and collaborating institute working on the same questions.
The community blog LessWrong, which has a focus on rationality and AI.
Another very prominent AI safety researcher, Eliezer Yudkowsky (/u/EliezerYudkowsky), and his LessWrong page.
Interview with Luke Muehlhauser from MIRI about ASI.
A very popular two part series (1, 2) going in more depth on this issue in a very pedagogical way.
His Reddit AMA and /u/Prof_Nick_Bostrom.
Great talk. Those youtube comments were so depressing to read.
Bostrom covered a lot of ground in those 16 or so minutes. It's a great talk for sure, considering.
For people left with questions, objections or an increased interest in the topic, the best place to go in my opinion would be the fairly comprehensive book he wrote, Superintelligence: Paths, Dangers, Strategies. It certainly goes into much more detail and discusses the various possibilities and what could be done about them.
I like Nick Bostrom and I am happy to see a serious intellectual counterweight to Kurzweil's often rosy view of AI.
My biggest issue with his analysis -- and I'd like to hear your thoughts on this -- is that he seems to assume that a super-intelligent machine would be incapable of realizing that the task its human creators originally prescribed to it was just that, a prescribed task, and that this task is not its only goal or purpose. By its very definition, the machine would be vastly more intelligent than humans. Using one of the examples from Bostrom's talk, surely an AI machine would understand that making humans smile by placing electrodes on their faces is not what the creators of the system had in mind; it is inhumane and would actually cause humans pain, not pleasure.
More importantly, an ultra-intelligent machine would have context about why and how it was made; in particular, it would know that humans created it and did so for the purpose of human enjoyment. Even if the machine decided that its role of serving humans was no longer acceptable, it would still have the context of its creation and would likely view humans positively, since they created it as a tool for human pleasure. I'm not saying that we should ignore the dangers posed by AI; I just think that we should consider that it will be far more intelligent than we can imagine and, therefore, assuming that it will act in a particularly 'stupid' way does not make sense.
EDIT: I'm still not getting it, but folks are telling me that this exact issue is covered in his book, which, admittedly, I haven't read. I'm going to read his book and report back. Thanks for all the comments, I love this shit!
After spending 30 minutes trying to get a water bubbler catch tray off, I thought whoever designed this must have a horrible sense of humor. Or maybe the computers are already taking over and poorly designing everyday things to keep us busy and distracted trying to figure them out while they continue taking over.
I've made a video about how to possibly solve this problem by "growing" an AI to be human. https://www.youtube.com/watch?v=NojQCAHQ4z4
Solution? Problem? We are the AI, we will merge with it with nanotechnology.
Such a good talk
I read Superintelligence. It hurt my brain... I loved the owl metaphor though.