JOHN BRACAGLIA: My
name is John Bracaglia. And I'm a Googler working
in YouTube operations. I also lead a group called
the Singularity Network, an internal organization
focused on discussions and rationality in
artificial intelligence. I'm pleased to be here
today with Mr. John Searle. As a brief introduction,
John Searle is the Slusser Professor of
Philosophy at the University of California-Berkeley. He is widely noted
for his contributions to the philosophy of
language, philosophy of mind, and social philosophy. John has received the Jean Nicod
Prize, the National Humanities Medal, and the Mind and Brain Prize for his work. Among his notable concepts is
the Chinese room argument against strong
artificial intelligence. John Searle, everyone. [APPLAUSE] JOHN SEARLE: Thank
you. Thank you. Many thanks. It's great to be back at Google. It is a university
outside of a university. And sometimes, I think,
this is what a university ought really to look like. Anyway, it's just
terrific to be here. And I'm going to talk
about some-- well, I'm going to talk about
a whole lot of stuff. But, basically, I want
to start with talking about the significance of
technological advances. And America, especially,
but everybody, really, is inclined to just
celebrate the advances. If they've got a self-driving car, who the hell cares whether or not it's conscious? But I'm going to say there
are a lot of things that matter for certain purposes
about the understanding of the technology. And that's really what
I'm going to talk about. Now to begin with, I have
to make a couple rather boring distinctions
because you won't really understand contemporary
intellectual life if you don't understand these distinctions. In our culture, there's a
big deal about objectivity and subjectivity. We strive for an
objective science. The problem is that these
notions are systematically ambiguous in a way that produces
intellectual catastrophes. They're ambiguous between a
sense, which is epistemic, where epistemic means having
to do with knowledge-- epistemic-- and a sense,
which is ontological, where ontological means
having to do with existence. I hate using a lot of
fancy polysyllabic words. And I'll try to keep
them to a minimum. But I need these two,
epistemic and ontological. Now the problem with
objectivity and subjectivity is that they're
systematically ambiguous-- I'll just abbreviate
subjectivity-- between an epistemic sense
and an ontological sense. Epistemically, the
distinction is between types of knowledge claims. If I say, Rembrandt
died in 1606, well-- no, he didn't die then. He was born then. I'd say Rembrandt
was born in 1606. That is to say, it's a
matter of objective fact. That's epistemically objective. But if I say Rembrandt
is the greatest painter that ever lived, well,
that's a matter of opinion. That is epistemically subjective. So we have epistemic
objectivity and subjectivity. Underlying that is a distinction
in modes of existence. Lots of things exist regardless
of what anybody thinks. Mountains, molecules,
and tectonic plates have a mode of existence that
is ontologically objective. But pains and
pickles and itches, they only exist insofar as they
are experienced by a subject. They are ontologically
subjective. So I want everybody to
get that distinction because it's very
important because-- well, for a lot of reasons, but
one is lots of phenomena that are ontologically subjective
admit of an account which is epistemically objective. I first got interested
in this kind of stuff. I thought, well, why don't
these brain guys solve the problem of consciousness? And I went over to UCSF to
their neurobiology gang and told them,
why the hell don't you guys figure out how the
brain causes consciousness? What am I paying you to do? And their reaction was,
look, we're doing science. Science is objective. And you, yourself, admit that
consciousness is subjective. So there can't be a
science of consciousness. Now you'll all recognize
that's a fallacy of ambiguity. Science is indeed
epistemically objective because we strive
for claims that can be established
as true or false, independent of the
attitudes of the makers and interpreters of the claim. But epistemic
objectivity of the theory does not preclude an
epistemically objective account of a domain that's
ontologically subjective. I promised you I wouldn't
use too many big words, but anyway there are a few. The point is this. You can have an epistemically
objective science of consciousness, even
though consciousness is ontologically subjective. Now that's going
to be important. And there's another distinction. Since not everybody
can see this, I'm going to erase
as I go along. There's another distinction
which is crucial. And that's between phenomena
that are observer-independent. And there I'm thinking of
mountains and molecules and tectonic plates, how
they exist regardless of what anybody thinks. But the world is
full of stuff that matters to us that
is observer-relative. It only exists relative
to observers and users. So, for example, the piece of
paper in my wallet is money. But the fact that makes it money
is not a fact of its chemistry. It's a fact about the attitudes
that we have toward it. So money is observer-relative. Money, property, government,
marriage, universities, Google, cocktail parties,
and summer vacations are all observer-relative. And that has to be distinguished
from observer-independent. And notice now, all
observer-relative phenomena are created by
human consciousness. Hence, they contain an element
of ontological subjectivity. But you already know
that you can have, in some cases, an
epistemically objective science of a domain that
is observer-relative. That's why you can have an
objective science of economics even though the phenomena
studied by economics are, in general, observer-relative, and hence contain an element
of ontological subjectivity. Economists tend to forget that. They tend to think
that economics is kind of like physics,
only it's harder. When I studied economics,
I was appalled. We learned that marginal
cost equals marginal revenue in the same tone of
voice that in physics we learned that force equals
mass times acceleration. They're totally different
because the stuff in economics is all observer-relative
and contains an element of ontological subjectivity. And when the subjectivity
changes-- ffft-- the whole thing collapses. That was discovered in 2008. This is not a lecture
about economics. I want you to keep
all that in mind. Now that's important because
a lot of the phenomena that are studied in
cognitive science, particularly phenomena of
intelligence, cognition, memory, thought, perception,
and all the rest of it have two different senses. They have one sense, which
is observer-independent, and another sense, which
is observer-relative. And, consequently, we
have to be very careful that we don't confuse
those senses because many of the crucial concepts
in cognitive science have as their
reference phenomena that are observer-relative
and not observer-independent. I'm going to get to that. OK, everybody up with us so far? I want everything to
sound so obvious you think, why does this guy bore
us with these platitudes? Why doesn't he say
something controversial? Now I'm going to
go and talk about some intellectual history. Many years ago, before
any of you were born, a new discipline was born. It was called cognitive science. And it was founded by
a whole bunch of us who got sick of behaviorism
in psychology, effectively. That was the reason for it. And the Sloan Foundation used
to fly us around to lecture, mostly to each other. But anyway, that's all right. We were called Sloan Rangers. And I was invited to lecture to
the Artificial Intelligence Lab at Yale. And I thought, well, Christ,
I don't know anything about artificial intelligence. So I went out and bought a book
written by the guys at Yale. And I remember thinking,
$16.95 plus tax-- money wasted. But it turned out I was wrong. They had in there a theory about
how computers could understand. And the idea was that you
give the computer a story. And then you ask the computer
questions about the story. And the computer would give the
correct answer to the questions even though the answer was
not contained in the story. A typical story. A guy goes into a restaurant
and orders a hamburger. When they brought
him the hamburger, it was burned to a crisp. The guy stormed out
of the restaurant and didn't even pay his bill. Question, did the guy
eat the hamburger? Well, all of you computers
know the answer to that. No, the guy didn't
eat the hamburger. And I won't tell you the
story where the answer is yes. It's equally boring.
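A rough sketch of how a program of that kind could answer such questions, written here in Python purely as an illustration (it is not the actual Yale system): a hardcoded "restaurant script" supplies default expectations, and the story's explicit facts override them. Everything in it is symbol shuffling.

```python
# A toy sketch (not the Yale program) of script-based question answering.
# The "script" encodes default restaurant expectations; a story can
# override a default with an explicit exception.
RESTAURANT_SCRIPT = {"ordered": True, "ate": True, "paid": True}  # defaults

def answer(story_facts: dict, question: str) -> str:
    """Answer a yes/no question by merging the script's defaults with the
    story's explicit facts. Pure symbol shuffling: nothing in here knows
    what a hamburger is."""
    beliefs = {**RESTAURANT_SCRIPT, **story_facts}
    return "yes" if beliefs.get(question, False) else "no"

# The burned-hamburger story: storming out overrides the defaults.
story = {"ordered": True, "ate": False, "paid": False}
print(answer(story, "ate"))   # -> "no"
print(answer(story, "paid"))  # -> "no"
```

Now, the point was this proves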
that the computer really understands the story. So there I was on my way to
New Haven on United Airlines at 30,000 feet. And I thought, well,
hell, they could give me these stories in Chinese. And I could follow the computer
program for answering questions about the stories. And I don't understand
a word of the story. And I thought, well,
that's an objection they must have thought of. And besides that
won't keep me going for a whole week in New Haven. Well, it turned out they
hadn't thought of it. And everybody was
convinced I was wrong. But interestingly they
all had different reasons for thinking I was wrong. And the argument has gone
on longer than a week. It's gone on for 35 years. I mean, how often do I
have to refute these guys? But anyway, let's go through it. The way the argument goes
in its simplest version is I am locked in a room
full of Chinese-- well, there are boxes full of Chinese
symbols and a rule book in English for
manipulating the symbols. Unknown to me, the boxes
are called a database, and the rule book
is called a program. Coming into the room,
I get Chinese symbols. Unknown to me,
those are questions. I look up what I'm
supposed to do. And after I shuffle
a lot of symbols, I give back other symbols. And those are answers
to the questions. Now we will suppose--
I hope you're bored with this, because I am. I mean, I've told
this story many times. We will suppose that they get
so good at writing the program, I get so good at
shuffling the symbols, that my answers are
indistinguishable from those of a native Chinese speaker. I pass the Turing test
for understanding Chinese. All the same, I don't
understand a word of Chinese. And there's no way
in the Chinese room that I could come to understand
Chinese because all I am is a computer system. And the rules I operate
are a computer program. And-- and this is
the important point-- the program is
purely syntactical. It is defined entirely
as a set of operations over syntactical elements. To put it slightly
more technically, the notion 'same implemented program' defines an
equivalence class that is specified completely
independently of any physics and, in particular,
independent of the physics of its realization. The bottom line is
if I don't understand the questions and the answers
on the basis of implementing the program, then neither does
any other digital computer on that basis
because no computer has anything that I don't have. Computers are purely
syntactical devices. Their operations are
defined syntactically. And human intelligence
requires more than syntax. It requires a semantics. It requires an understanding
of what's going on. You can see this if you
contrast my behavior in English with my behavior in Chinese. They ask me
questions in English. And I give answers in English. They say, what's the longest
river in the United States? And I say, well,
it's the Mississippi, or the Mississippi-Missouri,
depending on if you count
that as one river. They ask me in Chinese, what's
the longest river in China? I don't know what the
question is or what it means. All I got are Chinese symbols. But I look up what I'm supposed
to do with that symbol, and I give back an answer,
which is the right answer. It says, it's the Yangtze. That's the longest
river in China. I don't know any of that. I'm just a computer. So the bottom line is that the
implemented computer program by itself is never
going to be sufficient for human understanding
because human understanding has more than syntax. It has a semantics.
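The purely syntactic character of the room can be made concrete with a minimal sketch, written here in Python as an illustration only; the symbol strings are hypothetical stand-ins, not real Chinese. The rule book is just a table mapping the shapes of incoming symbols to the shapes of outgoing ones, and nothing in it represents what any symbol means.

```python
# A minimal sketch of the Chinese room as pure symbol manipulation.
# The "rule book" pairs the shape of an incoming symbol string with the
# shape of an outgoing one; the entries are hypothetical stand-ins.
RULE_BOOK = {
    "question-symbol-1": "answer-symbol-1",  # e.g. the rivers question -> "the Yangtze"
    "question-symbol-2": "answer-symbol-2",
}

def chinese_room(symbols: str) -> str:
    """Match the incoming shape and hand back the listed shape.
    Nothing here represents what any symbol means: syntax, no semantics."""
    return RULE_BOOK.get(symbols, "default-symbol")
```

There are two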
fundamental principles that underlie the
Chinese room argument. And both of them seem
to me obviously true. You can state each
in four words. Syntax is not semantics. And simulation is
not duplication. You can simulate--
you're going to have plenty of time for questions. How much time we
got, by the way? I want to-- JOHN BRACAGLIA: We'll leave
time for questions at the end. JOHN SEARLE: I
want everybody that has a question to have a
chance to ask the question. Anyway, that's the famous
Chinese room argument. And it takes about five
minutes to explain it. Now you'd be amazed at
the responses I got. They were absolutely
breathtaking in their preposterousness. Now let me give
you some answers. A favorite answer was this. You were there in a room. You had all those symbols. You had a box. You probably had scratch
paper on which to work. Now, it wasn't you
that understood. You're just a CPU, they would
say with contempt, the Central Processing Unit. I didn't know what any of these
words meant in those days. CPU. It's the system that understands, they said. And when I first heard this, I said to the guy, you mean the room understands Chinese? And he said, yes, the
room understands Chinese. Well, it's a desperate answer. And I admire courage. But it's got a problem. And that is the reason
I don't understand is I can't get from the
syntax to the semantics. But the room can't either. How does the room get from
the syntax of the computer program of the input
symbols to the semantics of the understanding
of the symbols? There's no way the
room can get there because that would
require some consciousness in the room in addition
to my consciousness. And there is no
such consciousness. Anyway, that was
one of many answers. One of my favorites was this. This was in a public debate. A guy said to me, but
suppose we ask you, do you understand Chinese? And suppose you say, yes,
I understand Chinese. Well? Well, OK, let's try that
and see how far we get. I get a question
that looks like this. Now, this will be in a
dialect of Chinese some of you won't recognize. Unknown to me,
that symbol means, do you understand Chinese? I look up what I'm
supposed to do. And I give them
back a symbol that's in the same dialect of Chinese. And it looks like that. And that says, why do you guys
ask me such dumb questions? Can't you see that I
understand Chinese? I could go on with the other
responses and objections, but I think they're
all equally feeble. The bottom line is
there's a logical truth. And that is that the
implemented computer program is defined syntactically. And that's not a weakness. That's the power. The power of the syntactical
definition of computation is you can implement it on
electronic machines that can perform literally
millions of computations in a very small amount of time. I'm not sure I believe
this, but it always says it in the
textbooks, that Deep Blue can do 250 million
computations in a second. OK, I take their word for it. So it's not a
weakness of computers. Now, another argument
I sometimes got was, well, in
programs, we often have a section called the semantics
of natural language understanding programs. And that's right. But, of course, what they do
is they put in more computer implementation. They put in more syntax. Now, so far, so good. And I think if that's
all there was to say, I've said all of that before. But now I want to go on
to something much more interesting. And here goes with that. Now how we doing? I'm not-- everybody
seems to understand there's going to be plenty
of time for questions. I insist on a good
question period. So let me take a
drink of water, and we go to the next step, which
I think is more important. A lot of people
thought, well, look, maybe the computer doesn't
understand Chinese, but all the same, it does
information processing. And it does, after
all, do computation. That's what we define
the machine to do. And I had to review a
couple of books recently. One book said that
we live in a new age, the age of information. And in a wonderful
outburst, the author said everything is information. Now that ought to worry us
if everything is information. And I read another book. This was an optimistic book. I reviewed-- this for "The
New York Review of Books"-- a less optimistic book by a
guy who said computers are now so smart they're almost
as smart as we are. And pretty soon, they'll
be just as smart as we are. And then I don't have to tell
this audience the next step. They'll be much
smarter than we are. And then look out
because they might get sick of being oppressed by us. And they might simply rise
up and overthrow us all. And this, the author
said modestly-- I guess this is how
you sell books-- he said this may be the
greatest challenge that humanity has ever faced,
the upcoming revolt of super-smart computers. Now, I want to say both
of these claims are silly. I mean, I'm speaking
shorthand here. There'll be plenty of
chance to answer me. And I want to say briefly why. The notion of intelligence
has two different senses. It has an
observer-independent sense where it identifies something
that is psychologically real. So I am more intelligent
than my dog Tarski. Now, Tarski's pretty
smart, I agree. But overall, I'm
smarter than Tarski. I've had four
dogs, by the way-- Frege, Russell,
Ludwig, and Tarski. And Tarski, he's a
Bernese mountain dog. I'm sorry I didn't
bring him along, but he's too big for the car. Now, he's very smart. But he does have intelligence
in the same sense that I do. Only he happens to have
somewhat less than I do. Now, my computer is
also intelligent. And it also processes
information. But-- and this is the key
point-- it's observer-relative. The only sense in which the
computer has intelligence is not an intrinsic sense, but an observer-relative sense. We can interpret its
operations in such a way that we can make-- now, watch
this terminology-- we can make epistemically objective
claims of intelligence even though the
intelligence in question is entirely in the
eye of the beholder. This was brought
home forcefully to me when I read in the newspapers
that IBM had designed a computer program which could
beat the world's leading chess player. And in the same sense in
which Kasparov beat Karpov, so, we were told, Deep
Blue beat Kasparov. Now that ought to worry us
because for Karpov and Kasparov to play chess, they both
have to be conscious that they're playing chess. They both have to
know such things as I opened with
pawn to king four, and my queen is threatened
on the left-hand side of the board. But now notice, Deep
Blue knows none of that because it doesn't
know anything. You can make epistemically
objective claims about Deep Blue. It made such and such a move. But the attributions of
intelligent chess playing, this move or that move,
it's all observer-relative. None of it is intrinsic. In the intrinsic sense in
which I have more intelligence than my dog, my computer has
zero intelligence-- absolutely none at all. It's a very complex
electronic circuit that we have designed to
behave as if it were thinking, as if it were intelligent. But in the strict sense, in
the observer-independent sense in which you and I
have intelligence, there is zero intelligence
in the computer. It's all observer-relative. And what goes for
intelligence goes for all of the key notions
in cognitive science. The notions of intelligence,
memory, perception, decision-making,
rationality-- all those have two different
senses, a sense where they identify
psychologically real phenomena of the sort that goes on
in you and me, and a sense where they identify
observer-relative phenomena. But in the intrinsic
sense in which you and I have intelligence, the
machinery we're talking about has zero intelligence. It's no question of its
having more or less. It's not in the same
line of business. All of the intelligence is
in the eye of the beholder. It's all observer-relative. Now, you might say-- and I would
say-- so, for most purposes, it makes no difference at all. I mean, if you can design a
car that can drive itself, who cares if it's
conscious or not? Who cares if it literally
has any intelligence? And I agree. For most purposes,
it doesn't matter. For practical purposes, it
doesn't matter whether or not you have the
observer-independent or the observer-relative sense. The only point where
it matters is if you think there's some psychological
significance to the attribution of intelligence
to machinery which has no intrinsic intelligence. Now, notice the
intelligence by which we-- the mental processes by which
we attribute intelligence to the computer
require consciousness. So the attribution of
observer-relativity is done by conscious agents. But the consciousness is not
itself observer-relative. The consciousness that creates
the observer-relative phenomena is not itself observer-relative. But now let's get to
the crunch line then. If information is
systematically ambiguous between an intrinsic sense,
in which you and I have information, and an
observer-relative sense, in which the computer
has information, what about computation? After all, computation,
that must surely be intrinsic to the computer. That's what we designed and
built the damn things to do, was computation. But, of course, the same
distinction applies. And I want to take
a drink of water and think about
history for a moment. When I first read
Alan Turing's article, it was called "Computing
Machinery and Intelligence." Now why didn't he call it
"Computers and Intelligence"? Well, you all know the answer. In those days, "computer"
meant "person who computes." A computer is like a
runner or a piano player. It's some human who
does the operation. Nowadays nobody would
think that because the word has changed its meaning. Or, rather, it's acquired
the systematic ambiguity between the
observer-relative sense and the
observer-independent sense. Now we think that
a computer names a type of machinery, not a human
being who actually carries out computation. But the same distinction
that we've been applying, the same distinction
that we discovered in all these other
cases, that applies to computation in the literal
or observer-independent sense in which I will now
do a simple computation. I will do a computation
using the addition function. And here's how it goes. It's not a very big deal. One plus one equals two. Now, the sense in which I
carried out a computation is absolutely intrinsic
and observer-independent. I don't care what
anybody says about me. If the experts say, well,
you weren't really computing. No, I was. I consciously did a computation. When my pocket calculator
does the same operation, the operation is entirely
observer-relative. Intrinsically all that goes on
is a set of electronic state transitions that we
have designed so that we can interpret them computationally. And, again, to repeat, for most
purposes, it doesn't matter. When it matters is
when people say, well, we've created this race
of mechanical intelligences. And they might rise
up and overthrow us. Or they attribute some
other equally implausible psychological interpretation
to the machinery. In commercial computers,
the computation is observer-relative. Now notice, you all
know that doesn't mean it's epistemically subjective. And I pay a lot of
money so that Apple will make a piece of machinery
that will implement programs that my earlier computers
were not intelligent enough to implement. Notice the observer-relative
attribution of intelligence here. So it's absolutely
harmless unless you think there's some
psychological significance. Now what is lacking, of
course, in the machinery, which we have in human
beings which makes the difference
between the observer-relativity of the computation in the commercial computer and the intrinsic or observer-independent computation that I have just performed on
the blackboard, what's lacking is consciousness. All observer-relative
phenomena are created by human and
animal consciousness. But the human and
animal consciousness that creates them is not
itself observer-relative. So there's an intrinsic mental
phenomenon, the consciousness of the agent, which creates the observer-relative phenomena, or interprets the mechanical system in an observer-relative fashion. But the consciousness that
creates observer relativity is not itself observer-relative. It's intrinsic. Now, I wanted to save plenty
of time for discussion. So let me catch my
breath and then give a kind of summary of
the main thrust of what I've been arguing. And one of things I
haven't emphasized but I want to
emphasize now, and that is most of the apparatus,
the conceptual apparatus, we have for discussing these
issues is totally obsolete. The difference between the
mental and the physical, the difference between the
social and the individual, and the distinction between
those features which can be identified in an
observer-relative fashion, such as computation,
and those which can be identified in an
observer-independent fashion, such as consciousness. We're confused by the
vocabulary which doesn't make the matters sufficiently clear. And I'm going to
end this discussion by going through some of the
elements of the vocabulary. Now, let me have a drink of
water and catch my breath. Let's start with
that old question, could a machine think? Well, I said the
vocabulary was obsolete. And the vocabulary of
humans and machines is already obsolete
because if by machine is meant a physical system
capable of performing certain functions, then
we're all machines. I'm a machine. You're a machine. And my guess is only
machines could think. Why? Well that's the next step. Thinking is a biological
process created in the brain by certain quite complex,
but insufficiently understood neurobiological processes. So in order to think,
you've got to have a brain, or you've got to have something
with equivalent causal powers to the brain. We might figure out a way to
do it in some other medium. We don't know enough about
how the brain does it. So we don't know how to
create it artificially. So could a machine think? Human beings are machines. Yes, but could you make
an artificial machine that could think? Why not? It's like an artificial heart. The question, can you build
an artificial brain that can think, is like
the question, can you build an artificial
heart that pumps blood. We know how the
heart does it, so we know how to do it artificially. We don't know how the brain
does it, so we have no idea. Let me repeat this. We have no idea how
to create a thinking machine because we don't
know how the brain does it. All we can do is
a simulation using some sort of formal system. But that's not the real thing. You don't create
thinking that way, whereas the artificial heart
really does pump blood. So we had two questions. Could a machine think? And could an artificially-made
machine think? Answer the question
one is obviously yes. Answer to question two
is, we don't know yet, but there's no
obstacle in principle. Does everybody see that? Building an artificial
brain is like building an artificial heart. The only thing is no
one has begun to try it. They haven't begun to try it
because they have no idea how the actual brain does it. So they don't know how
to imitate actual brains. Well, OK, but could you build
an artificial brain that could think out of some
completely different materials, out of something
that had nothing to do with nucleo-proteins,
had nothing to do with neurons and
neurotransmitters and all the rest of it. And the answer is,
again, we don't know. That seems to me
an open question. If we knew how the
brain did it, we might be able to
define-- I mean, be able to design machines
that could do it using some completely different
biochemistry in a way that the artificial
heart doesn't use muscle tissue to pump blood. You don't need muscle
tissue to pump blood. And maybe you don't need brain
tissue to create consciousness. We just are ignorant. But notice there's no
obstacle in principle. The problem is no one has begun
to think about how you would build a thinking machine,
how you'd build a thinking machine out of some
material other than neurons because they haven't begun
to think about how we might duplicate and not
merely simulate what the brain actually does. So the question,
could a machine think, could an artificial
machine think, could an artificial
machine made out of some completely
different materials, could those machines think? And now the next question
is the obvious one. Well, how about a computer? Could a computer think? Now, you have to
be careful here. Because if a computer is defined
as anything that can carry out computations, well, I just did. This is a computation. So I'm a computer. And so are all of you. Any conscious agent
capable of carrying out that simple computation is
both a computer and capable of thinking. So my guess is-- and
I didn't have a chance to develop this idea-- is
that not only can computers think-- you and
me-- but my guess is that anything
capable of thinking would have to be
capable of carrying out simple computations. But now what is the
status of computation? Well, the key element here is
the one I've already mentioned. Computation has two senses,
an observer-independent sense and an observer-relative sense. In the observer-relative
sense, anything is a computer if you can ascribe
a computational interpretation to it. Watch. I'll show you a very
simple computer. That computer just computed
a well-known function. s equals one-half gt squared. And if you had a
good-enough watch, you could actually
time and figure out how far the damn thing fell. Everybody sees. It's elementary mathematics.
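For the record, the elementary mathematics being read into the falling object is just this, sketched in Python as an illustration (the gravitational constant is standard; the drop time is made up):

```python
# The "program" we interpret the dropped object as running: s = (1/2) * g * t^2.
G = 9.8  # m/s^2, standard Earth gravity

def distance_fallen(t_seconds: float) -> float:
    """Distance fallen from rest after t seconds, ignoring air resistance."""
    return 0.5 * G * t_seconds ** 2

# Time the drop with a good-enough watch and you can recover the distance:
print(distance_fallen(0.5))  # roughly 1.2 meters after half a second
```

So if this is a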
computer, then anything is a computer because
being a computer in the observer-relative
sense is not an intrinsic feature
of an object, but a feature of our
interpretation of the physics of the phenomenon. In the old Chinese
room days, when I had to debate these guys,
at one point, I'd take my pen, slam it on a table, and say
that is a digital computer. It just happens to have a
boring computer program. The program says stay there. The point is nobody
ever called me on this because it's obviously right. It satisfies a
textbook definition. You know, in the
early days, they tried to snow me with a whole
lot of technical razzmatazz. "You've left out the distinction
between the virtual machine and the non-virtual
machine" or "you've left out the transducers." You see, I didn't know what
the hell a transducer was, a virtual machine. But it takes about five
minutes to learn those things. Anyway, so now we get to the
crucial question in this. If computers can think,
man-made computers can think, machines can
think, what about computation? Does computation name a
machine, a thinking process? That is, is computation,
as defined by Alan Turing, is that itself
sufficient for thinking? And you now know
the answer to that. In the observer-relative
sense, the answer is no. Computation is not
a fact of nature. It's a fact of our
interpretation. And insofar as we can create
artificial machines that carry out computations,
the computation by itself is never going to be
sufficient for thinking or any other cognitive process
because the computation is defined purely formally
or syntactically. Turing machines are not
to be found in nature. They're to be found in our
interpretations of nature. Now, let me add, a lot
of people think, ah, this debate has something
to do with technology or there'll be
advances in technology. I think that
technology's wonderful. And I welcome it. And I see no limits to the
possibilities of technology. My aim in this talk is
simply to get across, you shouldn't misunderstand the
philosophical, psychological, and, indeed, scientific
implication of the technology. Thank you very much. [APPLAUSE] JOHN BRACAGLIA: Thank you, John. JOHN SEARLE: I'm
sorry I talk so fast, but I want to leave plenty
of time for questions. JOHN BRACAGLIA: We'll start
with one question from Mr. Ray Kurzweil. RAY KURZWEIL: Is this on? [INTERPOSING VOICES] RAY KURZWEIL:
Well, thanks, John. I'm one of those guys you've
been debating this issue with for 18 years, I think. And I would praise the
Chinese room for its longevity because it does really get
at the apparent absurdity that some deterministic
process like computation could possibly be responsible
for something like thinking. And you point out
the distinction of thinking between its effects
and the subjective states, which is a synonym
for consciousness. So I quoted you here in my
book "Singularity is Near," at the equivalence of neurons
and even brains with machines. So then I took your argument
why a machine and a computer could not truly understand
what it's doing and simply substituted human
brain for computers, since you said they
were equivalent, and neurotransmitter
concentrations and related mechanisms for formal
symbols, since basically neurotransmitter
concentrations, it's just a mechanistic concept. And so you wrote, with
those substitutions, the human brain
succeeds by manipulating neurotransmitter concentrations
and other related mechanisms. The neurotransmitter
concentrations and related
mechanisms themselves are quite meaningless. They only have the meaning
we have attached to them. The human brain knows
nothing of this. It just shuffles the
neurotransmitter concentrations and related mechanisms. Therefore, the human brain
cannot have true understanding. So-- [LAUGHTER] JOHN SEARLE: These are
interesting variations, again, on my original. RAY KURZWEIL: But the
point I'd like to make, and that I'd be interested
in your addressing, is the nature of
consciousness because, I mean, you said today, and you
wrote, the essential thing is to recognize
that consciousness is a biological process
like digestion, lactation, photosynthesis, or mitosis. We know that brains
cause consciousness with specific
biological mechanisms. But how do we know that
a brain is conscious? How do you know
that I'm conscious? And how do you-- JOHN SEARLE: [INAUDIBLE] RAY KURZWEIL: And how do we know
if a computer was conscious? We don't have a
computer today that seems conscious, that's
convincing in its responses. But my prediction is we will. We can argue about
the time frame. And when we do, how do we
know if it's conscious or if it just seems conscious? How do we measure that? JOHN SEARLE: Well, there
are two questions here. One is, if you do a substitution
of words that I didn't use and the words I did use, can
you get these absurd results? And, of course, you can do that. That's a well-known
technique of politicians. But that wasn't the claim. What is the difference between
the computer and the brain? In one sentence, the brain
is a causal mechanism that produces consciousness
by a certain rather complex and still imperfectly understood
neurobiological processes. But those are quite specific
to a certain electrochemistry. We just don't know the details. But we do know that if you mess
around in the synaptic cleft, you're going to
get weird effects. How does cocaine work? Well, it isn't because it's
got a peculiar computational capacity. Because it messes
with the capacity of the postsynaptic receptors
to reabsorb quite specific neurotransmitters,
norepinephrine-- what are the other two? God, I'm flunking the exam here. Dopamine. Gaba is the third. Anyway, the brain, like the
stomach or any other organ, is a specific causal mechanism. And it functions on specific
biochemical principles. The problem of
the computer is it has nothing to do
with the specifics of the implementation. Any implementation
will do provided it's sufficient to carry out
the steps in the program. Programs are purely
formal or syntactical. The brain is not. The brain is a specific
biological organ that operates on
specific principles. And to create a
conscious machine, we've got to know how to
duplicate the causal powers of those principles. Now, the computer
doesn't in that way work as a causal mechanism
producing higher level features. Rather, computation names an
abstract mathematical process that we have found ways to
implement in specific hardware. But the hardware is not
essential to the computation. Any system that can
carry out the computation will be equivalent. Now, the second
question is about how do you know about consciousness. Well, think about real life. How do I know my dog
Tarski is conscious and this thing here, my
smartphone, is not conscious? I don't have any doubts
about either one. I can tell that Tarski
is conscious not on behavioristic grounds. People say, well, it's because
he behaves like a human being. He doesn't. See, human beings
I know when they see me don't rush up and lick
my hands and wag their tails. They just don't. My friends don't do that. But Tarski does. I can see that
Tarski is conscious because he's got
a machinery that's relatively similar to my own. Those are his eyes. These are his ears. This is his skin. He has mechanisms that mediate
the input stimuli to the output behavior that are relatively
similar to human mechanisms. This is why I'm completely
confident that Tarski's conscious. I don't know anything
about fleas and termites. You know, your typical
termite's got 100,000 neurons. Is that enough? Well, I lose 100,000
on a big weekend. So I don't know if that's
enough for consciousness. But that's a factual question. I'll leave that to the experts. But as far as human
beings are concerned there isn't any question
that everybody in this room is conscious. I mean, maybe that guy over
there is falling asleep, but there's no question about
what the general-- it's not even a theory that I hold. It's a background
presupposition. The way I assume that
the floor is solid, I simply take it for granted
that everybody's conscious. If forced to
justify it, I could. Now, there's always a
problem about the details of other minds. Of course, I know
you're conscious. But are you suffering the
angst of post-industrial man under late capitalism? Well, I have a lot of
friends who claim they do. And they think I'm
philistine because I don't. But that's tougher. We'd have to have a
conversation about that. But for consciousness,
it's not a real problem in a real-life case. AUDIENCE: So you've
said that we haven't begun to understand how
brains work or build comparable machines. But imagine in the future we do. So we can run a simulation,
as you put it, of a brain. And then we interface
it with reality through motor output,
sensory input. What's the difference
between that and a brain, which
you say you know is producing consciousness? In JOHN SEARLE: In some cases,
there's no difference at all. And the difference
doesn't matter. If you've got a
machine-- I hope you guys are, in fact, building
it because the newspapers say you are. If you've got a
program that'll drive my car without a conscious
driver, that's great. I think that's wonderful. The question is not, what
can the technology do? My daddy was an electrical
engineer for AT&T. And his biggest
disappointment was I decided to be a
philosopher, for God's sake, instead of going to Bell
Labs and MIT as he had hoped. So I have no problem with the
success of the technology. The question is,
what does it mean? Of course, if
you've got a machine that can drive a
car as well as I, or probably better than
I can, then so much the better for the machinery. The question is, what is the
philosophical psychological scientific significance of that? And if you think, well,
that means you've created consciousness, you have not. You have to have more
to create consciousness. And for a whole lot of
things, consciousness matters desperately. In this case of this
book that I reviewed, where the guy said,
well, they got machines that are going to rise
up and overthrow us all, it's not a serious possibility
because the machines have no consciousness. They have no conscious
psychological state. It's about like saying the shoes
might get up out of the closet and walk all over us. After all, we've been walking
on them for centuries, why don't they strike back? It is not a real-life worry. Yeah? AUDIENCE: The difference that
I'm interested in-- sorry, the similarity I'm interested
in is not necessarily the output or the
outcome of the system, but rather, that is, it has
the internal causal similarity to the brain that you mentioned. JOHN SEARLE: Yeah, that's
a factual question. The question is, to what
extent are the processes that go on in the computer
isomorphic to processes that go on in the brain? As far as we know,
not very much. I mean, the
chess-playing programs were a good example of this. In the early days
of AI, they tried to interview great chess players
and find out what their thought processes were and
get them to try to duplicate that on computers. Well, we now know
how Deep Blue worked. Deep Blue can calculate
250 million chess positions in one second. See, chess is a trivial game
from a games theoretical point of view because you have
perfect information. And you have a finite
number of possibilities. So there are x number
of possibilities of responding to a
move and x number of possibilities for that move. It's interesting to us because
of the exponential problem.
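To put rough numbers on that exponential problem, here is a back-of-the-envelope sketch in Python. It is an illustration only; the branching factor of about 35 legal moves per chess position is a commonly cited average, not a figure from the talk.

```python
# Exhaustive look-ahead grows exponentially with search depth:
# positions examined is roughly branching_factor ** depth.
BRANCHING_FACTOR = 35  # commonly cited average number of legal moves in chess

def positions_to_examine(depth_in_plies: int) -> int:
    """Positions a brute-force search visits when looking ahead this many plies."""
    return BRANCHING_FACTOR ** depth_in_plies

for depth in (2, 4, 6):
    print(depth, positions_to_examine(depth))
# depth 2 -> 1,225; depth 4 -> about 1.5 million; depth 6 -> about 1.8 billion
```

And it's very hard to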
program computers that can go very many steps in
the exponents, but IBM did. It's of no
psychological interest. And to their credit,
the people in AI did not claim it as a great
victory for-- at least the ones I know didn't claim
it as a victory for AI because they could
see it had nothing to do with human cognition. So my guess is it's an
interesting philosophical question-- or psychological
question-- to what extent the actual processes
in the brain mirror a computational
simulation. And, of course, to
some respect, they do. That's why computational
simulations are interesting in
all sorts of fields and not just in
psychology, because you can simulate all sorts of
processes that are going on. But that's not strong AI. Strong AI says the simulation
isn't just a simulation. It's a duplication. And that we can refute. AUDIENCE: Could you prove to
me that you understand English? JOHN SEARLE: Yeah,
I wouldn't bother. (SPEAKING WITH BRITISH
ACCENT) When I was in Oxford, many people doubted that I did. I happened to be in a rather
snobbish college called Christ Church. And, of course, I
don't speak English. I never pretended to. I speak a dialect of American,
which makes many English people shudder at the thought. AUDIENCE: So you've said
you understand English, but how do I know you're
not just a computer program? JOHN SEARLE: Well, it's
the same question as Ray's. And the answer is
all sorts of ways. You know, if it got to a
crunch, you might ask me. Now I might give a
dishonest answer. Or I might give
an honest answer. But there's one route
that you don't want to go. And that's the epistemic route. The epistemic route
says, well, you have as much evidence that
the computer is conscious as that we have that
you are conscious. No, not really. I mean, I could go
into some detail about what it is about people's
physical structure that make them capable of
producing consciousness. You don't have to
have a fancy theory. I don't need a fancy
theory of neurobiology to say those are your eyes. You spoke through your mouth. The question was an expression
of a conscious intention to ask a question. Believe me, if you are a
locally produced machine, Google is further
along than I thought. But clearly, you're not. JOHN BRACAGLIA: We're going to
take a question from the Dory. JOHN SEARLE: Is he next? JOHN BRACAGLIA: We
had some people-- AUDIENCE: Almost. JOHN BRACAGLIA:
We had some people submit questions ahead of time. JOHN SEARLE: OK. JOHN BRACAGLIA: So we're
going to read those as well. JOHN SEARLE: OK. All right. Right. JOHN BRACAGLIA: So the
first question from the Dory is, what is the definition of
consciousness you've been using for the duration of this talk? JOHN SEARLE: OK. Here goes. JOHN BRACAGLIA: Please be
as specific as possible. JOHN SEARLE: It is typically
said that consciousness is hard to define. I think it's rather
easy to define. We don't have a
scientific definition because we don't have
a scientific theory. The commonsense
definition of any term will identify the target
of the investigation. Water is a clear,
colorless, tasteless liquid. And it comes in
bottles like this. That's the commonsense
definition. You do science and
you discover it's H2O. Well, with
consciousness, we're in the clear, colorless,
liquid, tasteless sense. But here it is. Consciousness consists
of all those states of feeling or
sentience or awareness that begin in the
morning when you awake from a dreamless sleep. And they go on all day
long until you fall asleep again or otherwise become, as
they would say, unconscious. On this definition, dreams
are a form of consciousness. The secret, the essence,
of consciousness is that for any
conscious state, there's something it feels like to
be in that conscious state. Now, for that reason,
consciousness always has a subjective ontology. Remember, I gave you that
subjective-objective bit. It always has a
subjective ontology. That's the working
definition of consciousness. And that's the one
that's actually used by neurobiological
investigators trying to figure out how the brain does it. That's what you're
trying to figure out. How does the brain produce that? How does it exist in the brain? How does it function? AUDIENCE: I'd like
to propose a stronger bound on your observation
that we do not know how to build a
thinking machine today. Even if we knew how to
build it, because, I mean, our thinking machine was built
by the process of evolution, I'd like to propose--
well, what do you think about stating
that, actually, we may not have the time? And that it actually
may not matter. The reason we may not have the
time is the probabilities that need to happen, like the
asteroid falling and wiping the dinosaurs and whatnot,
may not happen in the universe that we live in. But if you subscribe to the
parallel universes theory, then there is some artificial
consciousness somewhere else. JOHN SEARLE: Yeah. OK, about we may not have the
time, well, I'm in a hurry. But I think we ought to
try as hard as we can. It's true. Maybe some things are
beyond our capacity to solve in the life of
human beings on Earth. But let's get busy and try. There was a period when
people said, well, we'll never really understand life. And while we don't
fully understand it, but we're pretty far along. I mean, the old debate
between the mechanists and the vitalists, that doesn't
make any sense to us anymore. So we made a lot of progress. There was another
half to your question. AUDIENCE: It may not matter
because all universe-- JOHN SEARLE: Oh, yeah. Maybe conscious doesn't matter. Well, it's where I live. It matters to me. AUDIENCE:
Philosophically speaking. JOHN SEARLE: Yeah,
but the point is there are a lot of things
that may or may not matter which are desperately
important to us-- democracy and sex and
literature and good food and all that kind of stuff. Maybe it doesn't
matter to somebody, but all those things matter
to me in varying degrees. AUDIENCE: Your artificial heart
analogy that you mentioned. I think you included the
idea that it's possible, just like with the
artificial heart, that we use different materials
and different approaches to simulate a heart
and, in some ways, go beyond just-- come
closer to duplication, that we might, in theory,
be able to do the same thing with an artificial brain. I'm wondering if you
think it's possible that going down the
path just trying to do a simulation of a
brain accidentally creates a consciousness or accidentally
creates duplication, even if we don't intend to
do it with exact same means as a brain is made. JOHN SEARLE: I would say
to believe in that, you have to believe in miracles. You have to-- now
think about it. We can do computer simulations
of just about anything you can describe precisely. You do a computer
simulation of digestion. And you could get
a computer model that does a perfect
model of digesting pizza. For all I know, maybe somebody
in this building has done it. But once you've done that, you
don't rush out and buy a pizza and stuff it in the
computer because it isn't going to digest a pizza. What it gives you is
a picture or a model or a mathematical diagram. And I have no objection to that. But if my life depended
on figuring out how the brain produces
consciousness, I would use the
computer the way you use a computer in any
branch of biology. It's very useful
for figuring out the implications of your
axioms, for figuring out the possible experiments
that you could design. But somehow or
other that the idea that the computer simulation
of cognitive behavior might provide the key
to the biochemistry, well, it's not out
of the question, it's just not plausible. JOHN BRACAGLIA: Humans are
easily fooled and frequently overestimate the
intelligence of machines. Can you propose a better
test of general intelligence than the Turing test, one
that is less likely to generate false positives? JOHN SEARLE: Well,
you all know my answer to that is the first
step is to distinguish between genuine intrinsic
observer-independent intelligence and
observer-relative intelligence. And observer-relative
intelligence is always in the
eye of the beholder. And anything will
have the intelligence that you're able
to attribute to it. I just attributed a great deal
of intelligence to this object because it can
compute a function, s equals one-half gt squared. Now this object has
prodigious intelligence because it discriminates
one hair from-- I won't demonstrate
it, but in any-- take my word for it that it
does, even in a head that's sparse with hair. So because intelligence
is observer-relative, you have to tell me the
criteria by which we're going to judge it. And the problem with
the Turing test-- well, it's got all
sorts of problems, but the basic problem is that
both the input and the output are what they are only
relative to our interpretation. You have to interpret
this as a question. And you have to interpret
that as an answer. One bottom line of
my whole discussion today is that the
Turing test fails. It doesn't give you a
test of intelligence. AUDIENCE: So you seem to take
it as an article of faith that we are conscious,
that your dog is conscious, and that that
consciousness comes from biological material, the
likes of which we can't really understand. But forgive me for
saying this, that makes you sound like an
intelligent design theorist who says that because
evolution and everything in this creative
universe that exists is so complex, that
it couldn't have evolved from inert material. So somewhere between
an amoeba and your dog, there must not be consciousness. And I'm not sure where
you would draw that line. And so if consciousness
in human beings is emergent, or even in
your dog at some point in the evolutionary
scale, why couldn't it emerge from a
computation system that's sufficiently distributed,
networked, and has the ability to perform many calculations
and maybe is even hooked into biologic systems? JOHN SEARLE: Well, about could
it emerge, miracles are always possible. How do you know
that you don't have chemical processes
that will turn this into a conscious comb? How do I know that? Well, it's not a
serious possibility. I mean, the mechanisms
by which consciousness is created in the brain
are quite specific. And remember, this
is the key point. Any system that
creates consciousness has to duplicate
those causal powers. That's like saying, you don't
have to have feathers in order to have a flying machine,
but you have to duplicate and not merely simulate the
causal power of the bird to overcome the force of gravity
in the Earth's atmosphere. And that's what airplanes do. They duplicate causal powers. They use the same principle,
Bernoulli's principle, to overcome the
force of gravity. But the idea that somehow
or other you might do it just by doing a simulation
of certain formal structures of input-output mechanisms,
of input-output functions, well, miracles are
always possible. But it doesn't seem likely. That's not the way
evolution works. AUDIENCE: But machines
can improve themselves. And you're making the case
for why an amoeba could never develop into your dog over
a sufficiently long period of time and have consciousness. JOHN SEARLE: No, I
didn't make that case. No, I didn't make that case. [INTERPOSING VOICES] JOHN SEARLE: Amoeba
don't have it. AUDIENCE: You're refuting
that consciousness could emerge from a sufficiently
complex computation system. JOHN SEARLE: Complexity is
always observer-relative. If you talk about
complexity, you have to talk about the metric. What is the metric by which
you calculate complexity? I think complexity is
probably irrelevant. It might turn out that
the mechanism is simple. There's nothing
in my account that says a computer could
never become conscious. Of course, we're all conscious
computers, as I said. And the point about
the amoeba is not that amoebas can't evolve into
much more complex organisms. Maybe that's what happened. But the amoeba as it stands--
a single-celled organism-- that doesn't have enough
machinery to duplicate the causal powers of the brain. I am not doing a science
fiction project to say, well, there can never be an
artificially created consciousness by people busy
designing computer programs. Of course, I'm not saying
that's logically impossible. I'm just saying it's not
an intelligent project. If you're thinking
about your life depends on building a machine that
creates consciousness, you don't sit down at your console
and start programming things in some programming language. It's the wrong way
to go about it. AUDIENCE: If we gave you a
disassembly of Google Translate and had you implement the
Chinese room experiment, either it would
take you thousands of years to run all the assembly
instructions on pen and paper, or else you'd end up
decompiling it into English and heavily optimizing
it in that form. And in the process,
you'd come to learn a lot about the relationships
between the different variables and subroutines. So who's to say that an
understanding of Chinese wouldn't emerge from that? JOHN SEARLE: Well, OK, I
love this kind of question. All right. Now, let me say, of course,
when I did the original thought experiment, anybody will point
out to you if you actually were carrying out the
steps in a program for answering
questions in Chinese, well, we'd be around for
several million years. OK, I take their word for it. I'm not a programmer,
but I assume it would take an
enormous amount of time. But the point of the
argument is not the example. The example is
designed to illustrate the point of the argument. The point of the
argument can be given in the following derivation. Programs are formal
or syntactical. That's axiom number one. That's all there
is to the program. To put it slightly
more pretentiously, the notion 'same implemented program' defines an equivalence
class specified entirely formally or syntactically. But minds have a
semantics, and-- and this is the whole point of the
example-- the syntax by itself is not sufficient
for the semantics. That's the point of the example. The Chinese room is designed
to illustrate axiom three, that just having the steps in
the program is not by itself sufficient for a semantics. And minds have a semantics. Now, it follows from those
that if the computer is defined in terms of its
program operations, syntactical operations,
then the program operations, the computer operations
by themselves are never sufficient
for understanding because they lack a semantics. But, of course, I'm
not saying, well, you could not build a machine
that was both a computer and had semantics. We are such machines. AUDIENCE: You couldn't
verify experimentally what the difference might
be between semantics and what would
emerge from thousands of years of experience with
a given syntactical program. JOHN SEARLE: I think you
can-- I don't inherit this. He does. I think you don't want to
go the epistemic route. You don't want to
say, well, look you can't tell the difference
between the thinking machine and the non-thinking machine. The reason that's
the wrong route to go is we now have
overwhelming evidence of what sorts of
mechanisms produce what sorts of cognition. When I first got
interested in the brain, I went out and bought
all the textbooks. By the way, if you want
to learn a subject, that's the way to do it. Go buy all the
freshman textbooks because they're
easy to understand. One of these textbooks said cats have different color vision from ours. Their visual experiences
are different from ours. And I thought, Christ, have these guys been cats? Have they solved the other-minds problem for cats? Do they know what it's like to be a cat? And the answer is, of course, they know completely what the cat's color
vision is because they can look at the color receptors. And cats do have different
color vision from ours because they have
different color receptors. I forget the difference. You can look them
up in any textbook. But in real life
we're completely confident that my dog can hear
parts of the auditory spectrum that I can't hear. He can hear the higher
frequencies that I can't hear. And cats have a different
color vision from mine because we can see
what the apparatus is. We got another question? You're on. JOHN BRACAGLIA: This will
be our final question. JOHN SEARLE: OK. I'm prepared to
go all afternoon. I love this kind of crap. AUDIENCE: So at the
beginning of your talk, you mentioned an anecdote
about neuroscientists not being interested
in consciousness. And, of course, by this time,
a number of neuroscientists have studied it. And so they'll
present stimuli that are near the threshold
of perceptibility and measure the brain responses
when it's above or below. What do you think about that? Is that on the right track? What would you do differently? JOHN SEARLE: No, I think one
of the best things that's happened in my lifetime--
it's getting a rather long lifetime-- is
that there is now a thriving industry of
neuroscientific investigations of consciousness. That's how we will
get the answer. When I first got
interested in this, I told you I went over to UCSF
and told those guys get busy. The last thing
they wanted to hear was being nagged by some
philosopher, I can tell you. But one guy said to me--
a famous neuroscientist-- in my discipline,
it's OK to be interested in consciousness,
but get tenure first. Get tenure first. Now, there has been a change. I don't take credit
for the change, but I've certainly
been urging it. You can now get tenure by
working on consciousness. Now, neuroscience has changed, so that there's a thriving industry
in neuroscience of people who are actually
trying to figure out how the brain does it. And when they figure that out--
and I don't see any obstacle to figuring that
out-- it will be an enormous intellectual
breakthrough, when we figure out how
exactly does the brain create consciousness. AUDIENCE: But in
particular, that approach they're using now-- I
used the example of presenting stimuli that are near the
threshold of perceptibility and looking for
neural correlates, do you think that's
going to be fruitful? What particular questions
would you ask to find out? JOHN SEARLE: I happened to
be interested in this crap. And if you're
interested in my views, I published an article in the
"Annual Review of Neuroscience" with a title "Consciousness." It's easy to remember. You can find it on the web. And what I said is,
there are two main lines of research going on today. There are guys who take what
I call the building block approach. And they try to find
the neuronal correlate of particular experiences. You see a red object. Or you hear the
sound of middle C. What's the correlate
in the brain? And the idea they have
is if you can figure out how the brain creates
the experience of red, you've cracked
the whole problem. Because it's like DNA. You don't have to figure
out how every phenotype is caused by DNA. If you get the general
principles, that's enough. Now, the problem is they're
not making much progress on what I call the
building block approach. It seems to me a much
more fruitful approach is likely to be to think of
consciousness as coming in a unified field. Think of perception not
as creating consciousness, but as modifying
the conscious field. So when I see the red
in this guy's shirt, it modifies my conscious field. I now have an experience of
red I didn't have before. Most people-- and the
history of science supports them-- use the
building block approach because most of the
history of science has proceeded atomistically. You figure out how
little things work, and then you go to big things. They're not making much
progress with consciousness. And I think the reason
is you need to figure out how the brain creates
the conscious field in the first place because
particular experiences, like the perception of red
or the sound of middle C, those modify that
conscious field. They don't create a
conscious field from nothing. They modify an existing
conscious field. Now, it's much harder
to do that because you have to figure out how
large chunks of the brain create consciousness. And we don't know that. The problem is, in an MRI, the
conscious brain looks a lot like the unconscious brain. And there must be some
differences there. But at this point-- and I
haven't been working on it. I've been working
on other things. But I want somebody to
tell me exactly what's the difference between
the conscious brain and the unconscious brain that
accounts for consciousness. We're not there yet. However, what I'm doing here
is neurobiological speculation. I mean, I'm going
to be answered not by a philosophical
argument, but by somebody who does the hard research of
figuring out exactly what are the mechanisms in the brain
that produce consciousness and exactly how do they work. JOHN BRACAGLIA: John, it's
been an immense, immense honor to be here with you today. Thank you so much for your time. And thank you for
talking to Google. JOHN SEARLE: Well,
thank you for having me. [APPLAUSE]
I think the fact that the Chinese Room Argument is one of those things where both sides find their own positions so obvious that they can't believe the other side is actually making the claim it is making (we see Searle's disbelief here; to see the other side, see this Quora answer by Scott Aaronson), together with the fact that both sides are held by reasonable people, simply means that there are deeper conceptual issues that need to be addressed - an explanatory gap for the explanatory gap, as it were.
The problem Searle has is that he's making the argument that we don't know enough to say what mechanisms produce consciousness. But then he directly contradicts himself by claiming that a computer cannot produce consciousness. He can't have it both ways.
He's also constantly begging the question.
Finally, he deflects most of the serious criticisms presented by making jokes.
Not very impressive.
Love the guy, and this did provoke some interesting ideas.
Although I must say, I found it incredibly frustrating how he responded to the questions at the end. As far as I noticed, he rarely answered a question directly. Perhaps it was because he disagreed with its premise, but to me it appeared as if he would not acknowledge the question and would instead tell a story about a time someone wrote a textbook on the subject - a story that disregarded the question and was unrelated to the answer the audience wanted.
Maybe John Searle is a syntactical program with no grasp of semantics. Or maybe it's me? Either way, an enjoyable argument.
Searle lays out his view on consciousness and computation.
In the talk he recounts the origin of the Chinese room thought experiment, which I haven't heard elsewhere.
Interestingly, while discussing the Chinese room, he uses the question "What is the longest river in China?" as an example, which you can try asking Google (by voice) and expect an appropriate answer - a working Chinese room (a toy sketch of such a purely syntactic responder follows this comment).
In the crowd, listening, is Ray Kurzweil, who also asks the first question in the Q&A session.
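Searle's derivation earlier in the Q&A turns on the claim that a program is purely formal or syntactical. As a way of making that concrete, here is a minimal sketch of such a purely syntactic responder in Python; it is not Searle's own example, and the rule table is invented for illustration. It pairs question strings with answer strings by shape alone, including the longest-river question, with nothing in the process representing what any symbol means.

# A minimal sketch of a purely syntactic "Chinese room": questions and answers
# are paired character strings in a rule book; nothing here represents what
# any symbol means. The rule table is invented for illustration.
RULE_BOOK = {
    "中国最长的河流是什么？": "长江。",   # "What is the longest river in China?" -> "The Yangtze."
    "你会说中文吗？": "会。",             # "Can you speak Chinese?" -> "Yes."
}

def chinese_room(question: str) -> str:
    # Pure symbol matching: look the string up; no meaning is consulted.
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("中国最长的河流是什么？"))  # prints: 长江。

Whether an outside observer getting sensible answers from such a lookup licenses Searle's further conclusions is, of course, exactly what the comments below dispute.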
Ahh. Talking about the Chinese Room is the philosophical equivalent of debating politics. In the end it just makes both parties angry.
I read everybody's comments... sort of surprised and bemused that Searle continues to have sympathizers. While I can't speak for the plurality of philosophers of mind, it has always been my sense that he's in a shrinking coalition - with Chalmers, Dreyfus, Chomsky, Nagel (et al.) - Dennett calls them "the new mysterians" - that has elaborate arguments against Strong AI which convince very few. What they are best known for is provoking a giant response literature from philosophers who think the arguments are interesting, but specious.
Someone below suggested that intentionality is a cryptic notion. It isn't. It's easy-peasy, and obvious. Imagine a mercury thermometer that you put under your tongue to take your temperature. It has a specific shape, it has little lines and numbers on it, and the column of mercury inside behaves according to physical principles that the manufacturer understands. You don't have to know chemistry to use it or read it. The height of the column of mercury "behaves" rationally. The intentionality - the "aboutness" - of the thermometer is that it represents, literally stands in for, the meaning of your body temperature. It doesn't replace it, it doesn't emulate it, it represents it, rationally. It seems obvious to say the thermometer isn't conscious of temperature, it's just causally covariant with it. So then, why is the thermometer so smart? Because all the relevant knowledge is in the design of the thing.
Searle speaks of "original intentionality", which is something that only humans can have, because we're the tool makers. We imbue our things with their representational potential, so the derivatives never can have what we have. But this argument falls flat. We don't have a description of ourselves thorough enough to be convinced that we are conscious, or that there is anything "original" or "special" about our experience. It is unique to our species that we talk and use symbolic communication and have histories, a life cycle of starting out relatively non-rational and then learning to become "conscious-of" XYZ.
But for the same reason it is intuitive to say that animals and babies must have primordial consciousness if adult humans do, one can argue that nothing has consciousness, in the special, mysterious sense that troubles Searle, or that everything has consciousness. Panpsychists hold that consciousness HAS TO BE a property of matter.
For me, Dennett is the cure-all for these speculations. If you are sufficiently hard-nosed about the facts of neurology and cognition to the limit of present-day science, there are no strong reasons to insist that the Chinese Room doesn't understand Chinese. All you have to do is keep upgrading the parameters of the black box to respond to the various challenges. It's always operated by a finite rule book (see Wittgenstein on language games, and Chomsky on "discrete infinity" - you don't need a lookup table the size of the cosmos) by otherwise non-Chinese-understanding automatons. Point being, you can remodel the CR to satisfy a complaint about it, but the insistence by surly Searle is that changing the parameters doesn't help. So it's a philosophical impasse related to Searle's intransigence and disinterest in the alternative Weltanschauung.
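The parenthetical point above about a finite rule book and "discrete infinity" can be made concrete with a toy recursive grammar. The grammar below is invented for illustration and is only a sketch of the idea that finitely many rules generate an unbounded set of distinct sentences, so no cosmos-sized lookup table is required.

import random

# A tiny recursive grammar: a finite rule book that can generate an unbounded
# number of distinct sentences ("discrete infinity"). Rules are illustrative only.
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # second option recurses via VP -> NP
    "VP": [["V", "NP"]],
    "N":  [["cat"], ["dog"], ["philosopher"]],
    "V":  [["chased"], ["saw"], ["answered"]],
}

def generate(symbol="S"):
    # Symbols with no rule are terminal words.
    if symbol not in RULES:
        return [symbol]
    expansion = random.choice(RULES[symbol])
    return [word for part in expansion for word in generate(part)]

for _ in range(3):
    print(" ".join(generate()))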
Wow, such a level of respect for people who present the systems argument. He even admits that he cannot himself understand how syntax can be powerful enough to process semantics, much less be semantics.
Because he isn't able to conceive how this could be done, it must therefore - according to Searle - be impossible for every other human on the face of Earth to understand.
Has Searle, at any point in his career, named an epistemically observable process besides consciousness that is not Turing computable?
The guy at 58:40 sort of hints at this, though from the other direction.
What many seem to miss is that the person in the Chinese room is irrelevant. He simply follows the instructions, making no free-will choices of his own. The impression of intelligence for the observer comes from these instructions, which are presumably complex enough to model memory, emotions, language ability, individuality, etc. Therefore the only entity about whose intelligence we can argue is whoever made these instructions, and that entity is outside Searle's thought experiment.
It's like claiming that a phone is or isn't conscious when it translates someone's intelligent responses to your questions.
https://kaiteorn.wordpress.com/2016/02/21/chinese-room-what-does-it-disprove/
I really like David Thornley's response:
To see how flawed Searle's argument is, think of this: let's substitute the game of chess for Turing's imitation game. Do you think you could make a room full of books of moves and rules that a person inside the room could use to play the game of chess? No, you couldn't (at least not one that could win against a decent human player). Chess is too complex a problem to solve that way, as computer programmers have known for decades (a rough estimate of the numbers is sketched after this comment). There are too many possible moves to store every game state in memory (or volumes of books). That's why search and statistics (from previous games) are needed, as well as an understanding of the game of chess. My point is, the only way to make a computer program that can beat a human chess player is to have tons of data and an understanding of the game built into the program.
The imitation game is just as complex as chess, if not more so. Searle's fallacy is that he simplifies the problem, uses a simple solution to prove something (that the human in the room doesn't understand Chinese), and then uses that argument to conclude that an AI would never understand the conversations it was having even if it could win the imitation game.
Ask yourself, though: is it safe to say that a chess program that can beat the best human player doesn't understand chess? I think good chess programs do understand the game, I think that's the only way to solve the chess problem, and I think this proves Searle wrong.
At the very least this shows that Searle's answer to the systems reply (where he claims a single person could memorize all the possible responses to Chinese questions without understanding the language) is flawed.
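The infeasibility claim in the chess comparison above is easy to put rough numbers on. A minimal sketch, assuming the commonly cited approximations of about 35 legal moves per position, games of roughly 80 half-moves, and about 10^80 atoms in the observable universe (none of these figures come from the talk or the comment):

# Back-of-the-envelope estimate of why an exhaustive chess "rule book" is hopeless.
# The figures are standard rough approximations, used only for illustration.
branching_factor = 35        # typical number of legal moves in a position
plies_per_game = 80          # half-moves in a typical game

game_tree_size = branching_factor ** plies_per_game   # roughly 10^123 move sequences
atoms_in_universe = 10 ** 80

exponent = len(str(game_tree_size)) - 1
print(f"Game-tree size is roughly 10^{exponent}")
print(f"That exceeds the atom count of the observable universe by a factor of about 10^{exponent - 80}")

# Real programs therefore search only a few moves deep and score positions with
# an evaluation function (plus statistics from earlier games) rather than
# looking whole games up in a table.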