The following content is
provided under a Creative Commons license. Your support will help
MIT OpenCourseWare continue to offer high quality
educational resources for free. To make a donation or to
view additional materials from hundreds of
MIT courses, visit ocw.mit.edu. PROFESSOR: So really,
what my main concern has been for quite
a few years is to make some theory of what
makes people able to solve so many kinds of problems. I guess, if you ran through the
spectrum of all the animals, you'd find lots of problems
that some animals can solve and people can't, like how many
of you could build a beaver dam and/or termite nest. So there are all sorts
of things that evolution manages to produce. But maybe the most
impressive one is what the human
infant can do just by hanging around
for 10, or 20, or 30 years and watching what
other humans can do. So we can solve all
sorts of problems, and my quarrel with most of
the artificial intelligence community has been that the
great success of science in the last 500 years
really has been in physics. And it's been rewarded by
finding little sets of rules, like Newton's three
laws, and Maxwell's four laws, and Einstein's
one law or two, that explained a huge range
of everyday phenomena. Of course, in the
1920s and '30s, that apple cart got upset. Actually, Einstein
himself, who had discovered the first quantum
phenomena, namely the quantization of
photons, had produced various scientific
laboratory observations that were inexplicable in
terms of either Maxwell, or Newton, or Einstein's
earlier formulations. So my picture of the history
is that in the 19th century and a little bit earlier going
back to Locke, and Spinoza, and Hume, and a few of
those philosophers, even Immanuel Kant, they
had some pretty good psychological ideas. And as I mentioned
the other day, I suspect that
Aristotle was more like a modern
cognitive psychologist and had even better ideas. But we've probably
lost a lot of them, because there are
no tape recorders. Who knows what Aristotle and
Plato said that their students didn't write down? Because it sounded silly. The idea that we developed
around here, mostly, Seymour Papert, and
a lot of students-- Pat Winston was one of the
great stars of that period. --was the idea that
to get anything like human
intellectual abilities, you're going to have to
have all sorts of high level representations. So one has to say, the old
conditioned reflex of stimulus producing a response
isn't good enough. The stimulus has
to be represented by some kind of
semantic structure somewhere in the brain or mind. So far as I know, it's
only in the theories of not even modern
artificial intelligence, but the AI of the '60s,
and '70s, and '80s, that people thought
about what could be the internal representation
of the kinds of things that we think about. And even more important: with one of those representations, you see something, or you remember some incident. And your brain represents it in some way. And if that way doesn't
work, you take a breath. And you sort of stumble
around and find another way to represent it. Maybe when the original
event first happened, you represented it in
three or four ways. So we're beginning to see-- did anybody hear Ferrucci's talk? The Watson guy was up
here a couple of days ago. I missed it, but they haven't
made a technical publication as far as I know of how
this Watson program works. But it sounds like
it's something of an interesting society-of-mind-like structure, and it'd be nice if they would-- has anybody read any
long paper on it? There have been a
lot of press reports. Have you seen anything, Pat? Anyway, they seem to
have done some sorts of commonsense reasoning. As I said the other day, I doubt
that Watson could understand why you can pull something with
a string, but you can't push. Actually, I don't know
if any existing program can understand that yet. I saw some amazing
demonstrations Monday by Steve Wolfram of his
Wolfram Alpha, which doesn't do much common sense reasoning. But what it does do is,
if you put in a sentence, it finds five or 10
different representations, anything you can find
that's sort of mathematical. So when you ask a question,
it gives you 10 answers, and it's much better
than previous systems. Because it doesn't-- well,
Google gives you a quarter million answers. But that's too many. Anyway, I'm just going to
talk a little bit more, and everybody should be
trying to think of a question that the rest of the
class might answer. So there are lots of different
kinds of problems that people can solve going back
to the first one, like which moving object
out there is my mother and which might be
a potential threat. So there are a lot of kinds
of problems that we solve, and I've never
seen any discussion in psychology books of what
are the principal activities of common sense thinking. Somehow, they don't have-- or people don't--
before computers, there really wasn't any way
to think about high level thinking. Because there weren't any
technically usable ways to describe
complicated processes. The idea of a
conditional expression was barely on the
threshold of psychology, so what kinds of
problems do we have? And if you take some
particular problem, like I find these days, I
can't get the top off bottles. So how do I solve that? And there are lots of answers. One is you look for somebody
who looks really strong. Or you reach into your
pocket, and you probably have one of these and so on. There must be some way to put
it on the floor, and step on it, and kick it with the other foot. So there are lots of problems
that we're facing every day. And if you look in traditional
cognitive psychology-- well, what's the worst theory? The worst and the best theory
got popular in the 1980s, and it was called
rule-based systems. And you just have a big
library, which says, if you have a soda bottle and
you can't get the cap off, then do this, or
that, or the other. So some people decided, well,
that's really all you need. Rod Brooks in the
1980s sort of said, we don't need those
fancy theories that people, like
Minsky, and Papert, and Winston are working on. Why not just say for each
situation in the outer world have a rule that says how
to deal with that situation? Let's make a hierarchy
of them, and he described a system that sort
of looked like the priority interrupt system in a computer. And he won all sorts of prizes
for this really bad idea that spread around the world, but it solved a lot of problems.
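Here is a minimal sketch of that kind of arrangement, for illustration only (the rule names are invented, and this is hypothetical code, not Brooks's actual system): a fixed priority ordering of condition-action rules, where the highest-priority rule whose condition matches the situation gets to act.

# A minimal, hypothetical sketch of a fixed behavior hierarchy -- an
# illustration of the idea described above, not Brooks's actual system.
# The first rule whose condition matches the situation wins, the way a
# priority-interrupt controller services the highest-priority signal.

BEHAVIORS = [
    # (condition, action), listed from highest priority to lowest
    (lambda s: s.get("about_to_collide"), "swerve"),
    (lambda s: s.get("battery_low"), "seek_charger"),
    (lambda s: True, "wander"),  # default: fires when nothing above matches
]

def react(situation):
    for condition, action in BEHAVIORS:
        if condition(situation):
            return action  # higher layers preempt everything below them

print(react({"battery_low": True}))                            # seek_charger
print(react({"about_to_collide": True, "battery_low": True}))  # swerve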
There are things about priority interrupt that aren't obvious,
like suppose you have-- in the first computers,
there was some problem. Because what should
you do, if there's several signals coming
into the computer, and you want to respond to them? And some of the signals are
very fast and very short. Then you might think, well,
I should give the highest priority to the signal
that's going to be there the shortest time or
something like that. The funny part is that when
you made such a system, the result was that, if
you had a computer that was responding to some signal
that's coming in at a-- I'm talking about the days
when computers were only working at a few kilohertz, few
thousand operations a second. God, that's slow, a million times slower than what you have
in your pocket. And if you give priority
to the signals that have to be reacted
to very fast, then what happens if you
type to those computers? It would never see them,
because it's always-- I saw this happening once. And finally, somebody
realized that you should give the highest
priority to the inputs that come in least frequently,
because there's always-- otherwise, if there's something
coming in very frequently, you'll just always
be responding to it. Any of you run into this? It took me a while
to figure out why. Anyway, there are lots
of kinds of problems. And the other day,
I was complaining that we didn't have
enough ways to do this. We had hundreds of
words for emotions, and here's a couple of dozen. Most of these are actually in chapters seven and eight. So here's a bunch of words
for describing ways to think, but they're not very technical. So you can talk about
remorse, and sorrow, and blah, blah, blah. Hundreds and hundreds of
words for feelings, and it's a lot of effort to find a dozen
words for intellectual, for-- what should I call them?
--problem solving processes. So it's curious to me
that the great field called cognitive psychology has
not focused in that direction. Anyway, here's about
20 or 30 of them. And you'll find them scattered
through chapters seven and eight. Here's my favorite
one, and I don't know of any proper name for it. But if you're trying to solve
a problem, and you're stuck, and the example that
comes to my mind is, if I'm trying to
remember someone's name, I can tell when it's hopeless. And the reason is that
somehow or other, I know that there's a
huge tree of choices. That's one way to
represent what's going on, and I might know that-- I'm sure that name
has a Z in it. So you search around and
try everything you can. But of course, it
doesn't have a Z, so the way to solve that
problem is to give up. And then a couple of minutes
later, the name occurs to you. And you have no idea how
it happened and so forth. Anyway, the long story
is that Papert, and I, and lots of really great
students in the '60s and '70s spent a lot of time making
little models of problem solvers that didn't work. And we discovered that you needed something else, and we had to put that in. Other people would come
and say, that's hopeless. You're putting in more
things than you need. And my conclusion is that, wow,
it's the opposite of physics. In physics, you're
always trying to find-- what is it called? --Occam's razor. Never have more structure
than you need, because what? Well, it'll waste your time,
but my feeling was, never have less than you'll need. But you don't know
how many you'll need. So what I did, I
had four of these, and then I forced myself
to put in two more. And people ask, what's the
difference between self models and self-conscious processes? And I don't care. Well, what's the
difference between self-conscious and reflective? I don't care. And the reason is
that, wow, it's nice to have a box
that isn't full yet. So if you find something
that your previous theory-- going back to Brooks,
he was so successful getting simple robots to
work that he concluded that the things didn't need
any internal representations at all. And for some mysterious reason,
the Artificial Intelligence Society gave him their annual
big prize for this very wrong idea, and it caused AI research
to sort of half collapse in places like Japan. He said, oh, rule-based
systems is all we need. Anybody want to defend him? The odd thing is, if
you talk to Brooks, he's one of the best
philosophers you'll ever meet. And he says, oh yes, of
course, that's wrong, but it helps people do
research and get things done. And as, I think, I
mentioned the other day when the Three Mile Island thing happened, there was no way to
get into the reactor. That was 1980. And 30 years later when the-- how do you pronounce it? --Fukushima accident
happened, there was no robot that could
go in and open a door. I don't know who
to blame for that. Maybe us. But my picture of the
history is that the places that did research on robotics,
there were quite a few places. And for example,
Carnegie Mellon was very impressive in getting
the Sony dogs to play soccer, and they're still at it. And I think I mentioned that
Sony still has a stock of-- what's it called? AUDIENCE: AIBOs. PROFESSOR: Say it again. AUDIENCE: AIBOs. PROFESSOR: FIBO? AUDIENCE: AIBO, A-I-B-O. PROFESSOR: All right,
AIBOs, but the trouble is they're always broken. There was a robot here
called Cog that Brooks made, and it sometimes worked. But usually, it wasn't
working, so only one student at that time could
experiment with the robot. What was that wonderful project
of trying to make a walking machine for four years in-- there was a project
to make a robot walk. And there was only one of it,
so first, only one student at a time can do research on it. And most of the time,
something's broken, and you're fixing it. So you end up that you sort
of get five or 10 hours a week on your laboratory
physical robot. At the same time,
Ed Fredkin had a student who tried to
make a walking robot, and it was a stick
figure on the screen. I forgot the student's name. But anyway, he simulated
gravity and a few other things. And in a couple of weeks,
he had a pretty good robot that could walk, and go
around turns, and bank. And if you simulated
an oily floor, it could slip and fall, which
we considered the high point of the demo actually. So there we find-- anyway, I've sort of
asked you to read my two books for this course. But those are not
the only good texts about artificial intelligence. And if you want to dig deeper,
it might be a good idea to go to the web and type in
Aaron Sloman, S-L-O-M-A-N. And you'll get to his website,
which is something like that. And Sloman is a sort of
philosopher who can program. There are a handful
of them in the world, and he has lots of
interesting ideas that nobody's
gotten to carry out. So I recommend. Who else is-- Pat, do you ever
recommend anyone else? PAT: No. PROFESSOR: What? I'm trying to think. I mean, if you're
looking for philosophers, Dan Dennett has a lot of ideas. But Sloman is the
only person, I'd say, is a sort of real
professional philosopher, who tries to program, at
least, some of his ideas. And he has successful
students, who have made larger systems work. So if you get tired of
me, and you ought to, then go look at this guy,
and see who he recommends. OK, who has a good
question to ask? AUDIENCE: So Marvin, you were talking about how we have a lot of words for emotions. Why do we only have
one word for cause? PROFESSOR: It's a
mystery, but I spent most of the last couple of days
making this list bigger. But these aren't-- you know,
these are things that you do when you're thinking. You make analogies. If you have multiple
goals, you try to pick the most important one. Or in some cases, if
you have several goals, maybe you should try to
achieve the easiest one, and there's a chance that
it will lead you into what to do about the harder ones. But a lot of people
think, mostly in England, that logic is a good
way to do reasoning, and that's completely wrong. Because in logic, first of all,
you can't do analogies at all, except at a very high level. It takes four or five
nested quantifiers to say, A is to B as C is to which
of the following five. So I've never seen anyone
do analogical thinking using formal logic, first-order
or higher-order predicate calculus.
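One way to see the cost, in a second-order notation (this formalization is an illustration, not one from the lecture):

    \exists R \, [\, R(A,B) \wedge R(C,D) \,]

"A is to B as C is to D" says that some relation R holds of both pairs. That quantifier ranges over relations -- already beyond first-order logic -- and replacing R with an explicit first-order description is what costs the four or five nested quantifiers.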
What's logic good for? It's great after you've solved a problem. Because then you can
formalize what you did and see if some of the things
you did were unnecessary. In other words, after
you've got the solution to a problem, what you've got
by going through a big search, you finally found
a path from A to Z. And now, you can see if
the assumptions that you had to make to bridge all
these various little gaps were all essential or not. Yes? AUDIENCE: What kind
of examples would you say show that logic can't do analogies? Like, well, water is [INAUDIBLE]
containment, like why [INAUDIBLE]? PROFESSOR: Well,
because you have to make a list of
hypotheses, and then let me see if I can find Evans. The trouble is-- darn,
Evans's name is in a picture. And Word can't look
inside its pictures. Can PowerPoint find words
in its illustrations? Why don't I use PowerPoint? Because I've discovered that
PowerPoint can't read pictures made by other programs in
the Microsoft Word suite. The drawing program in
Word is pretty good, and then there's an
operation in Word, which will make a PowerPoint
out of what you drew. And in 25 years, Microsoft hasn't fixed the fatal errors
that it makes when you do that. In other words, I don't think
that the PowerPoint and Word people communicate. And they both make
a lot of money, so that might be the reason. Where was I?
can't do [INAUDIBLE].. PROFESSOR: Well, you can
do anything in logic, if you try hard
enough, but A is to B as C is to X is a
four part relation. And you'd need a whole
pile of quantifiers, and how would you
know what to do next? Yes? AUDIENCE: Talk a bit about the
situation in which we are able to perform some sort of action,
like really fluently and really well, but we cannot
describe what we're doing. And the example I give is, say,
I'm an expert drummer from Africa, and I can make
these really complicated rhythms. But if you asked me,
what did you just do? I'd have no idea how
to describe it. And in that case, do you think
the person is capable of-- I guess, do you
think the person-- we can say that the
person understands this, even though they
cannot explain it. PROFESSOR: Well, if you take
an extreme form of that, you can't explain why you
used any particular word for anything. There's no reason. It's remarkable how well
people can do in everyday life at telling people how
they got an idea. But when you look
at it, it doesn't say how you would program
a machine to do it. So there's something very
peculiar about the idea that-- it goes back to this idea
that people have free will and so forth. Suppose, I say, look
at this and say, this has a constriction
at this point. Why did I say constriction? How do you get any-- how do you decide what
word to use for something? You have no idea, so it's
a very general question. It's not clear that
the different parts of the frontal lobes,
which might have something to do with making
plans and analyzing certain kinds of situations,
have any access to what happens in the Broca or-- what's the speech
production area? Broca, and I'm trying to find
the name of the other one. It's connected by a cable that's
about a quarter inch thick. AUDIENCE: Is that the Wernicke? PROFESSOR: Wernicke, yeah. We have no idea how those
work; at least, I've never seen any publication in
neuroscience that says, here's a theory of what
happens in Wernicke's area. Have any of you ever seen one? What do those people
think about it, what they'll tell you about? I was reading
something, which said, it's going to be very hard
to understand these areas. Because each neuron is connected
to 100,000 little fibers. Well, some of them are. And I bet they don't do
much, except sort of set the bias for some large
collection of other neurons. But if you ask somebody, how
did you think of such a word? They will tell you
some story or anecdote. But they won't be
able to describe some sort of procedure,
which is, say, in terms of a
language, like Lisp. And say, I did this and that, and I took the cdr of this and the car of that. And I put them in this register,
and then I swapped that with-- You don't see theories
of how the mind works in psychology today. The only parts where they know a little bit are some aspects of
vision, because you can track the paths of
images from the retina to what's called the
primary visual cortex. And people have been
able to figure out what some of those
cortical columns do. And if you go back to an
animal, like the frog, then researchers, like
[? Bizzi ?] and others, have figured out how the equivalent of the cerebellum in the frog works. They've got almost
the whole circuit of how when the
frog sees a fly, it manages to turn
its head that way, and stick its tongue
out, and catch it. But in the case of
a human, I've never seen any theory of how any
person thinks of anything. There's artificial
intelligence, which has high level theories of
semantic representations. And there's
neuroscience, which has good theories of some
parts of locomotion and some parts of
sensory systems. And to this day, there's
nothing much in between. David, here, has decided to
go from one to the other, and a former student
of mine, Bob Hearn, has done a little bit on both.
people around the country, who are trying to bridge the gap
between symbolic artificial intelligence and mappings
of the nervous system. But it's very rare,
and I don't know who you could ask to get support
to work on a problem like that for five years. Yeah? AUDIENCE: So presumably
to build a human-level artificial intelligence, we need to perfectly model our own intelligence, which means that we are the system. We ourselves are the system that we're trying to understand. PROFESSOR: Well, it
doesn't have to be exactly. I mean, people are different,
and the typical person looks like they have
400 different brain centers doing slightly
different things or very different things. And we have these examples. In many cases, if you
lose a lot of your brain, you're very badly damaged. And in other cases, you recover
and become just about as smart as you were. There's probably a few
cases, where you got rid of something that
was holding you back, but it's hard to prove that. We don't need a theory
of how people work yet, and the nice thing about AI
is that we could eventually get models, which are
pretty good at solving what people call everyday
common sense problems. And probably in many
respects, they're not the way the human mind
works, but it doesn't matter. But once you've got-- if I had a program, which was
pretty good at understanding why you can pull with
a string but not push, then there's a fair chance
you could say, well, that seems to resemble
what people do. I'll do a few
psychological experiments and see what's wrong with that
theory and how to change it. So at some point, there'll
be people making AI systems, comparing them to
particular people, and trying to make them fit. The trouble is nowadays,
it takes a few months, if you get a really good
new idea, to program it. I think there's something wrong
with programming languages, and what we need is a-- we need a programming language,
where the instructions describe goals and then subgoals. And then finally,
you might say, well, let's represent this concept by
a number or a semantic network of some sort. Yes? AUDIENCE: That idea of having
a programming language where you define goals. PROFESSOR: Is there a
goal oriented language? AUDIENCE: So there
is kind of one. If you think about it,
if you squint hard enough at something, like SQL,
where you tell it here, I want to find the top
10 people in my database with this high value. And then you don't worry
about how the system goes about doing that. In a sense, that's redefining
your goal [INAUDIBLE].. But you got to
switch a little bit. PROFESSOR: What's it called? AUDIENCE: SQL. PROFESSOR: SQL. AUDIENCE: [INAUDIBLE] database
and curates it [INAUDIBLE].. PROFESSOR: Oh, right. Yes, I guess database query
languages are on the track, but Wolfram Alpha seems to
be better than I thought. Well, he was running
it, and Steve Wolfram was giving this demo at a
meeting we were at on Monday. And he'd say, well,
maybe I'll just say this, and it always worked. So maybe either the language
is better than I thought, or Wolfram is better than
I thought or something. Remarkable guy. Yes? AUDIENCE: So I liked
this example of you only remember a name after
you've given up consciously trying to think about it. Do you think this is
a matter of us being able to set up background processes, and then there's some delay-- like we give off-- there's some delay in the process, where we don't
have the ability to correctly terminate processes. Do you think this
only works for memory, or could it work
for other things? Like could I start an
arithmetic operation, and then give up, and then
it'll come to me later? PROFESSOR: Well, there's a lot
of nice questions about things like that. How many processes can you
run at once in your brain? And I was having a sort
of argument the other day about music, and I
was wondering if-- I see a big difference
between Bach and the composers
who do counterpoint. Counterpoint, you usually
have several versions of a very similar idea. Maybe there's one theme,
and you have it playing. And then another voice comes in. And it has that
theme upside down, or a variation of it, or in
some cases, exactly the same. And then it's called a canon. So the tour de force
in classical music is when you have two, or
three, or four versions of the same thought going on
at once at different times. And my feeling was
that in popular music, or if you take a
typical band, then there might be four people. And they're doing different
things at the same time. Usually, not the
same musical tunes. But there's a rhythm,
and there's a tympani. And there's various instruments
doing different things, but you don't have several
doing the same thing. I might be wrong, and
somebody said, well, some popular music has
a lot of counterpoint. I'm just not familiar with it. But I think that's-- if you're trying to
solve a hard problem, it's fairly easy to
look at the problem in several different ways. But what's hard is to
look at it in several almost the same ways that
are slightly different. Because probably, if you
believe that the brain is made of agents, or
resources, or whatever, you probably don't have
duplicate copies of ones that do important things. Because that would take
up too much real estate. Anyway, I might be
completely wrong about jazz. Somebody, maybe
they have just as complicated overlapping
things as Bach and the contrapuntal
composers did. Yeah? AUDIENCE: What is
the ultimate goal of artificial intelligence? Is it some sort of application,
or is it more philosophical? PROFESSOR: Oh, everyone has
different goals for it. AUDIENCE: In your opinion.
we're going to need it, because the disaster that
we're working our way toward is that people are
going to live longer. And they'll become
slightly less able, so we'll have billions
of 200-year-old people who can barely get around. And there won't be
enough people to import from underdeveloped countries to care for them, or they won't be
able to afford them. So we're going to have to have
machines that take care of us. Of course, that's
just a transient. Because at some
point, then you'll download your brain
into a machine and fix everything that's wrong. So we'll need robots for a
few years or a few decades. And then we'll be them, and
we won't need them anymore. But it's an important problem. What's going to happen
in the next 100 years? You're going to have 20 billion
200-year-olds and nobody to take care of them,
unless we get AI. Nobody seems particularly
sad about that. How long-- oh, another anecdote. I was once giving a lecture
and talking about people living a long time. And nobody in the audience
seemed interested, and I'd say, well, suppose
you could live 400 years. And most of the people-- then I asked, what
was the trouble? They said, wouldn't
it be boring? So then I tried it, again, in
a couple of other lectures. And if you ask a bunch
of scientists, how would you like to live
400 years? Everyone says, yay,
and you ask them why. And they say, well, I'm
working on a problem that I might not
have time to solve. But if I had 400 years, I bet
I could get somewhere on it, and the other people
don't have any goal. That's my cold blooded view
of the typical non-scientist. There's nothing for them
to do in the long run. Who can think of what
should people do? What's your goal? How many of you want
to live 400 years? Wow, there must be
scientists here. Try it on some crowd and
let me know what happens. Are people really afraid? Yeah? AUDIENCE: I think the
differentiating factor is whether or not your
400 years is just going to be the repetition
of 100 years experience, or if it'll start
to like take off, then you'll start
to learn better. You'll progress. PROFESSOR: Right. I've seen 30 episodes of The Big Bang Theory, and I don't look forward
to the next one anymore. Because they're getting
to be all the same. Although, it's the only thing
on TV that has scientists. Seriously, I hardly
read anything, except journals and
science fiction. Yeah? AUDIENCE: What's the
motivation to have robots take care of us as we age, as opposed to enhancing our own cognitive abilities, or our prosthetic bodies, or something more societal? What's the joy of living,
if you can't do anything, and somebody takes care of you? PROFESSOR: I can't
think of any advantage, except that medicine
isn't getting-- you know, the age of
unhandicapped people went up one year every four since the late 1940s. So the lifespan is-- so that's 60 years. So people are living 15
years longer on the average than they did when I was
born or even more than that. But it's leveled off lately. Now I suspected you only
have to fix a dozen genes, or who knows? Nobody really has
a good estimate, but you can probably double
the lifespan, if you could fix them. Nobody knows, but maybe
there's just a dozen processes that would fix a lot of things. And then you could live
longer without deteriorating, and lots of people
might get bored. But they'll self select. I don't know. What's your answer? AUDIENCE: I feel
that AI is more-- the goal is not to help
take care of people, but to complement what we
already have to entertain us. PROFESSOR: You could also look
at them as our descendants. And we will have them replace
us, just as a lot of people consider their children to be the next generation of themselves.
don't, so it's not a universal. What's the point of anything? I don't want to get in-- we might be the only intelligent
life in the universe. And in that case,
it's very important that we solve all our
problems and make sure that something
intelligent persists. I think Carl Sagan had
some argument of that sort. If you were sure that
there were lots of others, then it wouldn't
seem so important. Who is the new Carl Sagan? Is there any? Is there a public scientist? AUDIENCE: [INAUDIBLE]. PROFESSOR: Who? AUDIENCE: He's the guy who
is on Nova all the time. PROFESSOR: Oh, Tyson? AUDIENCE: Brian Greene. PROFESSOR: Brian Greene, he's very good. Tyson is the astrophysicist. Brian Greene is a great actor. He's quite impressive. Yeah? AUDIENCE: When would you say
a routine has a sense of self? Like when you think
there's something that like a self inside
us, partly, because there's some processes [INAUDIBLE]. But when would you
say [INAUDIBLE]? PROFESSOR: Well, I think
that's a funny question. Because if we're programming
it, we can make sure that the machine has a
very good abstract, but correct model of how it
works, which people don't. So people have a sense of self,
but it's only a sense of self. And it's just plain wrong
in almost every respect. So it's a really funny question. Because when you make
a machine that really has a good useful representation
of what it is and how it works, it might be quite different,
have different attitudes than a person does. Like it might not consider itself very valuable and say, oh, I
could make something that's even better than
me and jump into that. So it wouldn't have the-- it might not have any
self protective reaction. Because if you could
improve yourself, then you don't want not to. Whereas we're in a state,
where there's nothing much we could do, except
try to keep living, and we don't have
any alternative. It's a stupid thing to say. I can't imagine getting tired of
living, but lots of people do. Yeah? AUDIENCE: What do you think
about creative thinking as a way of thinking? And where does this
thinking completely come from or anything
that comes after? PROFESSOR: I had a little
section about that somewhere that I wrote, which was the
difference between artists and scientists or engineers. And engineers have a
very nice situation, because they know
what they want. Because somebody's
ordered them to make a-- in the last month,
three times, I've walked away from my computer. How many of you have a Mac
with the magnetic thing? And three times, I pulled
it by tripping on this, and it fell to the
floor and didn't break. And I've had Macs for 20
odd years or since 1980-- when did they start? 30 years. And they had the regular jack power supply in the old days. And I don't remember. And usually, when you pull
the cord, it comes out. Here is this cord that Steve
Jobs and everybody designed very carefully, so
that when you pull it, nothing bad would happen. But it does. How do you account for that? AUDIENCE: It used to be
better when the old plugs were perpendicular to the laptop,
and now it's kind of-- PROFESSOR: Well, it's
quite a wide angle. AUDIENCE: Right, so it
works at a certain angle. The cable now instead of
naturally lying in that area actually naturally lies in the
area where it doesn't work. PROFESSOR: Well, what it
needs is a little ramp, so that it would slide out. I mean, it would only take
a minute to file it down, so that it would slide out. AUDIENCE: Right. PROFESSOR: But they didn't. I forget why I
mentioned that, but-- AUDIENCE: [INAUDIBLE]. PROFESSOR: Right, so
what's the difference between an artist and an engineer? Well, when you do a
painting, it seems to me, if you're already
good at painting, then 9/10ths of the problem
is, what should I paint? So you can think of
an artist as 10% skill and 90% trying to figure out
what the problem is to solve. Whereas for the engineer,
somebody's told him what to do, make a better cabled connector. So he's going to spend 90%
of his time actually solving the problem and only 10% of
the time trying to decide what problem to solve. So I don't see any difference
between artists and engineers, except that the artist
has more problems to solve than he
could possibly solve and usually ends up by
picking a really dumb one, like let's have a
Saint and three angels. Where will I put
the third angel? That's the engineering part. It's just improvising, so to
me, the Media Lab makes sense. The artists, or semi-artists, and the scientists are doing almost the same thing.
the more arty people, they're a little more concerned
with human social relations and this and that. And others are more concerned
with very technical, specific aspects of signal
processing or semantic representations and so on. So I don't see much
difference between the arts and the sciences. And then, of course,
the great moments are when you run into people,
like Leonardo and Michelangelo, who get some idea that
requires a great new technical innovation that
nobody has ever done. And it's hard to separate them. I think there's some place,
where Leonardo realizes that the lens in the eye would
mean that the image is upside down on the retina, and
he couldn't stand that. So there's a diagram
he has, where the cornea is curved
enough to invert the image, and then the lens
inverts it back again, which is contrary to fact. But he has a sketch showing
that he was worried about, if the image were upside
down on the retina, wouldn't things
look upside down? AUDIENCE: [INAUDIBLE] question. Did you ever hear about
[INAUDIBLE] temporal memory, like-- PROFESSOR: Temporal? AUDIENCE: Temporal
memory, like there is a system that [INAUDIBLE]
at the end of this each year on it. And there's some research. They have a paper on it. PROFESSOR: Well,
I'm not sure what-- AUDIENCE: This is
Jeff Hawkins project? I don't know. Yeah, it's Jeff Hawkins. PROFESSOR: I haven't heard. About 10 years ago, he said-- Hawkins? AUDIENCE: Yeah, Hawkins. PROFESSOR: Yeah, well, he was
talking about 10 years ago, how great it was, and I haven't
heard a word of any progress. Is there some? Has anybody heard-- there's
a couple of books about it. But I've never seen any
claim of that it works. They wrote a ferocious review
of the Society of Mind, which came out in 1986. And the Hawkins
group existed then and had this talk about a
hierarchical memory system. AUDIENCE: [INAUDIBLE]. PROFESSOR: As far as I can
tell, it's all a bluff. Nothing happened. I've never seen a report that
they have a machine, which solved a problem. Let me know if you
find one, because-- oh well. Hawkins got really mad at
me for pointing this out, but I was really mad at him for
having four of his assistants write a bad book
review of my book. So I hope we were even. If anybody can
find out whether-- I forget what it's called. Do you remember its name? AUDIENCE: [INAUDIBLE]. PROFESSOR: Well, let's find
out if it can do anything yet. Hawkins is wealthy enough to
support it for a long time, so it should be good by now. Yes? AUDIENCE: Do you think that's
going to solve the problem? People first start out with some
sort of classification in their head of the kind of problem it
is, or is that not necessary? PROFESSOR: Yes, well,
there's this huge book called Human Problem
Solving, which I don't know how
many of you know the names of Newell and Simon. Originally, it was
Newell, Shaw, and Simon. Believe it or not,
in the late 1950s, they did some of the first
really productive AI research. And then, I think,
in 1970, so that's sort of after 12 years of
discovering interesting things. Their main discovery
was the gadget that they called GPS, which
is not global positioning satellite, but general
problem solver. And you can look it up
in the index of my book, and there's a sort of one
or two page description. But if you ever get
some spare time, search the web for their early
paper by Newell and Simon on how GPS worked. Because it's really fascinating. What it did is it
looked at a problem, and found some features of it,
and then looked up in a table saying that, if there's
this difference between what you have and what you want,
use such and such a method.
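To illustrate the scheme, here is a toy means-ends analyzer in the spirit of that description. This is a minimal reconstruction for illustration only, not Newell, Shaw, and Simon's program; the bottle-cap domain and operator names are invented. States are sets of facts, a "difference" is a goal fact the current state lacks, and a table maps each difference to an operator that removes it, recursing on the operator's preconditions.

# A toy means-ends analyzer in the spirit of GPS, as described above.
# This is a minimal reconstruction, not the original program; the
# bottle-cap domain and operator names are invented for illustration.

OPERATORS = {
    "fetch-opener": {"pre": set(), "add": {"have-opener"}},
    "pry-cap": {"pre": {"have-opener"}, "add": {"cap-off"}},
}

# The difference table: which operator reduces which difference.
REDUCES = {"have-opener": "fetch-opener", "cap-off": "pry-cap"}

def solve(state, goal, plan=(), depth=10):
    """Return a sequence of operators turning state into a superset of goal."""
    if depth == 0:
        return None
    diff = goal - state
    if not diff:
        return list(plan)            # no differences left: done
    d = sorted(diff)[0]              # pick one difference to work on
    name = REDUCES.get(d)
    if name is None:
        return None                  # no operator reduces this difference
    op = OPERATORS[name]
    # Subgoal: first achieve the operator's preconditions, then apply it.
    sub = solve(state, state | op["pre"], plan, depth - 1)
    if sub is None:
        return None
    new_state = state | op["pre"] | op["add"]
    return solve(new_state, goal, tuple(sub) + (name,), depth - 1)

print(solve(set(), {"cap-off"}))     # ['fetch-opener', 'pry-cap']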
So I renamed it a difference
engine as a sort of joke, because the first
computer in history was the one called
the difference engine. But it was for predicting
tides and things. Anyway, they did
some beautiful work. And there's this big book,
which I think is about 1970, called Human Problem Solving. And what they did is got some
people to solve problems, and they trained the people
to talk while they're solving the problem. So some of them were
little cryptograms, like if each letter stands for
a digit, I've forgotten it. Pat, do you remember the
name, one of those problems? John plus Joe-- John plus Jane equals
Robert or something. I'm sure that has no
solution, but those are called cryptarithmetic. So they had dozens
or hundreds of people who would be trained to talk
aloud while they're solving little puzzles like that.
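For the record, here's how small those puzzles are computationally -- a brute-force solver offered as illustration, not anything from Newell and Simon's work; the classic SEND + MORE = MONEY is used since the "John plus Jane" example probably has no solution.

# A minimal brute-force cryptarithmetic solver -- an illustration of the
# kind of puzzle described above, not anything from Newell and Simon.

from itertools import permutations

def solve(a, b, c):
    letters = sorted(set(a + b + c))    # at most 10 distinct letters
    for digits in permutations(range(10), len(letters)):
        env = dict(zip(letters, digits))
        if 0 in (env[a[0]], env[b[0]], env[c[0]]):
            continue                     # no leading zeros
        num = lambda w: int("".join(str(env[ch]) for ch in w))
        if num(a) + num(b) == num(c):
            return env
    return None

print(solve("SEND", "MORE", "MONEY"))
# {'D': 7, 'E': 5, 'M': 1, 'N': 6, 'O': 0, 'R': 8, 'S': 9, 'Y': 2}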
And then what they did was look at exactly what the people said and how long they took. And in some cases, where
they move their eyes, they had an eye
tracking machine. And then they
wrote programs that showed how this
guy solved a couple of these cryptarithmetic
problems. Then they ran the
program on a new one. And in some rare
cases, it actually solved the other problem. So this is a book, which
looks at human behavior and makes a theory
of what it's doing. And the output is a
rule-based system, so it's not a very
exciting theory. But there's never been
anything like it in-- you know, it was like Pavlov
discovering conditioned reflexes for rats or dogs. And Newell and Simon are
discovering some rather higher-level, almost Rodney Brooks-like system for how humans solve some
problems that most people find pretty hard. Anyway, what there
hasn't been is much-- I don't know of any follow-up. They spent years perfecting
those experiments, and writing about-- [AUDIO OUT] --results. And does anybody know anything like that? Are there psychologists trying to make real models of real people solving [INAUDIBLE] problems? [INAUDIBLE] AUDIENCE: Your mic is off. PROFESSOR: It has a green light. AUDIENCE: It has a green
light, but the switch was up. PROFESSOR: Boo. Oh, [INAUDIBLE]. AUDIENCE: We're all set now. PROFESSOR: [CHUCKLES] Yes. AUDIENCE: Did that
[INAUDIBLE] study try to see when a person gave up
on a particular problem-solving method [INAUDIBLE] how they
switched-- in other words, when they switched
to [INAUDIBLE]? PROFESSOR: It has
inexplicable points at which the person
suddenly gives up on that representation. And he says, oh, well,
I guess R must be 3. Did I erase? Well. Yes, it's got episodes, and
they can't account for the-- you have these little
jerks in the script where the model changes. And-- [COUGHS] sorry. And they announced
those to be mysteries, and say, here's a place
where the person has decided the strategy isn't
working and starts over, or is changing something. The amazing part is that
their model sometimes fits what the person says. For 50 or even 100
steps, the guy's saying, oh, I think z must
be 2 and p must be 7. And that means p plus z is
9, and I wonder what's 9. And so their model fits
for very long strings, maybe two minutes of the
person mumbling to themselves. And then it breaks, and then
there's another sequence. So Newell actually
spent more than a year after doing it verbally,
at tracking the person's eye motions, and trying to
correlate the person's eye motions with what the
person was talking about. And guess what? None. AUDIENCE: [CHUCKLING] PROFESSOR: It was almost as
though you look at something, and then to think about
it, you look away. Newell was quite distressed,
because he spent about a year crawling over this data
trying to figure out what kinds of mental events
caused the eyes to change what they were looking at. But when the problem
got hard, you would look at a blank
part of the thing more often than the place
where the problem turned up. So conclusion, that didn't work. When I was a very young
student in college, I had a friend named
Marcus Singer, who was trying to figure out how the
nerve in the forelimb of a frog worked. And so he was
operating on tadpoles. And he spent about
six weeks moving this sciatic nerve from the leg
up to the arm of this tadpole. And then they all got
some fungus and died. So I said, what are
you going to do? And he said, well, I guess
I'll have to do it again. And I switched from
biology to mathematics. AUDIENCE: [CHUCKLING] PROFESSOR: But in fact, he
discovered the growth hormone that he thought came from
the nerve and made the-- if you cut off the limb bud of a
tadpole, it'll grow another one and grow a whole-- it was a newt, I'm sorry. It's a salamander. It'll grow a new hand. If you wait till it's
got a substantial hand, it won't grow a new one. But he discovered the hormone
that makes it do that. Yeah. AUDIENCE: One of the questions
from the homework that relates to problem-solving. A common theme is
having multiple ways to react to the same problem. But how do we
choose which options to add as possible reactions
to the same problem? PROFESSOR: Oh. So we have a whole
lot of if-thens, and we have to choose which if. I don't think I have
a good theory of that. Yes, if you have a huge
rule-based system and they're-- what does Randy Davis do? What if you have a rule-based
system, and a whole lot of ifs fit the condition? Do you just take the one
that's most often worked? Or if nothing seems to
be working, do you-- you certainly don't want to
keep trying the same one. I think I mentioned
Doug Lenat's rule. Some people will assign
probabilities to things, to behaviors, and
then pick the way to react in proportion to the probability that that thing has
worked in the past. And Doug Lenat
thought of doing that, but instead, he just put
the things in a list. And whenever a hypothesis
worked better than another one, he would raise it, push it
toward the front of the list. And then whenever there was
a choice, he would pick-- of all the rules that
fit, he would pick the one at the top of the list. And if that didn't work,
it would get demoted.
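As a sketch of how simple that bookkeeping is -- a hypothetical rendering of the scheme just described, not Lenat's actual code -- rules live in an ordered list; of all the rules that fit, the one nearest the front is chosen, and it gets nudged forward when it works and backward when it doesn't.

# A minimal sketch of the promote/demote list just described -- a
# hypothetical rendering, not Lenat's actual code.

class RuleList:
    def __init__(self, rules):
        self.rules = list(rules)           # front of list = most trusted

    def pick(self, fits):
        """Return the frontmost rule whose condition fits the situation."""
        for rule in self.rules:
            if fits(rule):
                return rule
        return None

    def promote(self, rule):               # rule worked: move it up one slot
        i = self.rules.index(rule)
        if i > 0:
            self.rules[i - 1], self.rules[i] = self.rules[i], self.rules[i - 1]

    def demote(self, rule):                # rule failed: move it down one slot
        i = self.rules.index(rule)
        if i < len(self.rules) - 1:
            self.rules[i + 1], self.rules[i] = self.rules[i], self.rules[i + 1]

choices = RuleList(["use-opener", "step-on-cap", "find-strong-person"])
rule = choices.pick(lambda r: True)        # takes "use-opener", the frontmost
choices.demote(rule)                       # it failed, so it slips one place

No probabilities are stored anywhere, yet the ordering converges toward putting the most reliable rules up front.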
So that's when I became an anti-probability person. That is, if just
sorting the things on a list worked pretty
well, is probability going to do much better? No, because if you do
probability matching, you're worse off than-- than what? AUDIENCE: [INAUDIBLE] PROFESSOR: Ray
Solomonoff discovered that if you have a
set of probabilities that something
will work, and you have no memory, so that each
time you come and try the-- I think I mentioned
that the other day, but it's worth emphasizing,
because nobody in the world seems to know it. Suppose you have
a list of things, p equals this, or that, or that. In other words, suppose
there's 100 boxes here, and one of them has a gold brick
in it, and the others don't. And so for each box, suppose
the probability is 0.9 that this one has the gold brick, and this one has 0.01. And this has 0.01. Let's see, how many of them-- so there's 10 of these. That makes-- Now, what should you do? Suppose you're allowed
to keep choosing a box, and you want to get your gold
brick as soon as possible. What's the smart thing to do? Should you-- but
you have no memory. Maybe the gold brick
is decreasing in value, I don't care. But so should you keep trying
0.9 if you have no memory? Of course not. Because if you don't
get it the first time, you'll never get it. Whereas if you tried them
at random each time, then you'd have 0.9 chance of
getting it, so in two trials, you'd have-- what am I saying? In 100 trials, you're
pretty sure to get it, but in [? e-hundred ?]
trials, almost certain. So if you don't have any memory,
then probability matching is not a good idea. Certainly, picking the
highest probability is not a good idea,
because if you don't get it the first
trial, you'll never get it. If you keep using the
probabilities at-- what am I saying? Anyway, what do you think
is the best thing to do? It's to take the square
roots of those probabilities, and then divide them by
the sum of the square roots so it adds up to 1.
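Here is a quick numerical check of that claim -- a sketch, not Solomonoff's derivation. If the brick is in box i with probability p[i] and a memoryless searcher opens box j with probability q[j] on every trial, the time to find it is geometric with mean 1/q[i], so the expected number of trials is the sum of p[i]/q[i]; that sum is smallest when q is proportional to the square roots of p.

# A numerical check of the square-root rule described above -- a sketch,
# not Solomonoff's derivation. Expected trials = sum_i p[i] / q[i].

import numpy as np

p = np.array([0.9] + [0.01] * 10)        # the lecture's example; sums to 1.0

def expected_trials(q):
    return float(np.sum(p / q))

q_match = p                               # probability matching
q_sqrt = np.sqrt(p) / np.sqrt(p).sum()    # the square-root rule
q_unif = np.full_like(p, 1 / len(p))      # uniform random choice

print(expected_trials(q_match))           # 11.0 -- exactly n, no better than...
print(expected_trials(q_unif))            # 11.0 -- ...uniform guessing
print(expected_trials(q_sqrt))            # ~3.8 -- the memoryless optimum

And always opening the 0.9 box is even worse: if the first try misses, a memoryless searcher never finds the brick at all.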
So a lot of psychologists design experiments until they get the rat to
match the probability. And then they publish it. Sort of like the-- but if the animal is optimal
and doesn't have much memory, then it shouldn't match the
probability of the unknown. It should-- end of story. Every now and then, I
search every few years to see if anybody has
noticed this thing, which-- and I've never
found it on the web. Yeah. AUDIENCE: So earlier
in the class, you mentioned that the
rule-based methods didn't work, and that several
other methods were tried between the
[INAUDIBLE] [? immunities. ?] Could you go into a bit about
what these other methods were that have been tried? PROFESSOR: I don't mean
to say they don't work. Rule-based methods are great
for some kinds of problems. So most systems make
money, and if you're trying to make
hotel reservations and things, this business
of rule-based systems, it has a nice history. A couple of AI researchers,
really, notably Ed Feigenbaum, who was a student
of Newell and Simon, started a company for
making rule-based systems. And the company did pretty
well for a while, and they maintained
that only an expert in artificial intelligence
could be really good at making rule-based systems. And so they had a
lot of customers, and quite a bit of
success for a year or two. And then some people
at Arthur D. Little said, oh, we can do that. And they made some
systems that worked fine. And the market disappeared,
because it turned out that you didn't have
to be good at anything in particular to make
rule-based systems work. But for doing harder
problems, like translating from one language to
another, you really needed to have more structure,
and you couldn't just take the probabilities of
words being in a sentence, but you had to look for
bigrams and trigrams, and have some grammar
theory, and so forth. But generally, if you have
an ordinary data-processing problem, try a
rule-based system first, because if you understand
what's going on, there's a good chance you'll
get things to work. I'm sure that's what the
Hawkins thing started out as. I don't have any questions. AUDIENCE: Could I ask another
one for the homeworks? PROFESSOR: Sure. AUDIENCE: OK. Computers and machines
can use relatively few electronic components to run
a batch of different types of thought operations. All that changes is the data over
which the operation runs. In the critic-selector model, are resources different bundles of data or different physical parts of the brain? PROFESSOR: Which model? AUDIENCE: The critic-selector model. PROFESSOR: Oh. Actually, I've never seen
a large-scale theory of how the brain connects its-- there doesn't seem to be
a global model anywhere. Anybody read any
neuroscience books lately? AUDIENCE: [CHUCKLING] PROFESSOR: I mean, I just
don't know of any big diagrams. Here's this wonderful
behavioral diagram. So how many of you have run
across the word "ethology"? Just a few. There's a branch
of the psychology of animals, which is-- AUDIENCE: [CHUCKLING] PROFESSOR: Thanks. Which is called ethology. And it's the study of
instinctive behavior. And the most famous
people in that field-- who? Well, Niko Tinbergen and Konrad
Lorenz are the most famous. I've just lost the name
of the guy around 1900 who wrote a lot about
the behavior of ants. Anybody ring a bell? So he was the first ethologist. And these people don't study
learning because it's hard to-- I don't know why. So they're studying
instinctive behavior, which is, what are the
things that all fish do of a certain species? And you get these big diagrams. This is from a little book
which you really should read called The Study of Instinct. And it's a beautiful book. And if that's not
enough, then there's a two-volume similar
book by Konrad Lorenz, who was an Austrian researcher. They did a lot of stuff
together, these two people. And it's full of diagrams
showing the main behaviors that they were able to observe
of various lower animals. I think I mentioned
that I had some fish, and I watched the
fish tanks, what they were doing for
a very long time, and came to no
conclusions at all. And when I finally read
Tinbergen and Lorenz, I realized that it just had never occurred to me to guess what to look for. My favorite one was that
whenever a fire engine went by, Lorenz's sticklebacks,
the male sticklebacks would go crazy and
look for a female. Because when the female's in
heat, or whatever it's called-- estrus-- the lower
abdomen turns red. I think fire engines have
turned yellow recently, so I don't know what the
sticklebacks do about that. So if you're interested
in AI, you really should look at at least
one of these people, because that's the
first appearance of rule-based systems in
great detail in psychology. There weren't any computers yet. There must be 20 questions left. Yeah. AUDIENCE: While we're in
the topic of ethology, so I know that early on,
people were kind of-- they were careful
not to apply ethology to humans until about the '60s, when E.O. Wilson came along with sociobiology. So I was wondering about
your opinion on that, and maybe you have
anecdotes on [INAUDIBLE] pretty controversial around
this area especially. PROFESSOR: Oh, I don't know. I sort of grew up with
Ed Wilson because we had the same fellowship at
Harvard for three years. But he was almost
never there, because he was out in the jungle in
some little telephone booth watching the birds,
or bees, or-- he also had a 26-year-old ant. Ant, not aunt. A-N-T. I'm not sure what the
controversy would have been, but of course, there would
be humanists who would say people aren't animals. But then what the
devil are they? Why aren't they
better than the-- [CHUCKLES] You've got to read this. It's a fairly short book. And you'll never see
an animal the same way again, because I
swear, you start to notice all these
little things. You're probably
wrong, but you start picking up little
pieces of behavior, and trying to figure out what
part of the instinct system is it. Lorenz was
particularly-- I think in chapter 2 of the
Emotion Machine, I have some quotes
from these guys. And Lorenz was particularly
interested in how animals got attached to their parents-- that is, for those
animals that do get attached to their parents. Like alligator babies live
in the alligator's mouth for quite a while. It's a good, safe place. And Lorenz would catch birds
just when they're hatching. And within the first day
or so, some baby birds get attached to whatever
large moving object is nearby. And that was often Konrad
Lorenz, rather than the bird's mother,
who is supposed to be sitting on the
egg when it hatches, and the bird gets
attached to the mother. Most birds do, because they
have to stay around and get fed. So it is said that wherever
Lorenz went in Vienna, there were some
ducks or whatever-- birds that had gotten
imprinted on him would come out of the sky and land on his
shoulder, and no one else. And he has various theories
of how they recognize him. But you could do that too. Anyway, that was quite a field,
this thing called ethology. And between 1920 and 1950-- 1930, I guess, 1950-- there were lots
of people studying the behavior of animals. And Ed Wilson is probably
the most well-known successor to Lorenz and Tinbergen. And
I think he just wrote a book. Has anybody seen it? He has a huge book
called Sociobiology, which is too heavy to read. I've run out of things. Yes. AUDIENCE: Still thinking about
the question [INAUDIBLE]. [INAUDIBLE], The
Society of Mind, ideas in that book, [INAUDIBLE]
the machinery from it. What would the initial state
of the machinery be [INAUDIBLE] start something? Is that dictated by
the goals given to it? And by state, I mean the
different agents, the resources they have access to. What would that initial
state look like? PROFESSOR: He's asking if you
made a model of the Society of Mind
architecture, what would you put in it to start with? I never thought about that. Great question. I guess it depends
whether you wanted to be a person, or a marmoset,
or chicken, or something. Are there some animals
that don't learn anything? Must be. What do the ones that
Sydney Brenner studied do? AUDIENCE: C. elegans? They learn very
simple associations. PROFESSOR: The little worms? AUDIENCE: Mm-hmm. PROFESSOR: There was a rumor
that if you fed them RNA-- was it them or was it some
slightly higher animal? AUDIENCE: It was worms. PROFESSOR: What? AUDIENCE: RNA interference. Is that what you're
talking about? Yeah. PROFESSOR: There was one
that if you taught a worm to turn left when there was
a bright light, or right, and put some of its
RNA into another worm, that worm would copy
that reaction even though it hadn't been trained. And this was-- AUDIENCE: That wasn't worms. That was slugs. PROFESSOR: Slugs. AUDIENCE: I think it
was [INAUDIBLE] replace the [INAUDIBLE] or something. AUDIENCE: Some little
snail-like thing. And nobody was ever
able to replicate it. So that rumor spread around
the world quite happily, and there was a great
science fiction story-- I'm trying to remember-- in which somebody got
to eat some alien's RNA and got magical powers. AUDIENCE: [CHUCKLING] PROFESSOR: I think
it's Larry Niven, who is wonderful at taking
little scientific ideas and making a novel out of them. And his wife Marilyn was
an undergraduate here. So she introduced me
to Larry Niven, and-- I once gave a lecture
and he wrote it up. It was one of the big
thrills, because Niven is one of my heroes. Imagine writing a book with a
good idea in every paragraph. AUDIENCE: [CHUCKLING] Vernor Vinge, and Larry
Niven, and Frederik Pohl seem to be able to do that. Or at least on every page. I don't know about
every paragraph. Yeah. AUDIENCE: To follow
up on that question, it seems to me that
you almost were saying that if this
machinery exists, the difference between
these sorts of animals would be in [INAUDIBLE]. And I think on [INAUDIBLE], we can create like a chicken or a human [INAUDIBLE]. PROFESSOR: Well, no.
animals have scripts. Some might, but I'd say that-- I don't know where
most animals are, but I sort of make
these six levels, and I'd say that
none of the animals have this top self-reflective
layer except, for all we know, dolphins, and
chimpanzees, and whatever. It would be nice to know
more about octopuses, because they do so many wonderful things with their eight legs. How do they manage? Have you seen pictures of an
octopus picking up a shell, and walking to some quiet
place, and it's got-- there's some movies
of this on the web. And then it drops the shell and
climbs under it and disappears. It's hard to imagine
programming a robot to do that. Yeah. AUDIENCE: So I've noticed, both
in your books and in lecture, a lot of your
models and diagrams seem to have very hierarchical
structure to them. But as you [INAUDIBLE] in
your book and other places, passing between [INAUDIBLE]
feedback and self-reference are all very
important [INAUDIBLE]. So I'm curious if
you can discuss some of the uses of these very
hierarchical models, why you represented so many
things that way instead of [INAUDIBLE] theorem. PROFESSOR: Well, it's probably
very hard to debug things that aren't hierarchical. So we need a meta-theory. One thing is that,
for example, it looks like all neurons
are almost the same. Now, there are lots of differences
in their geometric features, but they all use the same
one or two transmitters, and every now and then, you
run across people saying, oh, neurons are
incredibly complicated. They have 100,000 connections. You can find it if you just
look up "neuron" on the web and get these essays explaining
that nobody will ever understand them,
because typically, a neuron is connected to 100,000
others, and blah, blah, blah. So it must be something
inside the neuron that figures out all this stuff. As far as I can see, it looks
like almost the opposite. Namely, probably
the neuron hasn't changed for half a
billion years very much, except in superficial
ways in which it grows. Because if you changed any
of the genes controlling its metabolism or the way
it propagates impulses, then the animal would
die before it was born. And so you can't make-- that's why the embryology of
all mammals is almost identical. You can't make a change at
that level before the first generations
of cell divisions, or everything
would be clobbered. The architecture would
be all screwed up. So as for the
people who say, well, maybe the important
memories of a neuron are inside it, because there are
so many fibers and things-- I bet that's sort of like
saying the important memory in a computer is in the
arsenic and phosphorus atoms of the semiconductor. So I think things have to be
hierarchical in evolution, because if you're building later
stuff on earlier stuff, then it's very hard to make any
changes in the earlier stuff. So as far as I know, the
neurons in sea anemones are almost identical to
the neurons in mammals, except for the later
stages of growth, and the way the
fibers ramify, and-- who knows, but there
are many people who want to find the
secret of the brain in what's inside the
neurons rather than outside. It'd be nice to get a
textbook on neurology from 50 years in the
future, and see how much of that stuff mattered. Where are our time machines? Did you have-- AUDIENCE: Yeah. Most systems have a state
that they prefer to be in, like a state in which
they're most comfortable. Do you think the mind
has such a state, or would it tend to certain
places or something? PROFESSOR: That's
an interesting question. I don't-- how does that
apply to living things? I mean, this bottle would
rather be here than here, but I'm not sure what you mean. AUDIENCE: Well, so apparently,
in Professor Tenenbaum's class, he shows this example
of a number game. They'll give you a
sequence of numbers, and he'll ask you to
find a pattern in it. So for example, if you had
a pattern like 10, 40, 50, and 55, he asked
the class to come up with different rules that could
describe the sequence. And between the choice
of, oh, this sequence is a sequence of
the multiples of 5 versus a sequence of the multiples
of 10 or multiples of 11, he says something like-- he phrases it like,
the multiples of 5 would have a higher
[INAUDIBLE] probability. So that got me thinking,
why would that be? Would our minds
have a preference for having as few categories
as possible in trying to view the world around us?
Trying to categorize things into as few categories as possible is
what got me thinking about it. PROFESSOR: Sounds very
strange to me, but certainly, if you're going to generate
hypotheses, you have to have-- the way you do it
depends on what this-- what does this
problem remind you of? So I don't see how you
could make a general-- if you look at the
history of psychology, there are so many efforts to
find three laws of motion like Newton's. Is he trying to do that? I mean, here you're talking
about people with language, and high-level semantics, and-- let's ask him what he meant. AUDIENCE: Professor [INAUDIBLE]. PROFESSOR: Yeah.
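A minimal sketch, in Python, of the Bayesian "size principle" behind the number game described in the exchange above. The 1-to-100 range, the two candidate hypotheses, and the prior weights are illustrative assumptions, not details from the lecture or from Professor Tenenbaum's class.

    # Sketch of the number-game reasoning: if examples are drawn uniformly
    # from a hypothesis h, then P(data | h) = (1 / |h|) ** n, so smaller
    # hypotheses fit more tightly; a simplicity prior then favors the more
    # natural rule. The range and priors below are assumed for illustration.
    def multiples_of(k, limit=100):
        return set(range(k, limit + 1, k))

    hypotheses = {
        "multiples of 5": multiples_of(5),
        "multiples of 10 or 11": multiples_of(10) | multiples_of(11),
    }
    priors = {"multiples of 5": 0.9, "multiples of 10 or 11": 0.1}
    data = [10, 40, 50, 55]

    posteriors = {}
    for name, h in hypotheses.items():
        fits = all(x in h for x in data)
        likelihood = (1.0 / len(h)) ** len(data) if fits else 0.0
        posteriors[name] = priors[name] * likelihood

    total = sum(posteriors.values())
    for name, score in posteriors.items():
        print(name, score / total)  # "multiples of 5" wins, driven by the prior

AUDIENCE: This is more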
of a social question, but there's always
this debate about how if AI gets to the point where
it can take care of humans, will it ever destroy humanity? And do you think that's
something that we should fear? And if so, is there some
way we can prevent it? PROFESSOR: If you
judge by the recent-- by what's happened
in AI since 1980, it's hard to imagine
anything to fear. But-- AUDIENCE: [CHUCKLING] PROFESSOR: But-- funny
you should mention that. I'm just trying to organize a
conference sometime next year about disasters. And there's a nice book
about disasters by-- what's his name? The Astronomer Royal. What? AUDIENCE: Martin Rees? PROFESSOR: Martin Rees. So he has a nice book, which
I just ordered from Amazon, and it came the next day. And it has about 10
disasters, like a big meteor coming and hitting the Earth. I forget the others, but
I have it in here somewhere. So I generated another
list of 10 to go with it. And so there are lots of bad
things that could happen. But I think right
now, that's not on the top of the
list of disasters. Eventually, some
hacker ought to be able to stop the
net from working because it's not very secure. And while you're at
it, you could probably knock out all of the
navigation satellites and maybe set off a
few nuclear reactors. But I don't think AI is the
principal thing to worry about, though it could very suddenly
get to be a problem. And there are lots of good
science fiction stories. My favorite is the Colossus
series by D.F. Jones. Anybody know-- there was a
movie called The Forbin Project, and it's about somebody
who builds an AI, and it's trained to
do some learning. And it's also the
early days of the web, and it starts talking to
another computer in Russia. And suddenly, it gets
faster and faster, and takes over all the
computers in the world, and gets control of all the
missiles, because they're linked to the network. And it says, I will destroy
all the cities in the world unless you clear off
some island and start building the following machine. I think it's Sardinia
or someplace. So they get bulldozers. And it starts building
another machine, which it calls Colossus 2. And they ask, what's
it going to do? And Colossus says,
well, you see, I have detected that there's
a really bad AI out in space, and it's coming this way,
and I have to make myself smarter than it really quick. Anyway, see if you can order
the sequel to Colossus. That's the second volume where
the invader actually arrives, and I forget what happens. And then there's
a third one, which was an anticlimax,
because I guess D.F. Jones couldn't
think of anything worse that could happen. AUDIENCE: [CHUCKLING] PROFESSOR: But Martin Rees can. Yeah. AUDIENCE: Going back to
her question about the example, and if a mind has a state
that it prefers to be in, would that example be more of
a pattern recognition example? So instead of 10,
40, 50, 55, what if it was [? logistical, ?]
like, good, fine, great, and you have to come up with
a word that could potentially fit in that pattern. And then that pattern could be
ways to answer "how are you?" PROFESSOR: Let's
do an experiment. How many of you have
a resting state? AUDIENCE: [INAUDIBLE] PROFESSOR: Sometimes when
I have nothing else to do, I try to think of "Twinkle
Twinkle, Little Star" happening with the second one
starting in the second measure, and then the third one starts
up in the third measure. And when that happens, I
start losing the first one. And ever since I was a baby,
when I have nothing else to do-- which is almost never-- I try to think of three versions
of the same tune at once and usually fail. What do you do when you
have nothing else to do? Any volunteers? What's yours? AUDIENCE: I try not to
think anything at all. See how long [INAUDIBLE]. PROFESSOR: You
try not to, or to? AUDIENCE: Not to. PROFESSOR: Isn't that
sort of a Buddhist thing? AUDIENCE: Guess so. PROFESSOR: Do you ever succeed? How do you get out of it? You have to think, well,
enough of this nothingness. If you succeeded,
wouldn't you be dead? AUDIENCE: [CHUCKLING] PROFESSOR: Or stuck? AUDIENCE: Eventually,
some stimulus will appear that is too
interesting to ignore. AUDIENCE: [CHUCKLING] PROFESSOR: Right,
and the threshold goes down till even the most
boring thing is fascinating. AUDIENCE: Yeah. AUDIENCE: [CHUCKLING] PROFESSOR: That would make a
good short story. Yeah. AUDIENCE: There was
actually a movie that really got to me when I was little. These aliens were trying to
infiltrate people's brains and, like, their thoughts. And to keep the aliens from
infiltrating your thoughts, you had to think of
a wall, which didn't make any sense at all, but-- AUDIENCE: [CHUCKLING] AUDIENCE: But now, whenever
I try to think of nothing, I just end up
thinking of a wall. AUDIENCE: [LAUGHING] PROFESSOR: There are these awful
psychoses, and about every five years, I get
an email from someone who says that, please
help me, there's some people who are putting
these terrible ideas in my head. Have you ever gotten one, Pat? And they're sort of
scary, because you realize that maybe the
person will suddenly figure out that it's you
who's doing it, if they-- AUDIENCE: [CHUCKLING] AUDIENCE: [INAUDIBLE]
husband [INAUDIBLE] all of them together once,
and I think they married. AUDIENCE: [LAUGHING] PROFESSOR: I remember
there was once-- one of them actually showed up, and he came
to visit Norbert Wiener, who is famous for-- I mean, he's the cybernetics
person of the world. And this person
came in, and he got between Wiener and
the door, and started explaining that somebody was
putting dirty words in his head and making the grass
on their lawn die. And he was sure it was
someone in the government. And this was getting
pretty scary. And I was near the door, so
I went and got [INAUDIBLE]-- it's a true story--
who was nearby, and I got [INAUDIBLE]
to come in. And [INAUDIBLE] actually talked
this guy down, and took him by the arm, and went somewhere,
and I don't know what happened, but Wiener was really
scared, because the guy kept blocking him from going out. [INAUDIBLE] was big. Wiener's not very big. AUDIENCE: [CHUCKLING] PROFESSOR: Anyway,
that keeps happening. Every few years, I get one. And I don't answer them. He's probably sending
it to several people. And I'm sure one of them is
much better at it than we are. How many of you have ever had
to deal with an obsessed person? How did they find you? AUDIENCE: I don't know. They found a number of people
in the Media Lab, actually. PROFESSOR: Don't
answer anything. But if they actually come,
then it's not clear what to do. Last question? Thanks for coming.