The following content is
provided under a Creative Commons license. Your support will help
MIT OpenCourseWare continue to offer high quality
educational resources for free. To make a donation or to
view additional materials from hundreds of MIT courses,
visit MIT OpenCourseWare at ocw.mit.edu. MARVIN MINSKY: I presume
everyone has an urgent question to ask. Maybe I'll have to
point to someone. AUDIENCE: One over there. MARVIN MINSKY: Oh, good. AUDIENCE: So [INAUDIBLE]
exactly what's said, but you said that maybe
the [INAUDIBLE] lights are associated with the glial cells. Is that right? MARVIN MINSKY: Oh, I don't
want to speculate on how the brain works, because-- [LAUGHTER] because there's
this huge community of neuroscientists who
write papers about-- they're very strange papers
because they talk about how maybe it's not the neuron. And I've just
downloaded a long paper by someone whose name I
won't mention about the idea that a typical neuron
has 100,000 connections. And so something
awesomely important must go on inside
the neuron's body. And it's got all these
little fibers and things. And presumably, if it's
dealing with 100,000 signals or something, then it
must be very complicated. So maybe the neuron isn't
smart enough to do that. So maybe the other cells
nearby that support the neurons and feed them and send chemicals
to and fro around there have something to do with it. How many of you have
read such articles? It's a very strange
community, because-- I think the problem is that the history of that science started with it being generally thought that all the neurons were connected. And then around 1890 came the first clear idea that nerve cells weren't arranged in a continuous network. Before that, I think it was generally believed that they were all connected to each other, because the microscopes of the time didn't show enough detail. And then the hypothesis that
the neurons are separate and there are little
gaps, called synapses, as far as I can tell
started around the 1890s. And from then on,
as far as I can see, neurology and psychology
became more and more separate. And the neurologists got
obsessed with chemicals, hormones, epinephrine, and there
are about a dozen chemicals involved that you can detect
when parts of the brain are activated. And so a whole bunch of folklore grew up about the roles of these chemicals. And people thought of some chemicals as inhibitory and others as excitatory. And that idea still spreads,
although what we know about the nervous system now-- and I
think I mentioned this before-- is that in general if you
trace a neural pathway from one part of the brain
to another, what happens is that the connections tend
to alternate, not always, but frequently. So that this connection
might inhibit this neuron. And then you look at the
output of that neuron, and that might tend
to excite neurons in the next brain center. And then most of those
cells would tend to inhibit. I mean, each brain center gets
inputs from several others. And so it's not
that a brain center is excitatory or inhibitory,
but the connections from one brain center to another
tend to have this effect. And that's probably
necessary from a systems dynamic point of view,
because if all neurons tended to either do nothing or
excite the next brain center, then what would happen? Soon as you got a certain
level of excitement, then more and more brain
centers would get activated. And the whole thing
would explode. And that's more or
less what happens in an epileptic
seizure, where if you get enough electrical and
chemical activity of one kind or another, mostly electrical-- I think, but I don't know-- then whole large parts of the brain start to fire synchronously. And the thing spreads very much like a forest fire.
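Here is a minimal Python sketch of the systems-dynamics point being made (illustrative only; the gains and sizes are made up): a chain of "brain centers" in which every link is excitatory blows up like a chain reaction, while alternating excitatory and inhibitory links keep total activity bounded.

```python
import numpy as np

def run_chain(signs, gain=1.5, steps=30):
    # signs[i] is +1 (excitatory) or -1 (inhibitory) for the link
    # from center i to center i+1.
    activity = np.zeros(len(signs) + 1)
    activity[0] = 1.0                      # an initial burst in the first center
    for _ in range(steps):
        new = activity.copy()
        for i, s in enumerate(signs):      # propagate along the chain
            new[i + 1] += s * gain * activity[i]
        activity = np.clip(new, 0, None)   # firing rates can't go negative
    return activity.sum()

print(run_chain([+1] * 8))        # all excitatory: activity explodes
print(run_chain([+1, -1] * 4))    # alternating: stays bounded
```

So that's a long rant. I guess I've repeated it several times. But it's hard to communicate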
with that community, because they really want to
find the secret of thinking and knowledge in
the brain cells, rather than in the architecture
of the interconnections. So my inclination is to find
an intermediate level, such as, at least in the cortex, which
is what distinguishes the-- does it start in mammals? AUDIENCE: I think so. MARVIN MINSKY: I think if-- rather than a neurology book, I'm thinking of Carl Sagan's book, in which there's a sort of triune theory that's very popular, which is that the brain consists of three major divisions. And the-- I forget what the lowest-level one is called, but the middle level is sort of the amphibian and then the mammalian and-- it's in the mammalian development that large parts of the brain are cortex. And the cortex isn't so much
like a tangled neural net. But it's divided
mainly into columns. And each column, these
vertical columns, tend to have six
or seven layers. I think six is the standard. And the whole thing is-- what is it, about 4 millimeters? 4 or 5 millimeters thick,
maybe a little more. And in each of these
columns, there's major columns, which
have about 1,000 neurons. And one of these columns is made up of maybe 10 or 20 of these minicolumns of 50 or 100 neurons, or whatever. And so my inclination is
to suspect that since these are the animals that think
and plan many steps ahead and do all the sorts of things
we take for granted in humans, that we want to look there
for the architecture of memory and problem-solving systems. In the animals
without cortexes, you can account for most
of their behavior in terms of fairly low-level,
immediate stimulus response reflexes and large major
states, like turning on some parts of some big
blocks of these reflexes when it's hungry and
turn on other blocks when there's an environmental
threat and so forth or whatever. Anyway, I forget what-- yes? AUDIENCE: So in
Chapter 3 you talk about the stages we go through when
we face something like your car breaks down and you
can't go to work. That's the example
given in the book. I'm wondering, how do we
decide how we transition from one stage to another? And why do you go through the
stages of denial, bargaining, like frustration,
depression, and then like only the last
stage seems productive? I guess, my main
question is how do we decide that we should
transition from one stage to another from [INAUDIBLE] MARVIN MINSKY: That's
a beautiful question. I think it's fairly well
understood in the invertebrates that there are different
centers in the brain for different activities. And I'm not sure
how much is known about how these things switch. How does an animal decide
whether it's time to-- for example, most animals are
either diurnal or nocturnal. So some stimulus comes along,
like it's getting dark, and a nocturnal animal
might then start waking up. And it turns on some
part of the brain, and it turns off
some other parts. And it starts to sneak
around looking for food or whatever it does at night. Whereas a diurnal animal,
when it starts to get dark, that might trigger some
brain center to turn on, and it looks for its place
to sleep and goes and hides. So some of these are
due to external things. Then, of course,
there are internal clocks. So for lots of
animals, if you put it in a box that's
dimly illuminated and it has a 24-hour
cycle of some sort, it might persist in that
cycle for quite a few days and go to sleep every 24 hours
for half the time and so on. A friend of mine once decided
he would test this. He's a famous AI theorist named Ray Solomonoff. And he put black paint on all his windows. And he found that he had
a 25 or 26-hour natural cycle, which was very nice. And this persisted
for several months. I had another friend who
lived in the New York subways, because his apartment
was in a building that had an entrance to the subway. And he stayed out of
daylight for six months. But anyway, he too
found that he preferred to be on a 25 or
26-hour day than 24. I'm rambling. But we apparently have
several different systems. So there's a dead reckoning system, where some internal clocks are regulating your behavior. And then there are other systems where people are very much
affected by the amount of light and so forth. So we probably have
four or five ways of doing almost everything
that's important. And then people get
various disorders where some of
these systems fail. And a person doesn't have
a regular sleep cycle. And there are disorders
where people fall-- what's it called when you
fall asleep every few minutes? AUDIENCE: Narcolepsy. MARVIN MINSKY:
Narcolepsy and all sorts of wonderful disorders just
because the brain has evolved so many different ways of doing
anything that's very important. Yeah? AUDIENCE: Can you describe
the best piece of criticism for the society of mind theory? MARVIN MINSKY:
Best piece of what? AUDIENCE: The best criticism. MARVIN MINSKY: Oh. It reminds me of
the article I recently read about the possibility
of a virus for-- what's the disorder where-- AUDIENCE: Alzheimer's. MARVIN MINSKY: No. The-- uh-- [LAUGHTER]
actually, there isn't any generally accepted
cause for Alzheimer's, as far as I know. What? AUDIENCE: Somebody
just did an experiment where they injected Alzheimer
infected matter into someone, and they got the same plaque. MARVIN MINSKY:
Oh, well, right, I wonder if that's
a popular theory. No, what's the
one where people-- AUDIENCE: Fibromyalgia. MARVIN MINSKY: Say it again. AUDIENCE: Fibromyalgia. MARVIN MINSKY: Yes, right. That's right, which is not
recognized by most theorists to be a definite disease. But there's been an
episode in which somebody-- I forget what her name is-- was pretty sure that she
had found a virus for it. And every now and then
somebody revives that theory and tries to get
more evidence for it. Anyway, there must be disorders
where the programming is bad, rather than a biochemical
disorder, because whatever the brain is, the
adult brain certainly has a very large
component of what we would, in any other case,
consider to be software. Namely lots of things that
you've learned, including ways for one part of the
brain to discover how to modulate or turn on or turn
off other parts of the brain. And since we've only
had this kind of cortex for 4 or 5 million
years, it's probably still got lots of bugs. Evolution never knows what-- when you make a
new innovation, you don't know what's going to come
after that that might find bugs and ways to get
short-range advantages, short-term advantages
at the expense of longer-term advantages. So lots of mental diseases
might be software bugs. And a few of them
are known to be connected to abnormal secretions
of chemicals and so forth. But even in those cases,
it's hard to be sure that the overproduction
or underproduction of a neurologically
important chemical is-- what should I call it-- a biological disorder or
a functional disorder, because some part of
the nervous system might have found some trick
to cause abnormal secretions of some substance. That's the sort of
thing that we can expect to learn a
great deal more about in the next generation because
of the lower cost and greater resolution of brain
scanning techniques and-- what's his name--
and new synthetic ways of putting in fluorescent
chemicals into a normal brain without injuring it
much, so that you can now do sort of macro chemical
experiments of seeing what chemicals are being
secreted in the brain with new kinds of
scanning techniques. So neuroscience is going
to be very exciting in the next generation with
all the great new instruments. As you know, my complaint
is that somehow introduction to the-- I'm not saying any of the
present AI theories have been confirmed to tell you that the
brain works as such and such a rule-based system
or such and such a-- or use Winston-type
representations or Roger Schank-type
representations or scripts or
frames or whatever. And the next-to-last chapter of The Emotion Machine sort of summarizes I think
almost a dozen different AI theories of ways to
represent knowledge. Nobody has confirmed that
any of those particular ideas represent what happens
in a mammalian brain. And the problem to me is that
the neuroscience community just doesn't read that stuff
and doesn't design experiments to look for them. David has been moving from
computer science and AI into that. So he's my current
source of knowledge about what's happening there. Have any of you been following
contemporary neuroscience? That's strange. Yeah? AUDIENCE: So you already talked
about software a little bit. So I think they analyzed Einstein's brain. And I realized like that's why I talked about glial cells. And maybe he had a lot more glial cells than normal humans. And so do you believe that
the intelligence of humans is like more of the software
side or on the hardware side? Like we have computers that
are very, very powerful-- can we create software to run on these machines that reproduces, like, humans? MARVIN MINSKY: I don't see
any reason to doubt it. As far as we know computers
can simulate anything. What they can't do yet, I suppose, is simulate large-scale quantum phenomena, because the Feynman theory of quantum mechanics says that if you have a
network of physical systems that are connected, then
it's in the nature of physics that whatever happens
from one state to another in the real universe,
whatever happens actually happens by the wave function. The wave function
represents the sum of the activities propagating
through all possible paths. So in some sense
that's too exponential to simulate on a computer.
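A toy illustration of that exponential blow-up (my numbers, purely for scale): in a layered system where each element feeds b elements in the next layer, the number of distinct paths the sum-over-paths calculation ranges over grows as b^depth.

```python
# Count the paths a "sum over all possible paths" would have to include
# in a layered network with uniform branching. Purely illustrative.
def path_count(branching, depth):
    return branching ** depth

for depth in (5, 10, 20, 40):
    print(depth, "layers:", path_count(3, depth), "paths")
```

In other words, I believe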
the biggest supercomputers can simulate a helium
atom today fairly well. But they can't simulate
a lithium atom, because it's sort of four or
five layers of exponentiation. So it would be 2 to the 2 to
the 2 to the 2 and 4 to the 4 to the 4 to the 4. [INAUDIBLE] But I suspect that the
reason the brain works is that it's evolved to prevent
quantum effects from making things complicated. The great thing about a neuron
is that, generally speaking, a neuron fires all or none. And you get this point-- you have to get a full
half volt of potential between the neurons firing [INAUDIBLE] fluid. And a half a volt is a big [INAUDIBLE]. AUDIENCE: So you believe that
the software that we have right now is equivalent
to, for example, the intelligence that
we have like in dogs or, for example, simple animals
is like the difference that like-- do we just need to
implement the software, like multiply the software? Or so how we need to create
a whole software that-- MARVIN MINSKY: No,
there doesn't seem to be much difference
in the architecture, in the local architecture of-- AUDIENCE: Turn
your microphone on. The one in your pocket. MARVIN MINSKY: Oh, did
I turn it off again? AUDIENCE: Yes. MARVIN MINSKY: It's not green. AUDIENCE: Yeah, so
throw the switch. Is it green now? MARVIN MINSKY: Now, it's green. The difference between
the dog and the person is the huge frontal cortex. I think the rest of
it is fairly similar. And I presume the hippocampus
and amygdala and the structures that control which
parts of the cortex are used for what are
somewhat different. But the small details of the-- all mammalian brains are
practically the same. I mean, basically, you can't
make an early genetic change in how neurons work, because all the brain cells of the offspring would be somewhat different and the thing would be dead. So evolution has this
property that generally there are only two places in the
development of an embryo that evolution can operate. Namely in the
pre-placental stage, you can change the way the
egg breaks up and evolves. And you can have amazing
things like identical twins happen without any effect on the
nature of the adult offspring. Or you can change the things
that happened most recently in evolution like
little tweaks in how some part of the
nervous system works, if it doesn't change
earlier stages, what you-- However, mutations that operate
in the middle of all that and change the number of segments in the embryo-- I guess you could have a longer tail or a shorter tail, and that won't affect much. But if you change the
12 segments of the spine that the brain
develops from, you'd get a huge alteration in
how that animal will think. In other words, evolution cannot
change intermediate structures very much or the
animal won't live. Bob Lawler. AUDIENCE: If one thinks of
comparing a person to a dog, would it not be most appropriate
to think of those persons who were like the wild
boy of southern France who grew up in the woods
without any language and say that if
you're going to look at individual's
intelligence that would be a fair
comparison with the dog. Whereas what we have when
we think of people today is people who have learned
so much through interaction with other people that what we have is the transmission of culture-- essentially, ways of thinking that have been learned throughout the history of civilization, which some of us are able to pass on to others? MARVIN MINSKY: Oh, sure.
a dog to humans, he doesn't learn language. So-- AUDIENCE: He may or may
not come if you call him. MARVIN MINSKY: Right. But presumably language
is fairly recent. So you could have mutations in
the structure of the language centers and still have
a human that's alive. And it might be better at
language than most other people or somewhat worse. So we could have lots of
small mutations in anything that's been recently evolved. But the frontal cortex is-- the human cortex is
really very large compared to the
rest of the brain. Same in dolphins and a couple
of other animals, I forget-- whales. Yeah? AUDIENCE: So the
reason why I ask that is that it seems to me
that we have some quality, like some kind of-- we can see the world-- like add some
qualities to the world. And like this is what I
would call consciousness. And like for me, it
seems that dogs also have this quality of
like seeing the world and like adding qualities
to the world, so like maybe, this is good, this is bad. Like there are different
qualities for different beings. And like the software
that we produce right now seems to be maybe faster
and like maybe do more tests than what maybe a dog does. But for me, it doesn't seem
that it has this essential quality-- I think like it doesn't have consciousness in the sense that it doesn't like attribute qualities to the things in the world maybe.
I know what you're getting at. But you're using that
word consciousness, which I've decided to
abandon, because it's 36 different things. And probably a dog has
5 or 6 of them or 31. I don't know. But one question
is, do you think a dog can think several
steps ahead and consider two alternative-- that's funny. Oh, let's make this abstract. So here's a world. And the dog is here. And it wants to get here. And there are all sorts
of obstacles in it. So can the dog say,
well, if I went this way I'd have such and
such difficulty, whereas if I went this way, I'd
have this difficulty. Well, I think this
one looks better.
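A toy version of that blackboard example, sketched in Python (the grid, costs, and penalty are invented): the planner "imagines" both routes by scoring them instead of walking them, and only then commits.

```python
# Two candidate routes across a small grid with some obstacles.
obstacles = {(1, 1), (1, 2), (2, 2)}

def imagined_difficulty(route):
    # One unit of effort per step, plus a penalty for each obstacle hit.
    return len(route) + 10 * sum(step in obstacles for step in route)

routes = {
    "this way": [(0, 1), (1, 1), (2, 1), (3, 1)],                  # short but blocked
    "that way": [(0, 1), (0, 2), (0, 3), (1, 3), (2, 3), (3, 3)],  # longer, clear
}
best = min(routes, key=lambda name: imagined_difficulty(routes[name]))
print(best, "looks better")
```

Do you think your dog considers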
two or three alternatives and makes plans? I have no idea. But the curious
thing about a person is you can decide
that you're going to not act in the
situation until you've considered 16 plans. And then one part
of your brain is making these different
approaches to the problem. And another part of
your brain is saying, well, now, I've made five
plans, and I'm beginning to forget the first one. So I better reformulate it. And you're doing all of this self-consciously, in the sense that you're making plans
that involve predicting what decisions you will make. And instead of making
them, you make the decision to say I'm going to
follow out these two plans and use the result of that
to decide which one to do. Do you think a dog
does any of that? Does it look around
and say, well, I could go that way or this way? Hmm. I remember our dog was
good at if you'd throw a ball it would go and get it. And if you threw two balls it
would go and get both of them. And sometimes if you
threw three balls, it would go and get them all. And sometimes if a ball
would roll under a couch that it couldn't reach, it
would get the other two, and it would think. And then it would run
back to the kitchen where that ball
is usually found. And then it would come
back disappointed. So what does that mean? Did it have parallel plans? Or does it make a new one
when the previous one fails? And they're not
actually parallel. What's your guess? How far ahead does a dog think? Do you have a dog? AUDIENCE: Yeah. I do have a dog. But I don't believe that's
the essential part of beings that have some kind
of advanced brain. Like we can plan ahead. Humans can plan ahead. But I don't think they
are the fundamental part of intelligence. Like humans, I
think Winston says that humans are better
than the primates in like they can
understand stories and they can join
together stories. But somehow I don't buy the
story that primates are just like rule planners. I think somehow we have some
quality meshing of the world and like somehow we're
not writing a software. MARVIN MINSKY: But,
you know, it's funny. Computer science
teaches us things that weren't obvious before. Like it might turn out
that if you're a computer and you only have
two registers, then-- well, in principle,
you could do anything, but that's another matter. But it might turn out that maybe
a dog has only two registers and a person has four. And a trivial thing
like that makes it possible to have two plans
and put them in suspense and think about the strategy
and come back and change one. Whereas if you only
had two registers, your mind would be
much lower order. And there's no big difference. So computer science tells us
that the usual way of thinking about abilities might be wrong. Before computer science,
people didn't really have that kind of idea. Many years ago, I
was in a contest-- I mean, you know, in science-- because some of our friends
because some of our friends showed that you could make a
universal computer with four registers. And I had discovered
some other things, and I managed to show that you
could make a universal computer with just two registers. And that was a big surprise
to a lot of people. But there never was
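To make the flavor of that result concrete, here is a tiny counter-machine interpreter in Python (a sketch, not Minsky's actual construction): the only real operations are "increment" and "decrement or branch if zero," and machines of this kind -- with enough registers, and with suitable encoding tricks even just two -- turn out to be universal.

```python
def run(program, registers):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "inc":                # increment a register
            registers[args[0]] += 1
            pc += 1
        elif op == "decj":             # decrement, or branch if already zero
            reg, target = args
            if registers[reg] > 0:
                registers[reg] -= 1
                pc += 1
            else:
                pc = target
        elif op == "jmp":              # unconditional jump, for convenience
            pc = args[0]
    return registers

# Add register 1 into register 0 using only these operations:
program = [("decj", 1, 3),   # 0: if r1 == 0, halt; else r1 -= 1
           ("inc", 0),       # 1: r0 += 1
           ("jmp", 0)]       # 2: repeat
print(run(program, [3, 4]))  # -> [7, 0]
```

But there never was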
anything in the history of psychology of that nature. So there never were really
technical theories of-- it's really
computational complexity. What does it take to solve
certain kinds of problems? And until the 1960s, there
weren't any theories of that. And I'm not sure that that aspect of computer science has actually reached many psychologists or neuroscientists. I'm not even sure
that it's relevant. But it's really interesting
that the difference between 2 and 3 registers could
make an exponential difference in how fast you could solve
certain kinds of problems and not others. So maybe there'll be a little
more mathematical psychology in the next couple of decades. Yeah. AUDIENCE: So in
artificial intelligence, how much of our effort
should be devoted to a kind of reflecting
on our thinking as humans and trying to figure out
what's really going on inside our brains and
trying to kind of implement that versus observing
and identifying what kinds of problem we, as
humans, can solve and then come up with an intuitive
way for a computer to kind of in a human-like
way solve these problems? MARVIN MINSKY: They're
a lot of nice questions. I don't think it
doesn't make any sense to suggest that we think about
what's happening in our brains, because that takes
scientific instruments. But it certainly
makes sense to go over older theories of
psychology and ask to solve a certain
kind of problem, what kind of procedures
are absolutely necessary? And you could find some
things like that, like how many registers would you need
and what kinds of conditionals and what kind of addressing. So I think a lot of cognitive
psychology, modern cognitive psychology, is of
that character. But I don't see any way
to introspect well enough to guess how your
brain does something, because we're just
not that conscious. You don't have access to-- you could think for
10 years about how do I think of the next
word to speak, and unlikely that you would-- you might get some
new ideas about how this might have happened,
but you couldn't be sure. Well, I take it back. You can probably get
some correct theories by being lucky and clever. And then you'd have to
find a neuroscientist to design an experiment
to see if there's any evidence for that. In particular, I'd
like to convince some neurologists to
consider the idea of k-lines. It's described I think
in both of my books. And think of experiments to see
if you could get them to light up or otherwise localize in-- once you have in your mind the
idea that maybe the way one brain center connects-- sends information to another
is over something like k-lines, which I think I talked
about that the other day-- random superimposed
coding on parallel wires, then maybe you could
think of experiments that even present brain
scanning techniques could use to localize these.
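Here is a small Python sketch of what "random superimposed coding on parallel wires" might mean (my own illustration of the idea, not a model of any data): each symbol gets a random sparse set of active wires; several symbols are sent at once by OR-ing their codes onto the same wires, and a receiver tests for a symbol by checking that all of its wires are active.

```python
import random

N_WIRES, K_ACTIVE = 256, 8
random.seed(0)

# Each symbol is assigned a random sparse subset of the wires.
code = {sym: frozenset(random.sample(range(N_WIRES), K_ACTIVE))
        for sym in ("grasp", "red", "round", "apple", "heavy")}

def superimpose(symbols):
    wires = set()
    for s in symbols:
        wires |= code[s]           # OR the codes onto the shared wires
    return wires

def present(wires, sym):
    return code[sym] <= wires      # are all of sym's wires active?

active = superimpose(["red", "round", "apple"])
print(present(active, "apple"))    # True
print(present(active, "grasp"))    # False, with high probability
```

My main concern is that the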
way they do brain scanning now is to set thresholds to see
which brain centers light up and which turn off. And then they say, oh, I
see this activity looks like it happens in the
lateral hippocampus because you see that light up. I think that there should
be at least a couple of neuroscientist groups
who do the opposite, which is to reduce the contrast. And when there are
several brain centers that seem to be involved
in an activity, then say something to the
patient and look for one area to get 2% dimmer and
another to look 4% brighter and say that might
mean that there's a k-line going from
this one to that one with an inhibitory
effect on this or that. But as far as I know
right now, every paper I've ever seen published showing
brain centers lighting up has high contrast. And so they're missing
all the small things. And maybe they're only seeing
the end result of the process where a little thinking has
gone on with all these intricate low intensity interactions,
and then the thing decides, oh, OK, I'm
going to do this. And you conclude that that
brain center which lit up is the one that
decided to do this, whereas it's the result of a
very small, fast avalanche. AUDIENCE: Have you seen
the one a couple of weeks ago about reading out
the visual in real time? MARVIN MINSKY: From
the visual cortex? AUDIENCE: Yes. Quite a nice hack--
they aren't actually reading out the visual field. For each subject, they do a
massive amount of training where they flash thousands
of 1-second video clips and assemble a database of
very small perturbations in different parts of the
visual cortex lighting up. And they show a novel video
to each of the subjects and basically just do
a linear combination of all of the videos
that they have done in the training phase
weighted by how closely things line up in the brain. And you can sort of
see what's going on. It's quite striking.
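A rough sketch of the reconstruction scheme as just described (this seems to refer to the Nishimoto et al. 2011 work; the real method is far more elaborate, and all the data here is random): each training clip has a recorded response vector, and a novel stimulus is reconstructed as an average of training clips weighted by response similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clips, n_voxels, n_pixels = 1000, 50, 64
train_resp = rng.normal(size=(n_clips, n_voxels))  # brain response per training clip
train_clip = rng.normal(size=(n_clips, n_pixels))  # the training clips themselves

def reconstruct(novel_resp, top_k=30):
    # Cosine similarity between the novel response and each training response.
    sims = train_resp @ novel_resp / (
        np.linalg.norm(train_resp, axis=1) * np.linalg.norm(novel_resp))
    best = np.argsort(sims)[-top_k:]               # the most similar clips
    weights = np.clip(sims[best], 0, None)
    return weights @ train_clip[best] / weights.sum()

print(reconstruct(rng.normal(size=n_voxels)).shape)  # a 64-pixel "reconstruction"
```

MARVIN MINSKY: Can you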
tell what they're thinking? AUDIENCE: You can only
tell what they're seeing. But I think-- MARVIN MINSKY: You know,
if your eyes are closed, your primary visual cortex
probably doesn't do anything, does it? AUDIENCE: I think it's just-- yeah. MARVIN MINSKY: But
the secondary one might be representing
things that might be. AUDIENCE: Yes. So the goal of the
authors of this paper is eventually to literally
make movies out of dreams. But that's a long way off. MARVIN MINSKY: It's an old
idea in science fiction. How many of you read
science fiction? Wow, that's a majority. Who's the best new writer? AUDIENCE: Neal Stephenson. MARVIN MINSKY: He's been
writing a long time. AUDIENCE: He's new
compared to Heinlein. [LAUGHTER] MARVIN MINSKY: I had
dinner with Stephenson at the Hillis's a
couple of years ago. Yeah? AUDIENCE: So from
what I understood, it seems that you're saying
that the difference between us and like, for example, dogs
is just a computational power. So do you believe
that the difference between dogs and computers
is also just computational? Like what's the difference
between dogs and like Turing machine? Or there is no difference? MARVIN MINSKY: It might be
that only humans and maybe some of their closest relatives
can imagine a sequence. In other words, the simplest and
oldest theories in psychology were theories of association, like David Hume's: one idea in the mind or brain causes another idea to appear after it. So that means that a brain that's learned associations or learned if/then
rule-based systems can make chains of things. But the question is, can any
animal, other than humans, imagine two different situations
and then compare them and say, if I did this and then that,
how would the result differ from doing that and then this? If you look at Gerry
Sussman's thesis-- if you're at MIT,
a good thing to do and you're taking
your course, you should read the PhD
thesis of your professor. It not only will
help you understand better what the
professor said, you'll get a higher grade, if you
care, and many other advantages. Like you'll actually
be able to talk to him and his mind won't throw up. So, you know, I don't know if
a dog can recapitulate as-- can the dog think, I think
I'll go around this fence and when I get to this tree
I'll do this, I'll pee on it-- that's what dogs do-- whereas if I go this way
something else will happen? It might be that pre-primates can't do much of that. On the other hand, if you ask,
what is the song of the whale? What's the whale that
has this 20-minute song? My conjecture is
that a whale has to swim 1,000 miles or several
hundred miles sometimes to get the food it wants
because things change. And each group of whales-- humpback whales, I
guess, sing this song that's about 20 minutes long. And nobody has made
a good conjecture about what the content of that song is, but it's shared
among the animals. And they can hear it 20 or
50 miles away and repeat it. And it changes every season. So I suspect that the obvious
thing that it should be about is where's the food
these days, where are the best flocks
of fish to eat, because a whale can't afford
to swim 200 miles to the place where its favorite fish were
last year and find it empty. It takes a lot of energy
to cross the ocean. So maybe those animals
have the ability to remember very long
sequences and even some semantics
connected with it. I don't know if dogs
have anything like that. Do dogs ever seem to be
talking to each other? Or do they just-- AUDIENCE: I have a story about dogs. So apparently in
Moscow, not all dogs, but a very small fraction of
the stray dogs in the city have learned how
to ride the metro. They live out in the
suburbs because I guess people give them less
trouble when they're out in the suburbs. And then they take
the subway each day into the city center where
there are more people. And they have various strategies
for begging in the city center. So for instance, they find
some guy with a sandwich, and they bark really
loudly behind the guy, and the guy would
drop the sandwich. And then they would steal it. Or they have a pack of them,
and they all know each other. And they send out a really
cute one to beg for food, and so they'll give
the cute one food. And the cute one brings
it back to everyone else. And simply navigating the subway
is actually a bit complicated for a dog, but somehow a very
small group of dogs in Moscow have learned how to do it, like
figure out where their stop is, get on, get off. MARVIN MINSKY: Yeah, our dog
once hopped on the Green Line and got off at Park Street. So she was missing for a while. And somebody at Park
Street called up and said your dog is here. So I went down and got her. And the agent said,
you know, we had a dog that came to Park Street
every day and changed trains and took the Red
Line to somewhere. And finally, we found
out that its master had-- it used to go to work with its
owner every day, and he died. And the dog took the
same trip every day. And the T people understood that he shouldn't be bothered. Our dog chased cars.
knew she was going to get hurt. And finally, a car
squashed her leg, and she was laid up for a while
with a somewhat broken leg. And I thought, well, she
won't chase cars anymore. But she did. But what she wouldn't do is go
to the intersection of Carlton and Ivy Street
anymore, which is-- so she had learned something. But it wasn't the right thing. I'm not sure I answered your-- AUDIENCE: Actually,
according to-- there's this story that
you gave in Chapter 2 about the girl who
was digging dirt. So in the case where she
learns whether in digging dirt is a good or bad
activity is when there is somebody with whom
she had an attachment bond present who's telling her
whether it's good or bad. And in the case where
she learned to avoid that spot is when something
bad happens to her in the spot. So in a sense, the dog is
behaving just like that logic. MARVIN MINSKY: Yes. Except that the dog is oriented
toward location rather than something else. So-- AUDIENCE: Professor, can you
talk about possible hierarchies or representation schemes of knowledge, like semantic nets on top. And at the bottom, there's like-- you mentioned k-lines in the middle, or maybe they were on the bottom. There's things up there. So the way I thought about what you presented is that for humans-- it's just natural that you need all of the intermediate representations in order to support something like semantic nets. And it seems natural to me to think that humans have this whole hierarchy of representations, but dogs might have something only in the middle, like they only have something like neural nets or something. So my question
is, what behaviors that you could observe
in real life could only be done with one of these
intermediate representations of knowledge that can't be done
with something like machine learning? MARVIN MINSKY: Hmm, you
mean machine learning of some particular kind? AUDIENCE: That's currently
fashionable I think. Kind of like with brute force of
calibration of some parameter. It seems to me that if you
recognize a behavior like that, it might be a worthy
intermediate goal to be able to model that instead
of trying to model something like natural
language, where you might need the first part
to get the second part. MARVIN MINSKY: Well, it
would be nice to know-- I wonder how much is known
about elephants, which are awfully smart compared to-- I suspect that they are
very good at making plans, because it's so
easy for an elephant to make a fatal mistake. So unfortunately,
probably no research group has enough budget to
study that kind of animal, because it's just too expensive. How smart are elephants? Anybody-- I've never
interacted with one. I'm not sure if you
have a question. AUDIENCE: I think the question
is are there behaviors that you need an intermediate
level of the repetition of knowledge in order
to perform that you don't need like the highest
level like semantic-- like basically natural
language to do. So you could say that by some
animal doing this behavior, I know that it has
some intermediate level of representation
of knowledge that's more than kind of a brute force
machine learning approach. Because like what's
discussed before, a computer can do
path finding, which is like a brute force approach. I don't think that's how
humans do it or animals do it. MARVIN MINSKY: I can't
think of a good-- it's just hard to think of
any animals besides us that have really elaborate
semantic networks. There's Koko, who is a
gorilla that apparently had hundreds of words. But-- AUDIENCE: I think
the question is to find something that's lower
than words, like maybe Betty the crow-- MARVIN MINSKY: With
that stick, yeah. How many of you
seen the crow movie? She has a wire that she
bends and pulls something out of a tube. But-- AUDIENCE: I don't think
machine learning can do that. But I don't think you
need semantic nets either. MARVIN MINSKY: I have
a parrot who lives in a three-dimensional cage. And she knows how to get
from any place to another. And if she's in a hurry,
she'll find a new way at the risk of injuring
a wing, because there are a lot of sticks in the way. So flying is risky. Our daughter, Julie, once
visited Koko, the gorilla. And she was introduced-- Koko's in a cage. And Penny, who is
Koko's owner, introduces Julie in sign language. It's not spoken. It's sign language. So Julie gets some name. And she's introduced to Koko. And Koko likes Julie. So Koko says, let me out. And Penny says, no,
you can't get out. And Koko says,
then let Julie in. And I thought that showed
some fairly abstract reasoning or representation. And Penny didn't let Julie in. But Koko seemed to have a fair
amount of declarative syntax. I don't know if she could do
passives or anything like that. If you're interested,
you probably can look it up on the web. Penny's owner-- I mean
Penny thought that Koko knew 600 or 700 words. And a friend of ours was a
teenager who worked for her. And what's his name? And he was convinced that Koko
knew more than 1,000 words. But he said, you
see, I'm a teenager and I'm still good at picking
up gestures and clues better than the adults here. But anyway I gather
Koko is still there. And I don't know if she's
still learning more words. But every now and then
we get a letter asking to send more money. Oh, in the last
lecture, I couldn't think of the right cryptarithmetic example. I think that's the one that the Newell and Simon book starts out with. So obviously, M is 1. And then I bet some of you could figure that out in 4 or 5 minutes. Anybody figured it out yet? Help. Send more questions. Yeah?
you could figure that out in 4 or 5 minutes. Anybody figured it out yet? Help. Send more questions. Yeah? AUDIENCE: I have an example. For instance, I go
out to a restaurant of this type of exotic food
that I've never ever had before. And I end up getting
sick from it. So what determines what
I learned from this? Because there are many
different possibilities. There is the one
possibility of I learned to avoid the
specific food I ate. Another possibility
is like I learn to avoid that type
of food, because it might contain some sort of
spice that I react to badly. And a third possibility--
there might be more-- I learn to avoid
that restaurant, because it just might
be a bad restaurant. So in this case, it's
not entirely clear which one to pick. And, of course, in
real life, I might go there again and
comparatively try another food or try the same food at
a different restaurant. But what do you think about this
on that scenario, what causes people to pick which one? MARVIN MINSKY: The trouble is
we keep thinking of ourselves as people. And what you really
should think of yourself as a sort of Petri dish with
a trillion bacteria in it. And it's really not
important to you what you eat, but your
intestinal bacteria are the ones who are
really going to suffer, because they're not
used to anything new. So I don't know what
conclusion to draw from that. But-- AUDIENCE: Previously,
you mentioned that David Hume
thought that knowledge is represented as associations. And that occurs to me as
being some sort of like a Wiki structure where
entries have tags. So an entry might be
defined by what tags it has and what associations it has. I'm wondering if
that structure has been-- if somebody has
attempted to code that into some kind of computational structure, has there been any success with putting that idea into a potential AI?
use semantic networks as representations? Pat, do you know, has anybody-- is anyone building an AI system
with semantic representations or semantic networks anymore? Or is it all-- everything I've seen
has gone probabilistic in the last few years. Your project. Do you have any competitors? AUDIENCE: No. MARVIN MINSKY: Any idea what
the IBM people are using? I saw a long article that
I didn't read, yet but-- AUDIENCE: Traditional
information retrieval plus 100 hacks plus
machine learning. MARVIN MINSKY: They
seem to have a whole lot of slightly different
representations that they switch among. AUDIENCE: But none of
them are very semantic. AUDIENCE: Well,
they probably have-- I don't know, does anybody
know what the answer is? But they must have little frame-like things for the standard questions. MARVIN MINSKY: Of course, the
thing doesn't answer any-- it doesn't do any reasoning
as far as you can tell. AUDIENCE: Right. MARVIN MINSKY: So it's
trying to match sentences in the database
with the question. Well, what's your
theory of why there aren't other groups working on
what we used to and you are? AUDIENCE: Well, bulldozer computing is a fad. And if you can do better in less time that way than by figuring out how it really works, then that's what you do. No one does research on chess, no one does research on how humans might play chess, because the bulldozer
computers have won. MARVIN MINSKY: Right. There were some articles
on chess and checkers early in the game. But nothing recent
as far as I know. AUDIENCE: So in many ways it's
a local maximum phenomenon. So bulldozer computing
stuff has got up to a certain local maximum. Until you can do better than
that some other way, then [INAUDIBLE] MARVIN MINSKY:
Well, I wonder if we could invent a new TV show where
the questions are interesting. Like I'm obsessed
with the question of why you can pull
something with a string, but you can't push it. And, in fact, what
was this-- we had a student who actually did
something with that a long time ago. But I've lost track of him. But how could you
make a TV show that had common sense questions
rather than ones about sports and actors? AUDIENCE: Well, you
don't you imagine what happens when you push a string? It's hard to explain the-- MARVIN MINSKY: It buckles. AUDIENCE: It's easy to imagine. MARVIN MINSKY: Yeah,
So you can simulate it. AUDIENCE: Yeah. MARVIN MINSKY: Yeah. AUDIENCE: I have a question. So suppose in the
future we can create a robot as intelligent and as smart as a human-- how should we evaluate it? When do we know that we've reached it-- like which tests should it pass, or which [INAUDIBLE] should [INAUDIBLE]? So for example, [INAUDIBLE] is asked some pretty hard questions and seems to be intelligent. But all it is doing is making some attempts and then calculating some probability and stuff. Humans don't do that. They try to understand the
question and look to answer it. But then suppose you
can create a robot that can behave as it is like-- I don't know, how
would you evaluate when do you know that
you reach something? MARVIN MINSKY:
That's sort of funny, because if it's any good, you
wouldn't have that question. You'd say, well,
what can't it do? And why not? And you'd argue with it. In other words, people talk
about passing the Turing test, or whatever. And it's hard to imagine a
machine that you converse with for a while and then when
you're told it's a machine, you're surprised. AUDIENCE: So I
think, for example, you can make a machine to say
some very intelligent and smart things, because
like it may know, it takes all this information
from different books and all this information that
it has somewhere in a database, right. But then like when people speak, they kind of understand when you're speaking. How do you know like some
robot understands something or doesn't understand? Or does it have to
understand at all? MARVIN MINSKY: Well, I would ask
it questions like why can't you push something with a string? Anyone have a Google working? What does Google say
if you ask it that? Maybe it'll quote me. Or someone-- yeah? AUDIENCE: How would you
answer that question, like why you can pull, but not push? MARVIN MINSKY: I'd say,
well, it would buckle. And then they would say,
what do you mean by buckle? And then I'd say,
oh, it would fold up so that it got shorter without
exerting any force at the end. Or blah, blah. I don't know. There are lots of answers. How would you answer it? A physicist might say, if
you've got it really very, very, very straight, you could
push it with a string. But quantum mechanics
would say you can't. Yeah. AUDIENCE: I feel like if you-- like the [INAUDIBLE] or
like an interesting show would be like an
alternate cooking show or something
where you have to use an object that's like not normally
found to have that use. So like I want to paint a room,
but you're not given a brush. You're given like a sponge. Or people pull out like eggplants and want it painted purple. So it has to represent the thing
in a different way other than-- MARVIN MINSKY: Words. That's interesting. When I was in graduate school,
I took a course in knot theory. And, in fact, you
couldn't talk about them. And if anybody had
a question, they'd have to run up to the board. And, you know, they'd have
to do something like this. Is that a knot? No. No, that's just a loop. But if you were
restricted to words, it would take a half hour to-- that's interesting. Yeah? AUDIENCE: You mentioned solving
the string puzzle by imagining the result. And I think I heard someone else say, computers can do
that in some way. It can simulate a string. And we know enough
physics that you can give a reasonable
approximation of string. But I find that the question
that is often not asked in AI is-- or by computers-- is how does
one choose the correct model with which to answer questions? There's a lot of questions
we're really good at answering with computers. And some of them, we have
genetic algorithms they're good for, some of them based in
statistics, some of them formal logic, some of
them basic simulation. But this is all-- to me this is the core
question, because this is what people
decide, and no one seems to have ever
tackled an [INAUDIBLE].. MARVIN MINSKY:
Well, for instance, if somebody asks
the question, you have to make up a
biography of that person. So because the same question
from different people would get really
different answers. Why does a kettle make a
noise when the water boils? If you know that the other
person is a physicist, then it's easy to think
of things to say, but-- it's not a very good example. What's the context of that? In a human conversation,
how does each person know what to say next? AUDIENCE: I guess
one question is, how do people decide
what evidence to use to tackle a problem? And I guess, the more
fundamental question is, when people are
solving problems, how do they decide
how they're going to think about the problem? Are they going to think
about it by visualizing it? Think about it by
trying to [INAUDIBLE] Think about it by
analogy or formal logic? Of all the tools we have, why
do we pick the ones we do? MARVIN MINSKY: Yeah,
well, that goes back to if you make a list
of the 15 most common ways to think and somebody asks
you a question or asks, why does such and such happen,
how do you decide which of your ways to think about it? And I suspect that's
another knowledge base. So we have commonsense
knowledge about, you know, if you let go of
an object, it will fall. And then we have more
general knowledge about what happens
when an object falls. Why didn't it break? Well, it actually did. Because here's a little white
thing, which turned into dust. And so that's why I think
you need to have five or six or how many different
levels of representation. So as soon as somebody
asks a question, one part of your brain is
coming up with your first idea. Another part of your
brain is saying, is this a question about
physics or philosophy or is it a social question? Did this person ask it because
they actually want to know or they want to trap me? So I think you-- generally this idea of this-- there must be many kinds
of society of mind models that people have. And each person, whenever
you're talking to somebody, you choose some model of what
is this conversation about? Am I trying to accomplish
something by this discussion? Is it really an
interesting question? Do I not want to
offend the person or do I want to make
him go away forever? And little parts of your brain
are making all these decisions for you. I'd like to introduce Bob
Lawler, who's visiting. AUDIENCE: One of my favorite
stories about Feynman, it comes from asking
him to dinner one night. And I asked him how
he got to be so smart. And he said that when he
was an undergraduate here, he would consider every time
he was able to solve a problem, just the beginning step
of how to exploit that. And what he would
then do would be to try to reformulate
the problem in as many different
representations as he could. And then use his solution of
the first problem as a guide in working out alternate
representations and procedures in that. The consequence
according to him was that he became very
good at knowing which was the most fit
representation to use in solving any particular
problem that he encountered. And he said that that's where
his legendary capability in being so quick with good
solutions and good methods for solutions came from. So maybe a criteria for
an intelligent machine will be one that
had a number of-- 15 different ways of thinking
and applied them regularly to develop alternative
information about different methods
of problem solving. You would expect it then to
have some facility at choosing based on its experience. MARVIN MINSKY: Yeah, he
wrote something about-- because the other
physicists would argue about whether to
use Heisenberg matrices or Schrodinger's equation. And he thought he
was the only one who knew how to solve each problem
both ways, because most of the other
physicists would get very good at one or the other. He had another feature which
was that if you argued with him, sometimes he would say, oh,
you're right, I was wrong. Like he was once
arguing with Fredkin about could you have clocks
all over the universe that were synchronized. And the standard idea is you
couldn't because of relativity. And Fredkin said, well,
suppose you start out on Earth and you send a huge army of
little bacteria-sized clocks and send them through all
possible routes to every place and figure out and compensate
for all the accelerations they had experienced on the path. Then wouldn't you get a
synchronous time everywhere? And Feynman said, you're
right, I was wrong-- without blinking. He may have been wrong, but-- More questions? AUDIENCE: Along the same
line as his question about how do we know what method
to use for solving problems. Kind of curious how
we know what data set or what data to use
when solving a problem. Because we have so much
sensory information at any moment and so much
data we have from experience. But like when you get a
problem, you instantly-- and I guess k-line is sort
of a solution for that. But I'd be curious how you could
possibly represent good data relationships in a way that a
computer might be able to use. Because like right
now, the problem is that we always have
to very narrowly define a problem for a machine
to be able to solve it. But I feel like if
we could come up with good methods for
filtering massive data sets to identify what might be
relevant that doesn't involve like trial and error. MARVIN MINSKY: Yes,
so the thing must be that if you have a problem,
how do you characterize it? How do you think, what
kind of problem is this and what method is good
for that kind of problem? So I suppose that
people vary a lot. And it's a great question. That's what the critics do. They say what kind
of problem is this? How do I recognize this
particular predicament? And I wish there were
some psychologists who thought about that the way
Newell and Simon did, god, in the 1960s. That's 50 years ago. How many of you
have seen that book called Human Problem Solving? It's a big, thick book. And it's got all
sorts of chapters. That's the one I mentioned the
other day where they actually had some theories
of human problem solving and simulated this. They gave subjects problems
like this and said, we want you to figure out
what numbers those are. And they lied to the
subjects and said, this is an important kind
of problem in cryptography. The secret agents
need to know how to decode cryptograms of
this sort, where usually it's the other way around. The numbers stand for letters. And there's some
complicated coding. But these are simple cases. So you have to figure
out that sort of thing. And then the book
has various chapters on theories of how you recognize
different kinds of problems and select strategies. And, of course, some people
are better than others. And believe it or not, at MIT
there was almost a whole decade of psychologists here who
were studying the psychology of 5-person groups. Suppose you take five people
and put them in a room and give them problems like
this, or not the same cryptic, but little puzzles that require
some cleverness to solve. And you record it on video. They didn't have
video in those days. So it was actual film. And there's a whole
generation of publications about the social and cognitive
behavior of these little groups of people. They zeroed in on 5-person
groups for reasons I don't remember. But it turned out that
almost always the group divided into two competitive subgroups of two and three, and every now and
then they would reorganize. But it was more a study
in social relations than in cognitive psychology. But it's an interesting book. There must be contemporary
studies like that of how people cooperate. But I just haven't been
in that environment. Any of you taken a
psychology course recently? Not a one? Just wonder what's happened
to general psychology. I used to sit in on Tauber and a
couple of other lecturers here. And psychology, of
course, was sort of like 20% optical illusions. AUDIENCE: Yeah,
they still do that-- MARVIN MINSKY: Stuff like that. AUDIENCE: They also
concentrate a lot on development psychology. MARVIN MINSKY: Well,
that's nice to hear, because I don't
believe there was any of that in Tauber's class AUDIENCE: I think Professor
Gabrieli now teaches the introductory psychology. And he-- MARVIN MINSKY: Do they
still believe Piaget or do they think that he was wrong? AUDIENCE: I think they
probably take the same approach as with like Freud,
they would say great ideas and a revolution,
but they also don't think he's the end of the-- MARVIN MINSKY: Well, he got-- AUDIENCE: I know the
childhood development class, you read Piaget, his books. MARVIN MINSKY: Yeah. In Piaget later
years, he got algebra. And he wanted to be more
scientific and studied logic and few things like that
and became less scientific. It was sort of sad to-- I can imagine being
browbeaten by mathematicians, because they're the ones
who were getting published. And he only had-- how
many books did Piaget-- AUDIENCE: But if I may add
a comment about Piaget. It really comes from an old
friend of many of us, Seymour. As you know, he was, of
course, Piaget's mathematician for many years. MARVIN MINSKY: We got
people from Piaget's lab. AUDIENCE: But Seymour said
that he felt that Piaget's best work was his early
work, especially like building his case studies. And one time when
we were talking about the issue of focusing, from the AI lab, on the work in psychology here, Seymour said he felt that was less necessary than more of a concentration on AI, because he expected that in the future the study of the mind would separate into two individual studies, one much more biological, like the neurosciences of today, and the other focused more on the structure of knowledge and on representations-- in effect, the genetic epistemology of Piaget. Then he added something that became a quote later. And it was, "Even if Piaget's marvelous theory today proved to be wrong, he was sure that whatever replaced it would be a theory of the same sort, one of the development of
knowledge in all its changes." So I don't think people will get
away from Piaget however much they want. MARVIN MINSKY: I
don't think so either. I meant to introduce our visitor
here, because Bob Lawler here has reproduced a good many
of the kinds of studies that Piaget did in
the 1930s and '40s. And if you look
him up on the web-- you must have a few papers. AUDIENCE: I better tell you
what the website is, because it's still hidden from web crawlers. It's nlcsa.net. MARVIN MINSKY: That
would be hard to-- AUDIENCE: Natural Learning
Case Study Archive dot net. It's still in process,
still in development. But it's worth looking at. MARVIN MINSKY: How many
children did Piaget have? AUDIENCE: Well, Piaget
had three children-- MARVIN MINSKY: So did you-- AUDIENCE: Not in his study. But what he did
was to mix together the information from
all three studies and supported the ideas
with which he began. So it was illustrations
of his theories. MARVIN MINSKY: Anyway, Bob,
has quite a lot of studies about how his children developed
concepts of number and geometry and things like that. And I don't know of
anyone else since Piaget who has continued to do
those sorts of experiments. There were quite a lot at
Piaget's institute in Geneva for some years after
Piaget was gone. But I think it's pretty
much closed now, isn't it? AUDIENCE: Well, the
last psychologist Piaget hired was Jacques Benesch, who is
no longer at the university. He retired. And it has been taken over
by the neo-Piagetians, who are doing something different. MARVIN MINSKY: Is
there any other place? Well, there was Yoichi's
lab on children in Japan. AUDIENCE: There are many
people who take Piaget seriously in this country and others. AUDIENCE: So Robert
mentioned that Feynman had more representations of the
world than most people. When I talked about
Einstein and the glial cells, I referred to that
because I believe that k-lines are our way
of representing the world. And maybe Einstein had better
ways of representing the world. And I believe that, for
example, agents as resources are not different
from Turing machines. You can create a
very simple Turing machine that acts
like an agent, and you have some mental states. But there is, I believe, no good
way of representing the world and updating that
representation. It seems to me
that when you grow up, you are learning
how to represent the world better and better. And you have some layers. And that's all k-lines. And if glial cells are
actually related to k-lines, it means that Einstein had
better hardware for representing the world. And that's why he would be
smarter than other people.
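To make the k-line idea concrete: one common reading is that a k-line records which agents were active in a mental state, so that turning the k-line on later re-creates that state. Here is a minimal sketch in Python; the agent names and the exact structure are invented for illustration, not taken from the lecture.

    # Sketch of agents and k-lines (illustrative only; names are invented).
    class Agent:
        """A simple resource: a named unit that is either active or not."""
        def __init__(self, name):
            self.name = name
            self.active = False

    class KLine:
        """Remembers which agents were active, so the state can be re-created."""
        def __init__(self, agents):
            self.members = [a for a in agents if a.active]

        def reactivate(self):
            # Turning the k-line "on" turns its remembered agents back on.
            for a in self.members:
                a.active = True

    # Build a tiny society and record a moment of mental state.
    grasp, look, reach = Agent("grasp"), Agent("look"), Agent("reach")
    look.active = True
    reach.active = True
    memory_of_reaching = KLine([grasp, look, reach])

    # Later: clear everything, then turn the k-line back on.
    for a in (grasp, look, reach):
        a.active = False
    memory_of_reaching.reactivate()
    print([a.name for a in (grasp, look, reach) if a.active])  # ['look', 'reach']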
MARVIN MINSKY: Well, it's hard to-- I'm sure that that's
right that you have a certain
amount of hardware, but you can
reconfigure some of it. Nobody really knows. But some brain centers may
have only a few neurons. And maybe there's some
retrograde signals. So that if two brain centers
are simultaneously activated, then usually the signals
only go one way, from one to the other. They have to go through a
third one to get back. But it could be that the brain-- that the neurons have the property
that if two centers are activated, maybe that
causes more connections to be made between them that
can then be programmed more. I don't think anybody really
has a clear idea of whether you can grow new connections
between brain centers that are far apart. Does anybody know? Is there anything-- AUDIENCE: It used to
be common knowledge that there was no such
thing as adult neurogenesis. And now it is known
that it exists in certain limited
regions of the brain. So in the future,
it may be known that it exists everywhere. MARVIN MINSKY: Right. Or else that those
experiments were wrong. And they were in a frog
rather than a person. AUDIENCE: Lettvin
claimed that you could take a frog's brain
out and stick it in backwards and pretty soon it would
behave just like it used to. MARVIN MINSKY: Lettvin said? AUDIENCE: Yeah. Of course. I don't know if he
was kidding or not. You never could tell. MARVIN MINSKY: You could never
tell when he was kidding. Lettvin was a
neuroscientist here who was sort of one of the
great all-time neuroscientists. He was also one of
the first scientists to use transistors for
biological purposes and made circuits that are
still used in every laboratory. So he was a very
colorful figure. And everyone should read
some of his older papers. I don't know that there
were any recent ones. But he had an army of students. And he was extremely funny. What else? AUDIENCE: So continuing
on the idea of hardware versus software, what do
you think about the idea that intelligence--or humans--may
need strong instincts when they're born--
hence the interplay between
their instincts, like they know to cry
when they're hungry or to look for their mother. They need these
instincts in order to develop higher
orders of knowledge. MARVIN MINSKY: You'd have to
ask L. Ron Hubbard for-- I don't recall any
real attempts to-- I don't think I've ever
run across anybody claiming to have correlations
between prenatal experience and the development
of intelligence. AUDIENCE: That's not
what I'm talking about. I'm talking about
before intelligence is developed--
before you learn
language, you need to have a motivation
to do something. So you need to have instincts,
instinctual reactions to things. Like traditional
experience with knowledge after you're born, you-- MARVIN MINSKY: Well,
children learn language, you know, at 12 to 18 months. Are you saying that
they need some preparation? I'm not sure what you're asking. AUDIENCE: So think of it from
an engineering point of view. If you were to build a
robot, what you need to program
is some instincts, some rule-of-thumb algorithms, in order to get
it started in the world and to build
experiential knowledge. MARVIN MINSKY: You might
want to build something like a difference engine, so
that you can represent a goal and it will try to achieve it. So you need some engine for producing any behavior at all.
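A difference engine, in this sense, holds a description of a goal, compares it to the current situation, and applies whatever method it knows for reducing each remaining difference. Here is a minimal sketch in Python; the state representation and the methods are invented for illustration:

    # Sketch of a difference engine (means-ends analysis); the state keys
    # and methods here are made up for this example.
    def difference_engine(state, goal, methods):
        while True:
            diffs = [k for k in goal if state.get(k) != goal[k]]
            if not diffs:
                return state  # no differences left: goal achieved
            key = diffs[0]
            if key not in methods:
                raise RuntimeError("no method for difference: " + key)
            state = methods[key](state, goal[key])  # reduce one difference

    # Toy example: get a block onto a shelf, upright.
    methods = {
        "place":   lambda s, v: dict(s, place=v),    # a "move" method
        "upright": lambda s, v: dict(s, upright=v),  # a "rotate" method
    }
    start = {"place": "table", "upright": False}
    goal = {"place": "shelf", "upright": True}
    print(difference_engine(start, goal, methods))
    # {'place': 'shelf', 'upright': True}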
AUDIENCE: Right. So if you take the approach
that maybe to build an AI, you should build
an infant robot and then teach it as you
would a human child. Then would it be
useful to make it dependent on
some other figure in order to help it learn how
to do things like a human child would? MARVIN MINSKY: Well,
in order to learn, you have to learn
from something. And one way to learn is in
isolation, just to have some-- you could build in a goal to
predict what will happen. And,
as Alan Kay once put it, the best way to predict
the future is to invent it.
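One way to read "a goal to predict what will happen": the system keeps a model of what follows what, predicts, and revises the model whenever the world surprises it. A minimal sketch in Python; the toy "world" sequence and the table-based model are invented for illustration:

    # Sketch of learning by prediction (illustrative setup).
    transitions = {}  # model: last seen successor of each state

    def observe(prev, current):
        predicted = transitions.get(prev)
        if predicted != current:         # surprise: the world disagreed
            transitions[prev] = current  # revise the model
        return predicted

    world = ["wake", "eat", "play", "eat", "play", "sleep"]
    for a, b in zip(world, world[1:]):
        observe(a, b)
    print(transitions["eat"])  # 'play'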
So you could make a-- or you could put a model of an adult
in it to start with, so that-- in other words, one way
to make a very smart child is to copy its mother's brain
into a little sub-brain when it's born. And then it could
learn from that instead of depending on anybody else. I'm not sure-- you have
to start with something. Of course, humans, as Bob
mentioned or someone mentioned, if you take a human
baby and isolate it, it looks like it won't develop
language by itself, because-- I don't know what because. In fact, I remember
one of our children who was just learning to talk. And something came up, and she
said, "What because is that?" Do you remember? It took a while to
get her to say why. She would come up
and say "what because." And I would say, you're
asking "why did this." After a long time
she got the hint. But-- why do all w-h
words start with w-h? AUDIENCE: One of them doesn't-- how. MARVIN MINSKY:
Could you say whow? How. Is there a theory? AUDIENCE: Not that I know of. MARVIN MINSKY:
It's a basic sound signaling that you're making
a query before you can do the rising inflection. It's interesting. Is it true in French? Quoi? The land of the silent letter. Anybody know what's the
equivalent of w-h words in your native language? AUDIENCE: N. MARVIN MINSKY: What? AUDIENCE: N. MARVIN MINSKY: N? AUDIENCE: Yeah. MARVIN MINSKY: In what? AUDIENCE: Turkish. MARVIN MINSKY: Really? They all start with n? Wow. Interesting. Maybe the infants have
an effect on something. Do questions in Turkish
end with a rise? AUDIENCE: Yeah. So only the relevant
w-h questions-- OK, all questions end in
kind of an inflection. But normally, you have a
kind of little word that you would put at
the end of any sentence to make it into a question,
except for the w-h questions, which are standalone ones. You don't need it. MARVIN MINSKY: Yes, you'd
say, "This is expensive?" They don't need the w-h
if you do enough of that. Huh. So the question is, is that
in the brain at birth? AUDIENCE: Is that pattern
mirrored in English, where you can say, "Is this expensive?" But you can say
"How expensive is this?" without that rising intonation. The intonation mirrors using
the separate word, but you don't need that separate
word if it's a w-h word. AUDIENCE: But if you're
saying how expensive is this without the
question inflection, it almost sounds
like you're making a statement about just how
ridiculously expensive it is. Like you're going,
how expensive is this versus how expensive is this? MARVIN MINSKY: Well,
I should let you go.