INTERVIEWER: Marvin Minsky,
when did the idea of artificial intelligence present
itself to you as a young person growing up? MINSKY: I think the first time
I started to think about that was when I was an undergraduate
at Harvard and I was looking through Widener
Library and I ran across a big thick book called 'Mathematical
Biophysics'. And now we're talking the late
1940's and there weren't such words around. I opened it and it was full of
strange little articles edited by a prodigious guy, Nicolas
Rashevsky, at the University of Illinois, I think,
maybe Chicago. And it had chapters about
theories of how cells might divide and how populations
grow. And maybe 40 or 50
little chapters. And one of the chapters was
about simulated neural networks by McCulloch
and Pitts in 1941. And I had been curious about
psychology because-- that's a long story, but I was trying to
decide what to do and one thing that seemed interesting
to do was mathematics. But there were other people who
were good at mathematics, very good, and in mathematics
there's no point in being second best because it's
different from other fields. And I was interested in biology
and there seemed to be people pretty good at that. And chemistry, I had a
professor, Louis Fieser, and it looked like that
was under control. And then there was psychology
and as far as I could see there wasn't anyone good at
that except maybe Sigmund Freud 50 years before. But what to do about it because
people didn't seem to have any theories of how
thinking worked. And here was this strange paper
with ideas that in fact were completely new about finite
state machines and things like that. And I got very excited. INTERVIEWER: Is it fair to say
that at that moment you glimpsed an enormous field that
would eventually open up and become an almost uncountable
number of disciplines? MINSKY: I don't think I saw
where this field could go, but I had been reading psychology
and I did see that nobody had a theory, for example, about
how learning works. They had philosophical theories:
well you have ideas and somehow they get connected
and when something happens you make a new idea and you put it
somewhere in your mind and later you fish it out. And no one had any good theory--
excuse me-- there didn't seem to be any theories
of how this knowledge could be represented or retrieved or how
you could rub two of them together and get a third one. And I could see in
McCulloch-Pitts ideas, which were just little switches
connected to each other-- that there was a way that perhaps
information or knowledge could be represented. And the paper was
in three parts. And I couldn't understand
the third one. And after a long time I decided
that whatever it was, it must be wrong. And that's very important
because that gives you something to do. And I worked on various ways
to fix it and I couldn't. It finally got fixed much
later in 1956 by a mathematician named Stephen
Kleene who also read the paper and said this doesn't seem right
and he knew exactly what to do about it. And incidentally, his theory
of how to represent finite state machines in switches and
so forth was exactly the same as another theory that
two professors-- I'm trying to remember their
names-- at MIT had made for calculating the impedance of a complicated electrical circuit. And I don't know if anyone had
noticed that but these two theories-- one is called Regular
Expressions by Stephen Kleene and his theory was so
nice and elegant that it's used in all search engines today
in almost exactly the form that he invented. And Mason and Zimmerman-- I think they were the professors--
and the signal flow graphs are used everywhere for calculating
how electrical circuits would work. But those are 10 years apart
in my experience. Interesting to see two theories
in completely different fields that are
exactly the same.
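(A minimal illustration, in Python, of the regular expressions just mentioned; the pattern and sample strings are invented for this aside, not taken from the interview:)

```python
import re

# A Kleene-style regular expression, as still used in search and text tools.
# "neurons?" means "neuron" optionally followed by "s" -- equivalent, under
# the hood, to a small finite-state machine of the kind Kleene connected to
# the McCulloch-Pitts nets.
pattern = re.compile(r"neurons?")

text = "McCulloch and Pitts modeled neurons as simple switches."
print(pattern.search(text).group())                  # -> "neurons"
print(pattern.findall("one neuron, many neurons"))   # -> ['neuron', 'neurons']
```

INTERVIEWER: In your childhood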
growing up-- let's just step back for a bit-- what encouraged
you to think that you were capable of solving
problems or at least seeing insights into existing problems
that no one else saw? What kinds of experiences
did you have as a child? MINSKY: I think when I was
a child I didn't have the feeling that I could solve
problems that other people could solve. On the contrary, I found things
were quite difficult. And when I tried to read
mathematics it would take an hour a page and I'd get some of
the ideas but not others. And usually it would be six
months later that suddenly it would click. And so I think I thought of
myself as sort of slow. On the other hand, I thought
of everyone else as incredibly slow. But I didn't think of myself
as particularly creative. And it's just-- but I never
grew up in some sense. And as far as I can tell, I've
been getting better at things slowly and steadily. It was only when I was older
that I noticed that most people work on something, they
do something wonderful, and then they get stuck. And I started to make theories
of why do people get stuck and how to avoid it. And the best thing is if you've
done something, you should be ashamed of it instead
of proud of it. And I notice a lot of people
keep saying, well I thought of that a long time ago and that
sort of thing and they keep trying to get recognition
and why bother? INTERVIEWER: You went to Bronx
Science and Andover and then you went to Harvard? MINSKY: Right. INTERVIEWER: Which of those
institutions was more liberating in terms of your
being able to think broadly about subjects as opposed
to just learning what was given to you? MINSKY: I think I was incredibly
lucky all my life because when I went
in grade school I had interesting teachers. And a school for unusual students had then opened up in New York, called the Speyer School. And it had lots of
unusual children. And so in that
environment I was with people who were more or less my own
age but who knew a lot. And I've had the good fortune
always to be in that situation by one accident or
maybe my father's clever planning or something. And then I went to Fieldston
school in New York-- Ethical Culture School, which
was a wonderfully smart place with extraordinary teachers. In fourth grade I had somebody
who saw that I had read a chemistry book and he gave me
the laboratory, and I was allowed to synthesize
things and so forth. And I had some friends there who
I'm still in contact with because they were
good thinkers. Then the High School of
Science was a miraculous place because most of the high school teachers were PhD refugees. Because we're talking about the
early 1940s, and all the smart people in the world who
didn't get killed came to America, as far as
I could see. In fact, when I got to Harvard
the first thing was, gee these kids are not nearly as smart as
the ones at the High School of Science. And they're finding these
courses hard and it seemed like a step back. Andover, I was there just for
a year and I had a calculus teacher who was also the
wrestling teacher and that was pretty good. But most of the year was
exceedingly-- and a great English teacher, Dudley Fitts,
who translated plays from Greek and back-- but the
kids were mostly jocks of various sorts. But then Harvard again was a
world of good fortune because I met a great mathematician
shortly after I got there named Andrew Gleason. And I didn't know it at the
time, or maybe he didn't either, but he was one
of the greatest mathematicians in the world. And the math department had 10
or 20 people, each of whom had created some field
and so forth. I met a great psychologist,
young assistant professor named George Miller, who is now
recognized as one of the pioneers of cognitive
psychology. And when I met him, he knew I
had read this McCulloch-Pitts paper and he said he couldn't
understand chapter three also. And I said, well don't
worry about it, it looks like it's wrong. And we became great friends
because no one else could-- and in fact, when I did invent
a learning machine, George Miller-- first he gave me a
laboratory in the psychology department when I was
an undergraduate. And then he got money from the
Air Force or somewhere to actually build this
machine and so forth. So when I was an undergraduate
I actually had a couple of laboratories. Another professor, Welsh, John
Welsh, when I told him I was interested in neurons, he gave
me a big laboratory because Harvard had just built this
gigantic new biology building with hundreds of rooms. And it had been designed with
some foresight so that it was more than twice as large
as anyone needed. And I happened to go there and
say, I would like to do some experiments. Somebody said, well here's
this suite of rooms. Which included a black
photographic laboratory where you could go in and experience
sensory deprivation. And all sorts of equipment. And so I got to-- I was interested in the
inhibitory nerve of the crayfish claw. It turns out this is
a wonderful animal. Welsh recommended it because the
nerves in this thing are so big that you can see them. And if you have a magnifying
glass, you can really see them. And you can move them around
with the tweezers and connect them to your alligator clamp. And here you're doing
neurophysiology with a screwdriver. And the crayfish doesn't seem to
mind having its claw snipped off because it snaps off. It has a detachable joint. And it sits in its tank for the
rest of the year and grows another one. So I didn't feel any ethical
problems about this wonderful animal. INTERVIEWER: Did some of that
work-- did some of that work motivate your development
of optical devices? MINSKY: Well maybe the most
important experience I had was meeting Warren McCulloch. And Warren McCulloch was a
philosopher and physiologist and great poet. I think maybe a hundred years
from now, he will be seen as one of the great philosophers
of the 20th century. At the time, he was sort of well known in the early days of cybernetics-- Norbert
Wiener and gadgets. But he's been forgotten. And he would look at a problem
and think of some new way to do something with it. For one thing, he was, I believe, the first person to invent circuits that would still work if you
break any part of them. How do you make a reliable
circuit that's redundant enough that it will correct
some errors at least? And he wasn't good at
mathematics but he just worked out all the possible simple
examples of this and found one which was self-repairing. Did many things like that. And I followed him around for
a couple of years and I think the reason I developed so well
in this field is that I didn't listen so much to what Andrew
Gleason said when he showed me how to prove some theorem or
what McCulloch said about his particular theory but I was
always asking, how did he think of that? And sometimes I'd ask him and
he'd tell me some wrong theory because nobody knows how
they think of things. But the idea is that what students
should learn from their teachers is how
they work, not the subject they're teaching. And I think it was just a
great accident that I encountered this Warren
McCulloch, who was interested in that as much as-- he was
interested in how he would think and tried to explain it. And even today, which is 40
years later, sometimes when I'm stuck writing something,
for example, I can hear his voice saying, oh that's too
pretentious or that's not pretentious enough. I think I've accumulated a
cloud of these people. There are four or five people
that I worked with for several years and whenever I'm stuck, I
can hear Oliver Selfridge or Dick Feynman or Andy Gleason
saying, oh you're wasting your time, why didn't you look
at it this way? It's almost as though I'd made
little copies of these guys. INTERVIEWER: So there's a
Richard Feynman in your brain? MINSKY: There's a little Richard
Feynman and there's a Theodore Sturgeon, the science
fiction writer who I tracked around with. Because his science wasn't very
good but his intuitions about it were good, and he could
write these wonderful things and create
these images. And I just wanted to know
how to do that. Never got very good at it but
I can sometimes say, what would Theodore Sturgeon say or
what would Isaac Asimov say? I have about ten of these
characters that I can exploit. INTERVIEWER: What did you want
to learn from Richard Feynman? MINSKY: How he got such good
examples of things and then made theories. INTERVIEWER: What period
did you work with him? MINSKY: I think I had met him
in the '70s, early '70s. It was interesting. I was traveling around Los
Angeles with a friend of mine, Edward Fredkin, who was an
incredibly innovative thinker. Ed has discovered all sorts of
little theories of things. He started the first company
that did image processing and word processing and that
sort of thing. And he was one of three or four
people I've known who were never afraid
to do something. Normally when you say let's do
this or that, somebody will say, well that would be very
hard and so forth. But there are three or four
people that I've known-- John McCarthy, Oliver Selfridge,
Feynman-- if you think of doing something
he'll say, let's do it. And then we'll do it the
next day or right away. And usually, to get anything done, you have to convince a lot
of people and make a plan and so forth. But I've worked with a few of
these people who say, well if that's a good idea maybe,
we can do it tonight instead of next year. So until the 1980's, I never
wrote a proposal. I just was always in the
environment where there would be somebody like Jerry
Wiesner of MIT. John Mccarthy and I had started
working on artificial intelligence in about 1958, or
'59, when we both came to MIT. And we had a couple of students
working on it. And Jerry Wiesner came by once
and said, how are you doing? And we said, we're doing fine,
but it would be nice if we could support three or four
more graduate students. And he said, well go over and
see Henry Zimmerman-- or I think it was Zimmerman-- and
say I said that he should give you a lab. And two days later we had
this little lab of three or four rooms. And a large pile of money which
IBM had given to MIT for the advancement of computer
science and nobody knew what to do with it. So they gave it to us. And-- INTERVIEWER: Not a bad move. MINSKY: Right. And for many years that
kept happening. We'd think of something
to do and-- I had a great teacher in
college, Joe Licklider. Licklider and Miller were
assistant professors when I was an undergraduate. And we did a lot of little
experiments together. And then about 1962, I think,
Licklider went to Washington to start a new advanced research
project that was called ARPA. Just started up. And he said to them, well there
some people at MIT who have all sorts of nice ideas. One of our students have built
a good robot, another had built a machine that showed
promise of doing some good vision research,
Larry Roberts. And then Larry Roberts also had
this idea of an intranet. There were a few-- I mean that idea came from several people. But Licklider got him and
Ivan Sutherland to come to Washington to help run
this department. So now I was in this situation
that Licklider had sent a big budget to MIT to do
time sharing. Which had been invented by John
McCarthy and Ed Fredkin and a few other people. Also it'd been invented in
England about the same time. INTERVIEWER: Time sharing
computing that was different than batch processing? MINSKY: Yes, using a computer
with multiple terminals. In fact, we went to visit IBM
about that. And we went to Bell Labs also to suggest that they work on that. And the research director
at IBM thought that was a really bad idea. We explained the idea, which
is that each time somebody presses a key on a terminal it
would interrupt the program that the computer was running
and switch over to the program that was
not running for this particular person. And if you had 10 people typing
on these terminals at five or 10 characters a second
that would mean the poor computer was being interrupted
100 times per second to switch programs.
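(A minimal sketch of the scheme being described, in modern Python rather than 1960s hardware; the users, typing rates, and bookkeeping are invented for illustration:)

```python
import random

# Toy model of time sharing: each keystroke "interrupts" the machine, which
# switches to the program of whoever typed and runs it briefly.
USERS = [f"user{i}" for i in range(10)]
work_done = {u: 0 for u in USERS}

def keystroke_interrupt(user):
    # context switch to this user's program, do a little work, switch back
    work_done[user] += 1

# 10 people typing 5-10 characters a second is on the order of 100 interrupts
# per second, which is the figure in the passage above.
for second in range(5):
    for _ in range(100):
        keystroke_interrupt(random.choice(USERS))

print(work_done)   # every user's program has made progress, interleaved
```

And this research director said,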
well why would you want to do that? We would say, well it takes six
months to develop a program because you run a batch and
then it doesn't work. And you get the results back
and you see it stopped at instruction 94. And you figure out why. And then you punch a new deck of
cards and put it in and the next day you try again. Whereas with time sharing you
could correct it-- you could change this instruction right
now and try it again. And so in one day you
could do 50 of these instead of taking 100 days. And he said, well
that's terrible. Why don't people just think more
carefully and write the program so they're
not full of bugs? And so IBM didn't get into time
sharing until after MIT successfully made
such systems. And years later I had a sudden
flash of really what was bothering this research
director. I think he said, well if
somebody interrupted me 100 times per second, I would
never get anything done. But-- INTERVIEWER: Identifying an
important difference between humans and computers that IBM
apparently didn't fully grasp. MINSKY: Right. And making computers easier to
use I suppose wasn't in their business interest anyway. Making them solve problems
faster, that is. I'm sure making them
easy to use was. But that came from-- none of the
large companies ever did very much for computers. It was all hackers here and
there and their ideas gradually filtered up. INTERVIEWER: Two questions. You described that your calculus
teacher at Andover was also the wrestling coach? MINSKY: Yes. INTERVIEWER: Did you try out
for the wrestling team? MINSKY: I was in wrestling class
and it was surprisingly interesting. But one day-- in fact I got to
be where I thought I was pretty good-- but I was in the class
of people from some weight up to 137. And then one day they weighed us
and he said, well you weigh 138 so now you have to be in
this class-- the next class-- which is from 138 to
147 or something. And then I was the worst one. And I decided there was
nothing to this skill. And generally I developed an
attitude toward sports which is that there's absolutely
no point to it. The people who are good at it
are maybe 2% faster in some reflexes and they're a lot
slower in others like worrying about whether they're
going to get hurt. And there's just no point. And when I see 20,000 in a
stadium watching them, my first thought or last thought
is, why don't they hire one very good critic
to watch them? What's all-- it's a waste of
time to have 20,000 people having mediocre thoughts
about it. Why don't they just hire someone
who will evaluate it. And basketball is the best
example because it isn't even statistically significant. If you see a score like 103 to
97, that's less than one sigma and one shouldn't regard that
as a victory at all. So it's very unscientific.
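(For what it's worth, a rough check of that "one sigma" remark, under my own simplifying assumption that each team's score behaves roughly like a Poisson count:)

```python
import math

# If each team's score is roughly Poisson with mean ~100, the standard
# deviation of the score difference is about sqrt(103 + 97) ~= 14 points.
home, away = 103, 97
sigma = math.sqrt(home + away)
print(f"difference = {home - away}, sigma of difference ~ {sigma:.1f}")
print(f"{(home - away) / sigma:.2f} sigma")   # ~0.42 -> well under one sigma
```

INTERVIEWER: Right. And I think it's safe to say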
these ideas here are probably unlikely to catch on
in any big way. MINSKY: Well eventually they'll
catch on, but it might be a couple of hundred
years before the culture has adapted. INTERVIEWER: Describe
how you came to MIT. What motivated you or what
reputation about MIT made this place a home for you? MINSKY: Oh. Well when-- as I said, I
went to Harvard as an undergraduate. And it was wonderful. And I had a neurology lab
and a psychology lab and that was great. Most students never got that. I happened to be at the right
place at the right time. And I was majoring in physics
for a while and my grades weren't particularly good. So I thought I should
make up for that by writing a good thesis. And it turned out you couldn't
write a thesis in physics. They just didn't have
a bachelor's thesis. So Gleason said-- my
mathematician friend-- said, why don't you just switch
to the math department? You can write a thesis there. So I wrote a nice thesis about
fixed points on spheres. It was pretty exotic. I sort of-- there was an
unsolved problem and I solved half of it. Got some really striking
results. INTERVIEWER: What
was the problem? MINSKY: The problem was: it
was known that if you have three points on a sphere--
suppose you have a sphere. It's like the earth: every point has an altitude. It turns out that if you take
three points around the equator, equally spaced, there's
some place you can put them where the altitude of all
three points will be equal. In other words, if
you had a three-- Well, that's the theorem. It was proved by a Professor
Kakutani at Yale. But it was only true for three
points on a great circle. And it seemed to me this ought
to be true for any triangle. That is, you ought to be able to
put it somewhere and rotate it so that-- like a three-legged
stool-- so it would stand straight up.
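(A formal restatement of the claim as Minsky describes it here, with the "altitude" written as a continuous function on the sphere; this paraphrase is mine, and the exact hypotheses in Kakutani's and Dyson's papers may differ:)

```latex
% f is a continuous "altitude" function on the sphere; v1, v2, v3 are the
% vertices of a fixed triangle inscribed in a great circle. The claim is that
% some rotation places the triangle where all three altitudes agree.
\[
  f : S^2 \to \mathbb{R} \ \text{continuous}
  \;\Longrightarrow\;
  \exists\, R \in SO(3) : \; f(Rv_1) = f(Rv_2) = f(Rv_3).
\]
```

I couldn't quite prove that, but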
I proved that it was true for several different
shapes of triangles. And then I got stuck and a
couple of years later, this strange person I never heard of
named Freeman Dyson proved the thing in general. And he sent me this paper and I
didn't believe anyone could possibly be that smart. Because it was just
so strange. But anyway, Gleason said I
should just switch to major in mathematics. And I could write a
thesis and I did. And then I said, this is-- and
I was a senior-- so I said, well I'd like to stay here. And Gleason said, no. You should not stay here. You've been here four years and
you've observed a lot of what we have to teach
you here and now you must go to Princeton. So I felt very rejected. It turns out Princeton was the
other place that had the other half of the great-- well, they
were all over the place, but Princeton had another
full set of great mathematicians who had-- Sure. Von Neumann. Von Neumann who became
my thesis adviser. Not quite my thesis adviser. But anyway, I felt rejected
but I said, well, okay. And I went to Princeton. INTERVIEWER: Godel. MINSKY: There was Princeton
and Godel. I had lunch with Godel once. He was wearing gloves because
he was afraid of germs. Einstein, who I couldn't
understand because I wasn't used to German accents. Anyway, that was
a great place. And then I met Oliver Selfridge
who was this pioneering researcher who had
known McCulloch and Pitts. In fact, he had been Walter
Pitts' roommate. And he was at Lincoln Lab and
he invited me to join his research group there who were
inventing all sorts of things. And then I got a message from
the MIT math department inviting me to come and
be a professor. So all this happened. I was just being
pushed around. I never actually made
any long range plans or applied for anything. INTERVIEWER: I suspect that when
future historians look at the significance of your work,
random is probably not a word they would use to describe
your achievements and the places you've gone and the
things that you've done, even though you sort of naughtily
describe it as random. MINSKY: Well I was always trying
to find the simplest way to solve a problem. And I think that could lead
anywhere because nobody knows what the simplest solution is. INTERVIEWER: What was the
opportunity at MIT when you arrived in the math
department. What did you see needed
to be done, and were there rooms and labs? You described what Wiesner got
you there right off the bat, but what sort of things were
yours for the taking? MINSKY: Well, of course
they did. When I arrived at MIT, McCarthy
I think had been there for a year. And he already had laid the
groundwork for catching students and potentially good
mathematicians and converting them into computer scientists. Computer science was just
growing in the-- when you're talking about 1960, there weren't very many theories at all. And today I'd say that computer
science is a whole new area of science that was never even imagined, except by a few pioneers like Godel
and Turing and Post. A handful of people had had
visions that there would be something like mathematics for
complicated processes. Mathematics itself is
really only good for very simple things. Because if you have 10 or 20
equations of different kinds, there's nothing you can do.
two equations and study them very thoroughly and build great
towers of theories about those things. But if there are ten different
things interacting mathematics is helpless. Computers are helpless at
understanding them but in some sense you can-- they let you
experiment with things you could never do by hand
or in your head. And so then you could discover
phenomena and then simplify things down to see, well
where does this new phenomenon come from? And what part of the
system caused it? And progress comes from taking
a complicated thing with behavior you can't understand
and gradually breaking it down. Of course some things no one's
ever broken down, and we don't understand them. INTERVIEWER: Is it fair to say
in the beginning before your artificial intelligence lab got
started, that the sort of high road of mathematics was algebra and geometry and the sort of abstract, complex mathematics? And that computer science was viewed as more of an applied science, more of an engineering kind of low road? MINSKY: Yes. Applied mathematics was not
very-- was not filled with so many great ideas as pure
mathematics, which took very simple sets of axioms
or assumptions and built huge towers. In fact when I was in graduate
school the most exciting thing in the world was this-- to me--
was this field called algebraic topology. Topology is the principles
of geometry where you don't actually care about
the shapes of things but just the properties of the shapes. Like are all the parts connected
in a simple way or are there holes in it or is it
twisted or things like that. And strangely, the hardest
problem-- it was a hundred year old problem practically--
was this: suppose you have a plane, a two-dimensional surface,
and you draw a curve that never crosses itself
but it closes. So in topology that's considered
a circle because you just care about how it's
connected and you don't care about its shape. Well, everybody knows that if you do that, then there's exactly one inside and one outside: it divides the plane into those two regions, not three. And no one had ever proved that. The first proof was
around 1935. So this was called the
Jordan curve theorem. I'm not sure whether Jordan had
the first solution or was the first one to state
the problem clearly. And it's sort of obvious
that it's true. And the reason why it's hard
is this: what if the curve isn't really smooth but wiggles a lot? If it wiggles an infinite number
of times before it gets here, maybe there could be some
little part of the plane that you sort of almost
outlined. That's a very strange-- there
is a-- in three dimensions there is a strange phenomenon
that I think is called an Antoine set. There was a mathematician named
Antoine who discovered this very simple example. Imagine a regular chain--
bicycle chain-- not a bicycle. Yeah. Anyway, you have a
link like this. So here's two links. So now let's have another link
and another link and that's called a chain. Now close it by putting in a last link, so now you have a ring of links. Now if you put a string through
the middle of that, you can't get it out. There's nothing stopping it. But you bump into this chain and
well you could try to push it through but that
wouldn't help. And so here's an Antoine set. Each of these links does not
divide space up very much. But it does have the property
that, if something's going through it, you can't
get it out. Because it hits the walls. Now here's this chain where
there are a lot of links, and none of them are touching
each other. So that shouldn't make much
difference, should it, if they're not touching
each other. And yet, if you put a string
through-- if you had one link here and a string you could
just go around it. With this Antoine chain-- a regular chain that is closed-- if you have a string through it, you still can't get it out. But it's not being stopped by
any particular link. I mean, it hits this link
so you go here. So what's that? Somehow this says that in three
dimensions it's dividing space up in some queer way. And there's nothing like
it in two dimensions. In two dimensions, if you have a-- you can't have a lot of little circles do that, because you
can go around them. In three dimensions you can. In four dimensions it's much
more complicated and nobody knows what happens really. So I got fascinated with that. And this is a long story but
after awhile I finally understood a proof by
a Czech named Cech-- C-e-c-h-- of the Jordan
curve theorem. And of all the experiences I can
remember in mathematics, my feeling of accomplishment was
greater from understanding Cech's proof than from anything
I ever proved myself. It is a strange thing, but it
was that this proof occurs in a sort of infinite space of
things all messing around. And step by step he shows
that something happens. And so here's a case-- it's
like appreciating a Shakespeare play when you can't
write your own, but you still might say, oh I understood
something about this play that nobody else did
or something like that. It's not that I created it. So that's always bothered me
that if I do something myself and other people admire it, I
regard it as, well, I couldn't have
it was obvious. So that's another way
not to get stuck. If you have a theory and it
turns out to be wrong, great. Now is a chance to do
something better. And you run into people who
don't like their theories being proved wrong. INTERVIEWER: Not a
problem for you. MINSKY: Well, no, because
then now you've got another problem to solve. Can I make a theory that
includes this and the exceptions to it and so forth. INTERVIEWER: How did-- you
mentioned that Wiesner was the sort of impetus behind the formation of your lab with McCarthy. What happened in the beginning
and how did that become a mission to really specify
what's going on in the human brain? MINSKY: Well when I built this
learning machine-- that had really started before-- this
was a machine that made connections between things. If you would give it a little
problem-- usually the problem was something like a little rat,
simulated rat, in a maze and if it managed to make the
right turns to get the cheese, then you reward the machine. And it changed the probability
that it would take the-- it would increase the probability
of taking the same paths again. And so it did in fact learn to
solve some simple problems. But after a while, I could see
that it wasn't going to understand how it solved them. So it couldn't-- what was
missing is it could accumulate conditioned reflexes all right,
but it couldn't say, oh I've learned a lot about this
and I still haven't done this and what's the reason? And I might have gotten stuck
with this machine for a long time except another friend of
mine named Ray Solomonoff had found a different theory of how
to take a bunch of data and make good generalizations
from it. Because you can think of any
theory of learning as asking: you've had a lot of experiences; is there a sort of higher-level, simple thing that they're all examples of? And Ray Solomonoff invented
this other way of making generalizations. And I instantly said this is
much better than everything the psychologists have done
since Pavlov and Watson and so forth in 1900. And so suddenly seeing this new
idea of Ray Solomonoff-- which also occurred to a Russian
named Kolmogorov about the same time and later
independently a guy named Gregory Chaitin-- I think he
was in Argentina-- but when I first saw this new idea by Ray
Solomonoff I got the idea that everything in psychology was
too low level and couldn't handle the right kinds
of abstractions. Ray Solomonoff's theory said if
something happens you must make a lot of descriptions of
it and see which description is shortest because it must
have the most significant abstractions in it. Like if you could have a short
description that gives us the same results as a long one it
must be better or whatever.
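(A crude illustration of the "shortest description" idea in Python, using compressed size as a stand-in for description length-- a loose proxy of my own; Solomonoff's and Kolmogorov's measures are defined in terms of programs, not of zlib:)

```python
import random
import zlib

random.seed(0)
regular_data = ("01" * 500).encode()                              # generated by a very short rule
random_data = bytes(random.getrandbits(8) for _ in range(1000))   # no short rule behind it

# Data with a simple underlying regularity admits a much shorter description.
print(len(zlib.compress(regular_data)))   # small
print(len(zlib.compress(random_data)))    # close to the raw 1000 bytes
```

And over the last 40 years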
that-- because I think that discovery was around 1957-- INTERVIEWER: So from the
beginning the search for artificial intelligence was
also a search to define processes that were
fundamental to psychology? MINSKY: Yes, right. Why do descriptions work? And how do you make
descriptions? And if something happens, in the
old psychology you would somehow make some very crude
representation, an image, a record of exactly what happened
and connect it to another one. And that might work to explain
how maybe some fish or simple mammals think, but it doesn't
explain how you could think about what you've recently
been thinking. And the-- so although I was very
deeply involved in trying to improve reinforcement theory
and the traditional statistical theories of
psychology, almost the moment I saw Solomonoff's idea I realized
all this stuff could never get anywhere. And that was the reason why my
learning machine couldn't transfer what it had learned
from this maze to another maze that was similar and so forth. INTERVIEWER: What kind of
machines did you work with in those early days to do your
experiments and to demonstrate your results? MINSKY: Well in the early
days there were relays and vacuum tubes. And you could build almost
anything out of relays although it was slow. Because Claude Shannon had
published a master's thesis in I think 1947 giving-- Shannon's was a remarkable
discovery. He made two major discoveries,
each of which started a whole new field and solved almost
all the problems in it. So in 1947 I think it was, he
published this master's thesis about switching circuits. And nothing much has happened
since then. That was the whole thing. And then 1950 he published this
theory of the number of symbols it takes to
represent some information. And for about 10 years people
worked on various aspects of theories and found better
proofs for the ones in Shannon's 1950 paper. But essentially he had solved
all the important problems. INTERVIEWER: So vacuum
tubes, relays? MINSKY: Right. And the SNARC machine-- this
neural analog reinforcement machine-- it had about 400
vacuum tubes and about a couple of hundred relays
and a bicycle chain. So that when something happened
and you want to increase a probability, a little
motor would turn on and a bicycle chain would turn
a volume control. It was all electromechanical. You could now write a
description of such a machine with a hundred computer
instructions I suppose and make it run a billion
times faster.
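(Taking that remark literally: a hedged sketch, in a few dozen lines of Python, of a SNARC-like learner. The maze, the weights, and the update rule are invented stand-ins for the machine described, not its actual design:)

```python
import random

# A simulated rat in a tiny maze. Each (place, turn) has an adjustable
# probability "knob", like the volume controls the bicycle chain turned;
# rewarded runs make the turns that were taken more likely.
MAZE = {"start": ["left", "right"],
        "left": ["dead end", "cheese"],
        "right": ["dead end"]}
knob = {(place, turn): 1.0 for place, turns in MAZE.items() for turn in turns}

def run_trial():
    place, path = "start", []
    while place in MAZE:
        turns = MAZE[place]
        turn = random.choices(turns, weights=[knob[(place, t)] for t in turns])[0]
        path.append((place, turn))
        place = turn
    return path, place == "cheese"

def reward(path, amount=0.5):
    for step in path:                  # reinforce every choice on a rewarded run
        knob[step] += amount

for _ in range(500):
    path, got_cheese = run_trial()
    if got_cheese:
        reward(path)

print(knob)   # the knobs along start -> left -> cheese end up largest
```

INTERVIEWER: What was the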
biggest technological breakthrough in those early
years that allows you much deeper access to the kinds of
questions that maybe you were limited in exploring with
the early computers? MINSKY: I think a very simple
one, namely the invention of the language Lisp-- L-i-s-p-- by John McCarthy. Lisp is a language-- computer
language-- where there are only about eight or nine
basic instructions. But these instructions are
arranged in the structure called a List. And most of the instructions are
on how to change a List. So in this language, you can
write a computer program. And then you can write another
computer program that will edit and modify the first one. Now we have languages--
the popular languages today are still-- Lisp is 1960, I would say.
are to a fairly large extent pre-1960. Because they can't understand
their own instruct-- it's hard to write a program in C that can
understand a C program and say, oh if this happens I should
modify the program to do this or that. Now you can in fact do that. And people become so expert at
this that they can write Lisp-like programs in C or Java
or these other languages. But it's hard. And I shouldn't knock them
because they're easier to learn to be good, to write
complicated programs with. But the big change was going
from thinking of a program as a sequence of commands to
thinking of a program as a structure sitting in the
computer that another program can manipulate. So this made it possible in
principle to make a program that could even think
about itself.
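(A sketch of the "program as a structure another program can manipulate" idea, using nested Python lists as a stand-in for Lisp's lists; the tiny expression language and the rewriting rule are my own illustration, not McCarthy's Lisp:)

```python
# An expression is either a number or [operator, argument, argument].
def evaluate(expr):
    if isinstance(expr, (int, float)):
        return expr
    op, a, b = expr
    a, b = evaluate(a), evaluate(b)
    return a + b if op == "+" else a * b

# A program that edits another program: replace every "+" with "*".
def rewrite(expr):
    if isinstance(expr, (int, float)):
        return expr
    op, a, b = expr
    return ["*" if op == "+" else op, rewrite(a), rewrite(b)]

program = ["+", 3, ["*", 2, 5]]        # the program is just data: a nested list
print(evaluate(program))               # 13
print(evaluate(rewrite(program)))      # 30, after one program modified another
```

Now no one actually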
did that much. And that's what I'm still trying
to start: a project which has programs that mostly
spend their time thinking about why those programs
themselves succeeded or failed to do something else and have
access to a lot of advice about how can I change myself
to do better. And this is just not catching
on right now and I'm having trouble convincing other people to go in that direction. INTERVIEWER: Yeah. You're stuck for the moment. MINSKY: Right. But I think I'll write a book
about something else until they get ready for it. INTERVIEWER: Your book, 'Society
of Mind' presents a model for the operation
of the brain that is fundamentally different from
traditional psychology. What is that? MINSKY: Well most of traditional
psychology tried to imitate physics
and physics had a wonderful modus operandi. You see a phenomenon,
what should you do? And one thing you could do is
find the simplest sets of laws that would predict that. And for example Ptolemy
tried to explain the behavior of planets. And he said, well they were
going almost in circles it seems but they're not
quite circles. So what could we do? We could-- maybe we could
combine a little circle that's turning faster with
a big circle. And so if the planet looks like
it's a circle only it's going a little too far here and
there, let's add another circle which is added to
the first one but it goes around fast. So then it'll bulge out here. So that was called epicycles. And the ancients used that too--
this idea by saying well everything's made of circles but
the circles are rotating at different speeds and have different sizes. And it takes quite a few
circles to match the description of a planet's motion. And Kepler discovered that you
could do it much better with just one ellipse. An ellipse is slightly more complicated than a circle because it's like it has two radii rather than one. So it's a little worse. And that made a tremendous
difference because that explained the behavior of
planets to great precision. Eventually you discovered that
the orbit of Mars is a little bit affected by the orbit of
Jupiter so it's not quite an ellipse, and Newton discovered
the right law which was even simpler. Which is that planets attract
each other with a force that's one over the squares the
distance times the mass. And Newton discovered three
laws which did almost everything for mechanics. It wasn't-- it explained
everything except electricity. And Maxwell added a couple more
laws-- four more laws-- so now we had seven laws. And Einstein discovered that
Maxwell's laws could be reduced
in Einstein's time with about five laws. And then things got worse
because quantum mechanics was just being discovered-- partly
Einstein's fault. INTERVIEWER: Psychology was
attempting to mirror-- MINSKY: Psychology was-- I called it physics envy
in honor of Freud. Psychology said, well we've
got to find four or five simple laws that explain
learning. And if you look at old
psychology textbooks everybody has a little set of laws. Like the most obvious phenomena
that everybody observed is if you ask a person
to remember a list of ten items, you say I went to
the store and I bought some soda and cabbage and
spinach and chicken wings and so forth. If you say 10 of those and
you say, what did I buy? The other person will say, well
I know that you bought cabbage and a soda and chicken
wings and they won't remember the ones in between. So people made up a law of
recency which says that you remember the most recent
thing most. And then they made up a law of
primacy which is you remember the first thing most. And maybe there's another law
which is you remember the loudest thing most. And for many years psychologists
tried to imitate Newton to get a small handful of
laws to explain how memory works or perception works
or this or that. There was one honest
psychologist named Hull who did the same thing, only
he got 120 laws. INTERVIEWER: How do you
develop an artificial intelligence slash psychology
theory that doesn't have physics envy? MINSKY: Oh well, yes, the last
paragraph would have been-- so this Occam's razor or finding
the simplest theory worked tremendously well in physics and
it worked pretty well in some other sciences. But in psychology it does
nothing but harm. And a simple reason is that we
already know that the brain has 300 or 400 different
kinds of machinery. And they're each somewhat
different. We know how three or four of
them work like a little bit about the visual system
and the cerebellum. But we don't know how any of
the dozens of brain centers work in the frontal lobe and
the parietal cortex and the language areas and so forth. But one thing we can be sure of
is the brain wouldn't have 400 different kinds of machines
unless they were specialized and behaved in
different ways and do different things. So 'The Society of Mind' starts
out pretty much in the opposite way and says, let's
take all those things or a lot of the things that people do
and try to find simple explanations for how each
of those could be done. And then let's not try to find a
simple machine that does all of those but let's try to find
some way that a few hundred of these different things could be
organized so that the whole thing would work. And in fact the book didn't
get-- my first book, 'The Society of Mind'-- didn't come
up with a good theory of that. So it's basically what we call a
bottom-up theory which takes a lot of things and different
phenomena and explains them in different ways. INTERVIEWER: You know it was a
bottom up theory-- how do you figure out how these work? MINSKY: Right. So 'The Society of Mind' had
four or five ideas about how these would be organized. But most of the book is-- it
has I think great many good ideas about how different
aspects of thinking work, but it doesn't have a picture of
how they're organized. And the new book, which took
another 20 years, is top-down. And it says that, let's start by
imagining that the mind has a lot of different things it can
do-- resources like a room full of different computers--
then which should you use and when? And the answer is that-- or
the answer I proposed-- is that you have to have
some goals. So there's a chapter on what
goals are and how they work. And that chapter comes from
some research in the early 1960's by my colleagues Allen
Newell and Herbert Simon. In fact, if I can interrupt
myself, for many years there were just two major centers
of research in artificial intelligence. One was the group with McCarthy
and me and later Papert at MIT. And the other was Simon and
Newell at Carnegie Mellon. And they also had a crowd of
students who produced great little theories. And their students went other
places and the field spread. Anyway, one thing that Newell
and Simon did was to develop a theory of how would a
machine have a goal? And their answer was-- it's
sort of like the kind of feedback things that Norbert
Wiener described in cybernetics, but
it's different. In order for a machine to have
a goal, it has to have some kind of picture or
representation or description of a future situation. Now you'd say informally
and it would like to have that situation. But like is no good, because
that's burying -- that's what you're trying to explain. However, what it could have is
a machine that also has a description of what you have now
and this other description of what you would "want". And it finds differences
between these and eliminates them. So having an active goal is to
have a description of a future situation and the present
situation. And then a process which does
things to make them the same by changing the present
situation.
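(A minimal sketch of that "difference engine" notion of a goal-- a description of a desired situation, the present one, and a process that removes the differences. The situations and operators below are invented for illustration, not Newell and Simon's actual programs:)

```python
present = {"hungry": True, "has_food": False}
goal = {"hungry": False}

# Each operator removes one kind of difference between present and goal.
operators = {
    "has_food": lambda s: {**s, "has_food": True},   # go get food
    "hungry": lambda s: {**s, "hungry": False},      # eat (requires food)
}

def differences(state, goal):
    return [key for key, wanted in goal.items() if state.get(key) != wanted]

state = dict(present)
while differences(state, goal):
    d = differences(state, goal)[0]
    if d == "hungry" and not state["has_food"]:
        d = "has_food"                 # reduce a prerequisite difference first
    state = operators[d](state)
    print("reduced difference:", d, "->", state)
```

Now another thing you might do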
is to change your goal-- the future description. If you're really uncomfortable, and you want something and can't have it, then you could always just change or edit the goal and settle for something less. But that's another story. So the new book is a little bit
centered around this 1960 idea of Newell and Simon which
I called a difference engine. And it says, in order to
accomplish anything, the brain has to be full of difference
engines of various sorts. In other words, there's
no point in just reacting to things. You have to react to things in
order to reduce a difference that you don't want. Like if you're hungry you have
to get food or reduce the absence of food. So you could always express
things this way. Okay. So, that's a simple picture
of an animal. It has a set of goals,
namely these difference reducing machines. And has to have some machinery
to turn some goals on and off. So whatever those are, let's not
worry about them because they would have-- whatever they
are, they evolved because the creatures who didn't
have them died out. Animals that lost the urge to
eat, it doesn't matter what else they think, they'll
go away. So what's the next step? Well the next step is what if
you-- then you need another mechanism which says, well I've
been pursuing this goal for a long time and
nothing happened. So I call that a critic. The critic says, there's
something wrong-- something has gone wrong with
what you're doing. The best example is, I kept
doing things and I didn't achieve a goal. So what should I do? I should change my strategy. Of course I could change
the goal too. And sometimes we do that. But the main idea is to think of
the next level of the mind as a bunch of critics which are
watching what's going on and looking for failure. If it's a success it's
not important. Then you do something at low
level psychology to make that more likely to happen. That's trivial. Obviously you want to learn what
was successful, but it's not profound. However, if it fails, you have
to do something really good. Namely do something new instead
of-- so traditional learning is how to do
something old again. That's why I discarded it in the
1960's because Solomonoff suggested another way. So the top level of the new
book called 'The Emotion Machine' is that the important
thing that neurologists should look for are these things called
critics which fire off when some effort to achieve
a goal fails. And what should you do? Well you should think
in a different way. You might change some of the
goals or at least sub-goals and then you have the chance
to succeed again. So that's the other idea.
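(And a hedged sketch of the "critic" idea on top of that: a process that watches for repeated failure to make progress and, when it fires, forces a change of strategy. The toy problem, strategies, and threshold are invented for illustration:)

```python
import random

random.seed(1)
target = 10
strategies = {"small steps": lambda x: x + random.choice([0, 0, 1]),
              "big steps": lambda x: x + random.choice([0, 3])}

position, strategy, stalled = 0, "small steps", 0
while position < target:
    new_position = strategies[strategy](position)
    stalled = stalled + 1 if new_position == position else 0
    position = new_position
    if stalled >= 3:                   # the critic fires: nothing is happening,
        strategy = "big steps"         # so think in a different way
        stalled = 0

print("reached", position, "using", strategy)
```

And that becomes a very rich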
idea because that leads you to ask, what kind of goals
do people have? And there's two answers
to that. One is that evolution provides
the vital ones, like if you don't have-- if you don't
drink enough, you'll die and so forth. INTERVIEWER: And also ideas
like consciousness become manifestations of these very
complex critics interacting and reaching some sort
of threshold. MINSKY: Right because
suppose some critics don't work either. Then you want to hire one which
says what's wrong with the critics I've been using. And that means you have to
start thinking about your recent thoughts. You stop thinking about what
you did and you think about how you were thinking. And to me the word consciousness
is the name for-- there is no such thing as
consciousness, but there's dozens of processes that involve
memories of what you've been recently doing and
new ways to represent things and describing things in
language rather than images and so forth. And in fact the existence of the
word consciousness is the main reason why psychology-- you
can ask why Aristotle's writing, and William James's writing, is just as good as popular writing today
in psychology? If you read William James you
say, oh, he's better than this guy who's telling you
how to think. If you look at Aristotle, you'll
see his discussion of ethics is just as good as this President's ethics adviser's, and so forth. It's because they got stuck
thinking that these words like consciousness and ethics are
things rather than big complicated structures that
might have a better explanation. So anyway, we have five or
six levels of thinking. And the higher level is one
where you make models of your whole self but they're
simplified. And you say, what would happen
if I did this without actually doing it. Stuff like that. Those are the things we call
consciousness and I think there's about 50 of them. INTERVIEWER: Let's talk a little
bit about the span of time that you were at-
you've been at MIT. It spans the late 1950's
to the present day. Think for a moment about
how students have changed in that time. But before you answer, tell
me a little bit about the beginnings of the Media Lab
and how you got involved? MINSKY: That's in the
middle of all this. INTERVIEWER: Right. MINSKY: Well when I came to MIT,
the first thing is that it's hard for people today to
imagine what it was like to be in a golden age. World War II was over. If you were a kid who wanted
to learn about electronics, you could go to something called
a surplus store-- which I did excessively-- and you
could buy a gadget that today would have cost half a million
dollars, full of parts and gears and take it apart
and rearrange it. There are no surplus
stores today. If there were, they would have
these integrated circuit things that you can't understand
and take apart. And you'd have parts with
80 pins or 200 pins. A computer processor is
just intractable. So first of all we had
free laboratories. Also, we had companies like
Heathkit which for a very small price would sell you the
parts to make an oscilloscope. So tens of thousands of young
people were making their own laboratories. What else about the
golden age? Well, there were these great
professors who came from Europe and in fact filled
high schools. Okay, when I got to MIT the wars
were over-- the new one was about to start, I suppose,
but that's another story-- and the universities
were expanding. MIT was growing. It never-- it decided
not to grow much. MIT has 4,000 undergraduates
and it's going to stay that way. And Caltech has 1,000. Boston University has 50,000. So some universities grew. But what happened at MIT was
that the faculty grew. So there were more graduate
students than undergraduates. And there are more laboratories than you can imagine. So every student at MIT, if
they want, can be in a laboratory. It's heaven. If you want to do something,
the world is open to you. And now, as an assistant
professor, here I am. And I have the smartest people
in the world, as far as I could tell-- I got some names. Ray Kurzweil wrote me from high
school and he became one of the great inventors
of the century. Gerry Sussman I knew, Danny
Hillis from high school. These were kids who wrote and
said, I hear you're working on making thinking machines. So I didn't do anything. I was just there in
the right place. And the world was beginning to
hear about cybernetics and AI and so forth. And the right people
just came. And as I said before, Jerry
Wiesner kept getting money and Larry Roberts and
Joe Licklider. Every time we needed something
or room for more students, somebody would hand it over. This stopped around 1980 by a
strange political accident. Senator Mansfield, who was a
great liberal decided that the defense department might be a
dangerous influence and he got Congress to pass some rule that
the defense department can't do basic research. It should only support research
with military application. It's a great example of whatever
you want to call it. INTERVIEWER: Unintended
consequences-- MINSKY: Of shooting yourself
in the head. And, but anyway-- INTERVIEWER: But right after
that, the Media Lab. MINSKY: Okay. Then now there was a thing--
we're doing this interview in this very building where
Nicholas Negroponte-- who had had some training as an
architect-- got this idea that computers were going to
be important and media was going to change. And by the year 2000 there
wouldn't be any paper anymore. And all sorts of ideas that
were correct except that people didn't do the
right thing. And so he had started
this media lab. And in fact, some of the most
exciting things in computers had happened-- not in the place
you'd expect it to, but in the civil engineering where
Professor Charlie Miller had developed graphics so that
people could envision buildings and move
them around. And Negroponte's, it was
called the Architecture Machine Laboratory. And that was doing similar
things-- finding new ways to improve communication. He invented something called
zero bandwidth-- what do you call it, television by phone? There's a name. Anyway-- INTERVIEWER: Telemetry? Zero bandwidth? No. MINSKY: Well this is a joke. For a long time Bell Labs and
other people had been trying to make television available
over telephone so you could see who you were talking to. And they failed. The technology wasn't ready. It was too expensive. In fact, I had a video
phone, as it was called. But no one else had one except Nicholas-- Bell Labs gave them to us. INTERVIEWER: So you used
to call each other? MINSKY: Yes. Anyway, zero bandwidth television
was a demonstration made by some students in
the pre-media lab, which was very clever. It was a complicated sound-
processing thing which sometimes could guess what
emotion you had from the sound of your voice. It was better than chance. It could tell when you were
laughing and it could guess when you were smiling. And if your voice lowered and
slowed down it would guess that you had a less
happy expression. So they managed to get a few
graphics on the screen. You're talking to someone and
you're not seeing-- there's no camera looking at the other
person, but it guesses-- it shows a cartoon face. And it was uncanny because it
worked just well enough that it looked like you were
looking at the person who was talking. It disappeared. They never even published it. And I just noticed last week
some laboratory at MIT which said, oh we're going to make
something that listens to the voice and shows the expression
of the speaker. A little 30 year-- INTERVIEWER: Hiatus. MINSKY: But it's the old timers'
fault for not even publishing it. It was just such fun and they
just showed it to each other. INTERVIEWER: And was that
the spirit of the Media Lab in the beginning? Fun, exciting, new ideas,
cross-disciplinary? MINSKY: It was exactly. Nicholas had the idea that
he wanted to expand the laboratory because there was
so many things that he couldn't do in the architecture
department. And he got a wonderful idea, how
would you fund this kind of research which nobody
was doing? And he went to companies
that didn't even know what research was. And they all piled in. They said, oh we're worried. Various newspaper chains for
example heard the prediction there wouldn't be any paper
pretty soon because everybody would have things like iPhones
and they wouldn't need paper. Well that didn't happen for 30
years more than Nicholas expected but who cares. They were scared so they started
giving his new Media Lab money. And at Steelcase-- a wonderful company-- they began to realize
that since you could work from home, maybe people wouldn't need office furniture. What will happen to them when
the office disappears? So some visionary people at
Steelcase gave us-- first they gave us a lot of office
equipment. All the chairs-- not this one--
but all the chairs in the Media Lab were
really deluxe Steelcase modern things. But anyway, the nice thing
about the media lab was Nicholas's inspiration to see
yes you can fund research if you explain to people
why they need it. And for almost 20 years it was
just like the golden age that I got into when the
AI Lab started. The Media Lab again started. Enough money poured in that we
could do anything we wanted. It had enough sponsors that
whatever you did, one of the sponsors would be pleased. And Nicholas invented a kind
of sharing of property and rights and so forth that there
was great happiness from-- I think it started in 1984
or 1985-- '84 probably. INTERVIEWER: So during
your period-- MINSKY: And for 20 years. Now it's getting harder to
support because these ideas have spread and the Media Lab is
working hard right now to-- what's the next revolution? Can the new director
reproduce this? I certainly hope he can
but it might have been a historic moment. Nicholas has said that he could
never have started the lab 10 years later because they
were doing things that the other departments
were doing by then. INTERVIEWER: So it sounds like
what you're saying is that in your period at MIT you scored
not one but two golden ages. A big one and a little one. Not bad. MINSKY: And the second golden
age, starting in about 1963, I started to work with
Seymour Papert. And I'd never been interested in
education and that side of engineering. So we worked together
for 20 years. And then when Nicholas started
the Media Lab he had some great engineers and great
hackers of all sorts. And he also invited
me and Papert. And we started to move
our activities. So my artificial intelligence
and Seymour's new ways in education started to develop
here in this new environment. And again, it was a golden age
in the sense that if we got an idea there would be someone
to support it. Today things are different. The United States has very few
basic research institutions. The government is broke. If you look at the National
Science Foundation, they are now in the situation where they
can barely fund one out of 100 proposals. Now suppose some scientist
proposes something that's going to take two years. That's fine if he gets it. Suppose 200 scientists do that
and one gets the support. They have spent probably 100 man-years of wasted time writing these applications.
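As a rough sketch of the arithmetic behind that estimate, assuming about half a person-year of effort per application (a figure implied by the numbers, not stated in the interview):

```python
# Back-of-the-envelope version of the estimate above.
# Assumption: one serious grant application costs roughly half a person-year.
proposals_written = 200
proposals_funded = 1
years_per_proposal = 0.5  # assumed effort per application

total_effort = proposals_written * years_per_proposal                     # 100 person-years
wasted_effort = (proposals_written - proposals_funded) * years_per_proposal

print("total effort: ", total_effort, "person-years")   # 100.0
print("wasted effort:", wasted_effort, "person-years")  # 99.5
```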
And so now we might be better off if we closed the research facility completely. And the United States is headed
down a drastically destructive track. INTERVIEWER: What kind of
students came to MIT in the late '50s and contrast that
with the kinds of students you see today? And how broadly have the
concerns and interests of students at MIT changed in the
time you've been here? MINSKY: I don't think I could
say very much about-- are we recording your question? INTERVIEWER: Yeah. MINSKY: I don't think I have a
very good picture of that. I have a qualitative sense
that we're still getting wonderfully-- we're still
getting some of the best possible students. But we have a lot of machinery
for losing the best ones very rapidly because they take
courses in business and they take course-- a lot
of them will go into management science. As you know, when a
field has the word science in it, it isn't. But it tries. And also if a student gets a
pretty good idea, then in spite of the great internet
bubble of the year 2000 or whenever it was, they can get
support to start a company. What this means is that the
students who do the most exciting research as an
undergraduate or even a graduate student
are very likely not to become a professor. In the golden age, as I
mentioned, almost all my graduate students became
professors. Virtually everyone. Somewhere or other, usually
a very good place. Now very few students become
professors because they get jobs in start-ups or in the
industrial research laboratories like Google and
Yahoo and computer related places like that-- even
Microsoft-- which employ thousands of people who
eventually produce nothing in most cases. It disappears. They just-- sometimes they're
just hired so that they won't go somewhere else. I don't know. But the future is fairly bleak
for students now because they can't look forward to a
career in research. It's just closed. Some are going to China. China is starting research
laboratories where we're closing them. INTERVIEWER: You invented
a microscope. Why? MINSKY: That was
a great story. Well, one of the reasons that
I hung around McCulloch and the neurological community was to
find out what they knew about how neurons worked and about
how the brain works. It turns out that there is, to
this day, a great gap in neuroscience. Because we know a great deal
about how individual neurons work and how they connect to
each other through these complicated little things
called synapses. And when one neuron gets
excited it sends some chemicals over to the next one
and these chemicals start new activities. And a lot is known about how
this works and the conditions under which these synapses
grow and become stronger conductors or quicker
to act and so forth. Then we know a little bit about
what happens in the relation between two cells. And almost nothing about
what happens when there are 100 cells. And most of the human brain-- and mammalian brains in general-- isn't really organized into individual cells so much as into columns of cells. These columns were discovered
around 1950. And most of the brain is made of these bunches of 500 or 1,000 cells, each of which acts as a functional unit. And we're just beginning to
find out what these do. In the case of vision,
we know a lot about what the columns do. In the case of the cerebellum
and the hippocampus, we know a little bit. And in the case of the frontal lobes, where reflective thinking goes on, we don't
know anything at all. And-- well what was
the question? INTERVIEWER: You wanted
a microscope. MINSKY: Yes and one reason--
so one problem was that you could try to guess what these
columns did but you couldn't find a wiring diagram of one. So I started to think about--
all we had were very thin sections that people cut with diamond knives or broken glass. INTERVIEWER: Microscope
slides. MINSKY: Microscope slides. INTERVIEWER: Two dimensional
slices. MINSKY: Now the interesting
property of the brain is that if you-- of course it's pretty
transparent, there's no pigment in the brain cells to
speak of-- so you have to stain them. And there's no empty space. That tissue is full of cells. Many of them are nerve cells and
others are other kinds of connective tissue cells
and so forth. And the connective tissue cells
in the brain look pretty much like what you'd think brain cells look like. Each of them has thousands
of fibers coming out and so forth. Well, if you stained all
of the nerve cells, the thing is black. There's a wonderful stain
which uses osmium of all things-- a rare metal-- and when
it stains a neuron it stains the whole thing. And a neuron may be a whole
millimeter or more in size. And some of its wires go
20 millimeters or more. And if you stain them all then
if you take a section that's more than a thousandth of
an inch thick, it'd be completely black. So nobody had three dimensional pictures of what happens. Because even a thin microscope
slide is so dense that no light-- very little light--
gets through. INTERVIEWER: So you were looking
for a 3D brain viewer? MINSKY: Yes. So the question is, if you can't
get the light through then what can you do? And I thought of-- and one of
the reasons is if-- of course you can get light through if you shine a bright enough light through. But then this light that comes
through is pretty useless because it's bounced
off something. It's called scattering
and it's going this way and this way. Finally it comes out this
way and you don't know where it came from. And I figured out a very simple
way by combining two microscopes back to back,
looking at the same point, that this microscope-- if light got scattered by something else before it reached the point you're looking at, then it would still be collected by the second one, and that would be no good. But if you put a pinhole at each
end, then any light that went the wrong way
significantly would just get rejected. So now I could use an extremely bright light and just collect the rays that went straight through and count how many there were.
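What he is describing is the confocal principle: the illuminating and detecting optics share one focal point, and a pinhole at each end passes only light whose apparent origin is that focused spot. Below is a minimal toy sketch of that spatial-filtering idea, not of the real optics; the magnification, pinhole radius, and photon counts are all assumed numbers.

```python
# Toy sketch of the pinhole-rejection idea described above (not real optics).
# Model: each photon has a lateral "origin" in the focal plane. In an ideal
# imaging system, a photon apparently coming from lateral position x lands at
# M*x in the detector plane; the pinhole passes it only if |M*x| <= radius.
# Photons from the focused spot pass; photons scattered elsewhere mostly don't.
import random

MAGNIFICATION = 40.0        # assumed lateral magnification
PINHOLE_RADIUS_UM = 10.0    # assumed pinhole radius at the detector, in microns

def passes_pinhole(origin_um: float) -> bool:
    """True if a photon apparently coming from `origin_um` (microns off-focus)
    lands inside the detector pinhole."""
    return abs(origin_um * MAGNIFICATION) <= PINHOLE_RADIUS_UM

def count_detected(origins_um):
    return sum(passes_pinhole(x) for x in origins_um)

if __name__ == "__main__":
    random.seed(0)
    # Photons from the focused point: tiny spread around 0.
    focal = [random.gauss(0.0, 0.05) for _ in range(10_000)]
    # Scattered photons: they appear to come from all over a 100-micron region.
    scattered = [random.uniform(-50.0, 50.0) for _ in range(10_000)]
    print("focal photons detected:    ", count_detected(focal))      # nearly all
    print("scattered photons detected:", count_detected(scattered))  # very few
```

In a real instrument the out-of-focus light is blurred rather than cleanly displaced, but the toy model shows why in-focus light is kept while scattered light is mostly thrown away.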
So now almost every laboratory in the world uses this thing. Unfortunately it took more
than 20 years between the first one I built and the second
one anyone else built so that the patent
disappeared. But I get lots of letters and
emails from people who say, thanks for making this gadget. Funny part is that by the time
I finished it, that was exactly the time when I had read
Ray Solomonoff's paper and decided it wouldn't help to
know how the nervous system is wired until you have high-level theories to interpret it. So although I-- after building
it, I used it to look at worms and blood cells and
things like that. I never actually used it
to look at a neuron. INTERVIEWER: It's possible that
not everyone at MIT would smile so broadly if they
described one of their patents expiring before they could
take advantage of it. MINSKY: Well, it's too bad. I could have used-- if I had a
billion dollars I could do my project now. INTERVIEWER: There you go. I could do my project too. Last question: a theme of your
story, in terms of how you describe your success, it seems
to be that you were at the right place at
the right time. That you entered into
a golden age. And that-- MINSKY: One other thing is if
somebody does something better than you, don't waste
your time. Never compete. INTERVIEWER: Right. MINSKY: Always go away and do
something that nobody else does better. So I kept moving around. INTERVIEWER: If you'll indulge
me for just a moment. With the sentimental possibility
that there is something about Marvin Minsky
that has been passed on to the many students that have
encountered you at MIT, what would it be that you imparted to
them other than be sure to be at the right place? MINSKY: That's not a
very useful one. I think the useful one is if
you get stuck don't try too hard to fix it but
find another way. Because if you get stuck it's
because you're not good enough at that. And you probably
can't fix that. So find someone else
who can do it. But if you've got stuck it's
probably because you found a really good problem. So find another really
good problem. INTERVIEWER: Is MIT a great
place to find another way when you get stuck? MINSKY: It certainly was. You know that the nature of
things changes gradually in the legal structure. When I was a-- I came as an assistant professor
without applying because William Martin, who
was chairman of the mathematics department, thought,
hey I heard there's this good guy at Lincoln and
maybe computers and things like that have a future. I never actually
found out why. And a couple of things
happened. One thing that happened is that I was
teaching four courses as things were in those days. And I said I could teach--
no I wasn't. I was teaching two courses
each term. But still if you're teaching
every other day I found this hard. Because after I'd give a lecture
it would take me a day to figure out what I
did wrong with it. And I also needed a day to say,
well what should I talk about tomorrow? And so I couldn't get
any research done. So one day I was walking down
the hall and there was a great mathematician and scientist, Peter Elias, who was in charge of the EE department. He said, how's it going? I said, well I wish I could
teach all my four courses in one term and do research
the other term. And he said, well why don't
you come over to our department and we'll
let you do that. Oh he said, well what happened
when you asked them? And the math department
said, well what if everyone did this? And I thought, well why not? But they thought it was bad. And then when I got there it
turned out the EE department had so much money that
professors only had to teach two courses. But anyway one day, Peter came
by and he said, oh we decided you should get tenure. And I never thought
about that. First of all, I'd never
had the idea of staying at MIT forever. In fact, I went to
another couple of schools and hung around. And after awhile I didn't like
it there because at MIT practically every student
is really good. And that's just wonderful. Other schools, you'd have to
search for-- anyway, that's beside the point. What's different is that today, when a student-- when somebody becomes an assistant
professor, they have six years to make a reputation and get
tenure and they think about it all the time. And they arrange their career
so they do one big thing instead of several. They don't waste their time. They publish a lot of papers. A candidate for tenure here
might have written 30 papers which are all almost the same. It's a scandal. They write slightly
different papers. They somehow get them
in journals. And they count them. What happens is these people
are so narrow in a field because they're so desperate
to make these points that-- I don't know how to conclude
this paragraph. So the situation is very
different and this is all because of well-meaning civil
rights laws and the promotion process has to be very open and
it's bad if people promote their favorite friends and you
shouldn't promote people from the same institution or it will
get inbred and they have all sorts of rules. So they made this
seven-year rule. It's very rigid, and it turns out it's six years and it's really five. Because there's also another
law which is if you fire somebody you have to pay their
salary for a year. This has nothing to
do with anything. So really you have to make the
tenure decision pretty much firm when they're in
their fifth year. So the pressure is enormous. And I-- INTERVIEWER: So it's harder
to get stuck in this era? MINSKY: You're almost
forced to get stuck. And finally the chance of
getting tenure is small because they're not making
many new professors. And you know there's another
factor in all of this which is the longevity-- you're not
allowed to fire people because of age in the United
States pretty much. In England, professors
have to retire at 60. But the age of the-- life
expectancy has been growing three months per year for
the last 50 years. So people are living 12
years longer now than when I started college. So the number of vacancies for
new professors is slowly being eaten away by mere longevity
besides everything else. And so the pressure-- INTERVIEWER: And it begs to be
said, another example of you being at the right place
at the right time. MINSKY: Well I'm also not-- I'm lucky not to have gotten
old while I did it. But that's just luck too. INTERVIEWER: Well, thank
you, Marvin.