NOAM
What's called AI today has departed to basically pure engineering. It's designed
- the large language models are designed in such a way that in principle, they can't
tell you anything about language, learning, cognitive processes, generally. They can
produce useful devices like what I'm using, but the very design ensures that they'll
never lead to any contribution to science. That's not a
criticism, any more than I'm criticizing captions. CRAIG
This week, I talk to Noam Chomsky, one of the preeminent intellectuals of our time.
Our conversation touched on the dichotomy between understanding and application in the field of
artificial intelligence. Chomsky argues that AI has shifted from a science aimed at understanding
cognition to a pure engineering field focused on creating useful, but not necessarily
explanatory, tools. He questions whether neural nets truly mirror how the brain functions
and whether they exhibit any true intelligence. He also suggests that advanced alien
life forms would likely have language structured similarly to our own. Chomsky is 94, and I reached him at his home, where he appeared
with a clock hanging ominously over his head. You're in California. NOAM
Actually, I'm in Arizona, which is on California time. CRAIG
Yeah. So, you know, I wanted to talk to you because you're, you know, one of the few
people with a deep understanding of linguistics and natural language processing who has the
historical knowledge of where we are, how we got to where we are, and what that might mean for the
future. I understand your criticisms of deep learning and what large language models are not
in terms of reasoning and, you know, understanding the underpinnings of language. But I thought
maybe I could ask you to talk about how this developed. I mean, going back to Minsky's thesis
at Princeton, you know, before he turned against the perceptron, when he was talking
about nets as a possible model for biological processes in the brain. And then, you know,
how do you see that things developed? And what were the failures that kept the research from going where,
presumably, you would have wanted it to go? And then I have some other
questions. But is that enough to get started? NOAM
Let's, let's take an analogy. Suppose you're interested in
figuring out how insects navigate. Biological problem. So, one thing
you can do is say, let's try to study in detail what the desert ants are
doing in my backyard, how they're using the solar azimuth, and so on and so forth. Something else
you could do is say, look, it's easy, I'll just build an automobile, which can navigate. It
does better than the desert ants, so who cares? Well, those are the two forms of artificial
intelligence. One is what Minsky was after, it's now kind of ridiculed as good
old-fashioned AI, GOFAI. We're past that stage. NOAM
Now we just build things that do it better. Okay. Like an airplane does better than an
eagle. So, who cares about how eagles fly? That's a possible view. But it's a difference
between totally different goals. Roughly speaking, science and engineering. It's
not a sharp difference. But first approximation. Either you're interested
in understanding something, or you're just interested in building
something that'll work for some purpose. Those are both fine occupations. Nothing wrong
with that. I mean, when you say my criticism of large language models, that's not correct. I'm
using them right now. I'm reading captions. Captions are based on deep learning, clever
programming. Very useful. I'm hard of hearing. So, they're very helpful to me. No criticism.
But if somebody comes along and says, okay, this explains language, you tell them it's kind
of like saying an airplane explains eagles flying. It's the wrong question. It's not intended
to yield any understanding. It's intended to be for a useful purpose. That's fine.
No criticism. And yet, what's called AI today has departed to basically pure engineering. It's
designed - the large language models are designed in such a way that in principle, they can't
tell you anything about language, learning, cognitive processes, generally. They can produce
useful devices like what I'm using, but the very design ensures that
they'll never lead to any contribution to science. That's not a criticism, any more
than I'm criticizing captions. Yeah. CRAIG
Geoff Hinton says that, you know, his goal was to understand
how the brain works. And he talks about AI as we know it today, supervised learning
and generative AI, as useful byproducts that are not his goal, or not the goal of
cognitive science or computational biology. Was there a point at which you think the
research lost the thread? Or is there research going on that people aren't paying attention to
that is not caught up in the usefulness of these other kinds of neural nets? NOAM Well, first of all, if you're interested in how
the brain works, the first question you ask is, does it work by neural nets? That's an open
question. There's plenty of critical analysis that argues that neural nets are not what's
involved even in simple things like memory. Sure, there are arguments that go back to
Helmholtz - neural transmission is pretty slow compared with what seems to be needed. There's much
sharper criticism by people like Randy Gallistel, a cognitive neuroscientist, who has given pretty
sound arguments that neural nets, in principle, don't have the ability to capture
the core notion of a Turing machine's computational capacity; they just don't have the
capacity. And he's argued that the computational capacity is in much richer computational systems
in the brain, internal to the cells themselves, where there's very rich computational capacity that goes way beyond
neural nets. Some experimental evidence supports this. So, if you're interested in the brain,
that's the kind of thing you're looking at. Not just saying, can I make a bigger neural net?
Okay, if you want to, try it, but maybe it's the wrong place to look. So, the first question is,
is it even the right place to look? That's an open question in neuroscience. If you take a vote
among neuroscientists, almost all of them think that neural nets are the right place to look. But
you don't solve scientific questions by a vote. CRAIG
Yeah. I mean, one of the things that's obvious is
that neural nets may be a model, and they may mimic a portion of brain activity.
But there are so many other structures - NOAM
There’s all kinds of stuff going on in the brain, way down to the cellular level. There
are chemical interactions, plenty of other things. So maybe you'll learn something by
studying neural nets. If you do, fine, everybody will be happy. But maybe that's not the
place to look, if you want to study, even simple things like just memory and associations. There
is now already evidence of associations internal to large cells in the hippocampus. Internal, which
means maybe something's going on at a deeper level where there's vastly more computational capacity.
Those are serious questions. So, there's nothing wrong with trying to construct models and seeing
if we can learn something from them. Again, fine. CRAIG
Building larger models, which is kind of the rage in the engineering
side of AI right now, does produce remarkable results. I mean, what was your reaction when you
saw ChatGPT or GPT-4, or any of these models? That it's just sort of a clever stochastic
parrot? Or that there was something deeper? NOAM
If you look at the design of the system, you can see it's like an airplane explaining
flying; it has nothing to do with it. In fact, it's immediately obvious, trivially obvious, not a
deep point, that it can't be teaching us anything. The reason is very simple. The large language
models work just as well for impossible languages that children can't acquire as for the languages
they're trained on. So, it's as if a biologist came along and said, I've got a great new theory
of organisms: it lists a lot of organisms that possibly exist, a lot that can't possibly exist,
and I can tell you nothing about the difference. I mean, that's not a contribution to biology
- it doesn't meet the first minimal condition. The first minimal condition is to distinguish
what's possible from what's not possible. If you can't do that, it's not a contribution to
science. If it was a biologist making that proposal, you'd just laugh. Why shouldn't
we just laugh when an engineer from Silicon Valley says the same thing? So maybe they're
fun, maybe they're useful for something, maybe they're harmful? Those are the kinds of
questions you ask about pure technology. Take large language models. There are things they're
useful for. In fact, I'm using them right at this minute. Captions. Very helpful for people like me. Are
they harmful? They can cause a lot of harm. Disinformation, defamation, preying on human
gullibility, plenty of examples. So they can cause harm, they can be abused. Those are the kinds
of questions you ask about pure engineering, which can be very sophisticated and clever. The internal
combustion engine is a very sophisticated device. But we don't expect it to tell us anything about
how a gazelle runs. It’s just the wrong question. CRAIG
Although, you know, I talk a lot to Geoff Hinton, and he'll
be the first to concede that for backpropagation, there's no evidence of it in the brain. And, in fact,
there's a lot of evidence that it wouldn't work in the brain. Reinforcement learning, you know,
I've spoken to Rich Sutton, has been accepted by a lot of people as an algorithmic model
for brain activity in part of the brain, the lower brain. So, in terms of exploring
the mechanisms of the brain, it seems that there is some usefulness. I mean,
as you said, on the one hand, people look at the principles, and then they build
through engineering. Just as in the analogy of a bird to an airplane, they've taken some of the
principles, applied them through engineering, and created something useful. But there are scientists
that are looking at what's been created, like Hinton's criticism of backpropagation,
and are looking for other models that would fit with the principles they see in cognitive
science or in the brain. And I mentioned this forward-forward algorithm, which you said you
haven't looked at, but I found it compelling, in that it doesn't require, you know, signals to
be passing back through the neurons. I mean, they pass back, but then stimulate other
neurons as you move forward in time. But is there nothing that's been learned in
the study of AI or the research on neural nets? NOAM
If you can find anything, it's great. Nothing against search. It's just
that we have to remember - you asked about chatbots - what do we learn from them? Zero, for
the simple reason that the systems work as well for impossible languages as for possible ones.
So it's like the biologist with the new theory that has possible organisms and impossible ones and can't
tell the difference. Now, maybe by looking at the systems, you'll learn something. Possible.
Okay, great, all in favor of learning things. It's just that the
systems themselves - and there are great claims by some of the leading figures in the field
that we've solved the problem of language acquisition - make zero contribution, because the
systems work as well for impossible languages. Therefore, they can't be telling
you anything about language acquisition, period. Maybe they're useful for
something else. Okay, let's take a look. CRAIG
Well, maybe for the audience that this is going out to, I understand what
you mean by impossible, but could you just give a brief synopsis of what you mean by impossible
languages, for people that haven't read your work? NOAM Well, there are certain general
properties of language that every infant knows, already tested down to two years old.
No evidence. Couldn't have evidence. So, one of the basic properties of language is that
linguistic rules apply to structures, not linear strings. So, if you want to
take a sentence like ‘instinctively, birds that fly, swim’ - it means instinctively
they swim. Not instinctively they fly. Well, the adverb instinctively has to find a verb to
attach to. It skips the closest verb and finds the structurally closest one. That principle
turns out to be universal for all structures, all constructions, in all languages. What it
means is that an infant, from birth, as soon as you can test, automatically disregards linear
word order - disregards 100 percent of what it hears. Notice, all we hear is words in linear order,
but you disregard that. And you deal only with abstract structures in your mind, which you
never hear. Take another simple example, 'the friends of my brothers are in England.'
Who's in England, the friends or the brothers? The friends, not the brothers, which are the
adjacent ones. You just disregard all the linear information, which means you disregard everything
you hear. And you pay attention only to what your mind constructs. That's
the basic, most fundamental property of language. Well, you can make up impossible
languages that work with what you hear. Simple rule: take the first relevant thing. So, 'friends of my brothers are here' - brothers are
the closest thing, so the brothers are here. A linear rule, much simpler than the rule we use. You
can construct languages that use only those simple rules that are based on the linear order of what we
hear. Now, maybe children, people, could acquire them as a puzzle, somehow, using nonlinguistic
capacities. But they're not what infants reflexively construct with no evidence. And there are
many impossible languages like this. Nobody's tried it out, because
it's too obvious how it's going to turn out. You take a large language model and apply it to one
of these systems that use linear order - of course it's going to work fine, trivial
rules. Well, that's a refutation of the system.
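To make that contrast concrete, here is a minimal sketch in Python, invented for illustration rather than taken from the conversation; the toy parse, the word list, and the function names are all assumptions. It contrasts a linear-order rule, which grabs the noun nearest the verb by word position, with a structure-dependent rule, which consults only the hierarchy of the phrase.

```python
# Toy contrast between a linear rule (the "impossible language" rule)
# and a structure-dependent rule, using "the friends of my brothers are here."
# Illustrative sketch only; the parse and rules are invented for the example.

sentence = ["the", "friends", "of", "my", "brothers", "are", "here"]

def linear_subject(words, verb_index):
    """Impossible-language rule: pick the noun linearly closest to the verb."""
    nouns = {"friends", "brothers"}
    for word in reversed(words[:verb_index]):  # scan right-to-left from the verb
        if word in nouns:
            return word

# A toy parse of the subject phrase: (head noun, embedded modifiers).
subject_phrase = ("friends", [("brothers", [])])

def structural_subject(phrase):
    """Structure-dependent rule: pick the head of the subject phrase,
    ignoring linear order entirely."""
    head, _modifiers = phrase
    return head

verb_index = sentence.index("are")
print(linear_subject(sentence, verb_index))  # brothers - the trivial linear rule
print(structural_subject(subject_phrase))    # friends - what speakers actually do
```

CRAIG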
You mean that if you trained a large language
model on impossible language, if you had a large enough corpus, then it would generate
impossible language. Is that what you mean? NOAM You don't even have to train it because the rules
are simple - much simpler than the rules of language. Take
the example 'the friends of my brothers are here.' The way we actually do it is, we don't say, take
the noun phrase that's closest. That would be trivial. We don't do it. What we do
is, first, construct the structure in your mind, 'friends of my brothers,' then figure out that the
central element in that structure is 'friends,' not 'brothers,' and then let that be what we're
talking about - the head of it. It's a pretty complicated computation. But that's
the one we do instantaneously and reflexively. And we never see it or hear it.
Remember, we don't hear structures. All we hear is words and their order,
and we never use that information; we use only the much more complex-looking rule. If
you think about it computationally, it's actually simpler. But that's a deeper question - why
we do it. Moving to a different dimension, there's a reason for this. The reason
has to do with the theory of computation. If you're trying to construct an infinite array
of structured expressions, the simplest way to do that - the simplest computational
procedure - is binary set formation. But if you use binary set formation, you're
just going to get structures, not order. So, what the brain is doing is running the simplest
computational system, which happens to be very much harder to use. Nature doesn't care about
that. Nature constructs the simplest system; it doesn't care if it's hard to use or not. I
mean, nature could have saved us a lot of trouble: if it had developed eight fingers instead of 10,
we'd have a much better base for computation. But nature didn't care about that when
it developed ten fingers. If you look at it, evolution pays no attention to function. It
just constructs the best system at each point. There's a lot of misleading talk about that.
Just think about the physics of evolution. Say a bacterium swallows another organism,
the basis for what became complex cells. Nature doesn't design the new system; it
constructs it in the simplest possible way. It doesn't pay any attention to how complex
organisms are going to behave. That's not what nature can do. And that's the way
evolution works all the way down the line. So, not surprisingly, nature constructed
language so that it's computationally elegant but dysfunctional - hard to use in many ways. Not
nature's problem. Just like every other aspect of nature, you can think of a way in which you could
do it better, but that didn't happen, stage by stage.
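As a small illustration of binary set formation yielding structure without order, here is a sketch in Python, again invented for this transcript rather than drawn from it; frozensets stand in for the unordered sets, and the merge function is one assumed way to render the idea.

```python
# Binary set formation (Merge): combine two objects into an unordered set.
# frozenset is used because sets have membership and hierarchy, but no
# left-to-right order. Illustrative sketch only.

def merge(x, y):
    """Binary set formation: returns {x, y}."""
    return frozenset([x, y])

# Build "friends of my brothers" as pure hierarchical structure:
inner = merge("my", "brothers")
outer = merge("friends", merge("of", inner))

# The result has hierarchy but no notion of linear order:
print(merge("a", "b") == merge("b", "a"))  # True - order does not exist here
print(outer)  # nested sets, no word order
```

CRAIG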
Two questions from that. One: so, your view is that artificial intelligence,
as it's being called, particularly generative AI, doesn't exhibit true intelligence?
Is that right? NOAM
I wouldn't even say that. It's irrelevant to the question of
intelligence. It's not its problem. A guy who designs a jet plane is not trying
to answer the question of how eagles fly. So, to say, well, it doesn't tell us how eagles fly,
is the wrong question to ask. It's not the goal. CRAIG
Except that's what people are struggling with right now. You know,
you've heard the existential threat argument: that these models, if they get large enough,
will actually be more intelligent than humans. NOAM
That's science fiction. I mean, there is a theoretical possibility. You can
give a theoretical argument that in principle, a complex system with vast search capacity
could conceivably turn into something that would start to do things that you can't predict,
maybe beyond. But that's even more remote than some distant asteroid maybe someday hitting the
Earth. It could happen. If you read serious scientists on this, like Max Tegmark, in his book
on the three levels of intelligence, he does give a sound theoretical argument as to how a
massive system could, say, run through all the scientific discoveries in history, maybe find out
some better way of developing them, and use that better way to design something new, which would
destroy us all. Yeah, it's in theory possible, but it's so remote from anything that's available
that it's a waste of time to think about it. CRAIG
Yeah, so your view is that whatever threat exists from generative AI,
it's the more mundane threat of disinformation and - NOAM
Disinformation, defamation, gullibility. Gary Marcus has done
a lot of work on this. Real cases. Those are problems. I mean, you may have seen
that there was, sort of as a joke, somebody developed a defamation of the Pope,
put out an image of the Pope. Somebody could do it for you: duplicate your face, so it looks more or less
like your face, pretty much duplicate your voice, develop a robot that looks kind of like you, and have
you say some insane thing. It would be hard - only an expert could tell whether it was you or
not. It was done already several times, but basically as a joke. When powerful institutions
get started on it, it's not going to be a joke. CRAIG
Another argument that's swirling around these large language
models is the question of sentience - whether, if the model is large enough, and this goes a
little bit back to how there's a lot more going on in the brain than the neural network
of the cerebral cortex, but that there is the potential for some kind of sentience,
not necessarily equivalent to human sentience. NOAM
These are vacuous questions, like asking, does a submarine
really swim? If you want to call that swimming, then it swims. If you don't want to call it swimming,
then it's not. It's not a substantive question. CRAIG
In the sense that it supports the view that
there is no separation between consciousness in the mind and the
material activities of the brain - NOAM
A separation? That hasn't been
believed since the 17th century. John Locke, after Newton's demonstration,
said, well, this leaves us only with the possibility that thinking is some property
of organized matter. That's the 17th century. CRAIG
But the belief in a soul and consciousness as something
separate from the material biology, it persists. NOAM
People believe in all kinds of things, but within the rational part of the human
species, once Newton demonstrated that the mechanical model doesn't work, there's no material
universe in the only sense that was understood, the obvious conclusion was that since
matter, as Mr. Newton has demonstrated, has properties that we cannot conceive of,
they're not part of our intuitive picture. Since matter has those properties, organized
matter can also have the property of thought. This was investigated all through the 18th
century and ended up finally with Joseph Priestley, chemist, philosopher, late 18th century,
who gave pretty extensive discussions of how material organisms, material objects, could have
properties of thought. You can even find it in Darwin's early books. It was kind of forgotten
after that, then rediscovered in the late 20th century as some radical new discovery - the astonishing
hypothesis that matter can think. Of course it can; we're doing it right now. But the only problem
then is to find out what's involved in what we call thinking, what we call sentience. What are
the properties of whatever matter is? We don't know what matter is - whatever it turns out to be,
whatever constitutes the world, which physicists don't know - but whatever it is,
organized elements of it can have various properties, like the properties that we are now
using, properties that we call sentience. Then the question of whether something else has sentience
is as interesting as whether airplanes fly. If you're talking English, airplanes fly. If you're
talking Hebrew, airplanes glide; they don't fly. It's not a substantive
question - just which metaphors we like. CRAIG
But what you're saying then is that neural nets may not be the engineering solution,
but that eventually, it may be possible to create a system outside of the human brain
that can think, whatever thinking is. NOAM
It can do what we call thinking. Whether it
thinks or not is like asking, do airplanes fly - not a substantive question. We shouldn't waste
time on questions that are completely meaningless. CRAIG
Going back to the history, then, you know, Minsky was very interested in the possibility of
nets, neural nets, as a computational model - NOAM
In Minsky's time, it looked as if neural nets were the right place to look. Now, I think it's not so
obvious, especially because of Gallistel's work, which is not accepted by most neuroscientists,
but seems to me pretty compelling. CRAIG
Can you talk a little bit about that? Because I haven't read that, and I'm guessing
our readers haven't, our listeners haven't. NOAM
Gallistel is not the only one. Roger Penrose, the Nobel Prize-winning physicist, is another; a
number of people have pointed this out, Gallistel mostly, and have argued, I think plausibly, that
the basic component of a computational system, the basic element of essentially a Turing
machine, cannot be constructed from neural nets. So, you have to look somewhere else, at a
different form of computation. And he's also pointed out, what in fact is true, that there's
much richer computational capacity in the brain than neural nets, even internal to a cell. There's
massive computational capacity, intracellular. So maybe that's involved in computation. And then
there's by now some experimental work, I think, that has given some evidence for this, but it's a problem
for neuroscientists to work on. You know, I'm not an expert in the field; I'm looking at it from the
outside, so don't take my opinion too seriously. But to me, it looks very compelling. But whatever
it is, neural nets or something else, there is some organization of whatever's there that is
giving us the capacity to do what we're doing. NOAM
So, if you're a scientist, what you do is approach it in two different ways. One is, you try
to find the properties of the system. What is the nature of the system? That's the first step - the kind
of thing I was talking about before. What are the properties of the system that
an infant automatically develops in the mind? And there's a lot of work on that. From
the other point of view, you can say, what can we learn about the brain that
relates to this? Actually, there is some work. So, there are neurophysiological
studies which have shown that there are artificial languages that violate the principle
that I mentioned, the structure-dependence principle. If you train people on those, the
ordinary language centers don't function; you get diffuse functioning of the brain, which
means they're being treated as puzzles, basically. So, you can find some neurological correlates
of some of the things that are discovered by looking at the nature of the phenotype. It's
very hard for humans, for a number of reasons. NOAM
And we know a lot about the physiology of human vision. But the reason is
because of invasive experiments with non-humans - cats, monkeys, and so on. You can't do that for
language. There aren't any other organisms; it's unique to humans. So, there are no comparative
studies. You can think of a lot of invasive experiments which would teach you a lot, but you
can't do them, for ethical reasons. So, the study of the neurophysiology of human
cognition is a uniquely hard problem. In its basic elements, like language,
it's just unique to the species. And it is in fact a very recent development in
evolutionary history, probably the last couple hundred thousand years, which is nothing. So, you can't
do the invasive experiments, for ethical reasons; you can think of them but can't do them,
fortunately. And there's no comparative evidence. So, it's much harder; you have to
do things like looking at blood flow in the brain, MRI kind of things, electrical stimulation,
looking at it from the outside. It's not like doing the kinds of experiments you can
think of. So, it's very hard to find out the neurophysiological basis for things like
use of language, but it's one way to proceed. NOAM
And the other way to proceed is to learn more about
the phenotype. It's like chemistry for hundreds of years. You just postulated the
existence of atoms. Nobody could see them. Why were they there? Because unless
there are atoms with Dalton's properties, you don't explain anything. Or early genetics. Early
genetics was done before anybody had any idea what a gene is. You just looked at the properties of the
system and tried to figure out what must be going on. That's the way astrophysics works. Most
of science works like that. This does too. CRAIG
When you talk about invasive exploration, there are
tools that are increasingly sophisticated. I'm thinking of Neuralink, Elon Musk's
startup, that has these super-fine electrodes that can be put into the brain
without damaging individual neurons. NOAM
There's actually, I think, much more advanced work than that being
done with patients under brain surgery. Under brain surgery, because the brain is basically
exposed, there are some noninvasive procedures that can be used to study what particular
parts of the brain, even what particular neurons, are doing. It's very delicate
work. But there is some work going on. One person working on it is Andrea Moro, the same
person who designed the experiments I described before about impossible languages.
That seems to me a promising direction. NOAM
There are other kinds of work. To mention some: Alec Marantz at
NYU is doing interesting studies that have shed some light on the very elementary question of
how words get stored in the brain. What's going on in the brain that tells us
that 'blake' is a possible word, but 'bnake' isn't, for an English speaker? It is for Arabic speakers. And
what's going on in the brain deals with that. Hard work. David Poeppel, another very good
neuroscientist, has found evidence for things like phrase structure in the brain. But the
kinds of invasive experiments you can dream of, you can think of, he's just not allowed to do.
So, you have to try it in much more indirect ways. CRAIG
Do you think that understanding cognition has advanced in your lifetime? And are you hopeful that we'll eventually
really understand how the brain thinks? NOAM Well, there's been vast improvement
in understanding the phenotype. We know a great deal about that
which was not known even a few years ago. There's been some progress in the neuroscience
that relates to it, but it's much harder. CRAIG
I'm just curious about where you are - not physically, you're in Arizona, but where you are
in your thinking. Are you still pushing forward in trying to understand language in the brain? Or are
you sort of retired, so to speak, at this point? NOAM
Very much involved. I mean, I don't
work on the neurophysiology. But I mentioned Andrea Moro, who is a good
friend, Alec Marantz, also a good friend; I follow the work they're doing, we interact, but my work
is just on the phenotype. What's the nature of the system? And there, I think we're learning a lot.
And we're in the middle of papers at the moment, looking at more subtle, complex properties. The
idea is essentially what I said about binary set formation: how can we show that, from
the simplest computational procedures, we can account for the apparently complex and apparently
varied properties of the language systems? There's been a fair amount of progress on that, unheard
of 20 or 30 years ago. So, this is all new. CRAIG
Understanding is one thing, and then recreating it through computation in external hardware is another. Is that
a blind alley? Or do you think that...? NOAM Well, at the moment, I don't see any particular
point in it; if there is some point, okay. I mean, the kinds of things that we're
learning about the nature of language - I suppose you could construct some sort of
system that would duplicate them, but there doesn't seem any obvious point to it. It's like taking
chemistry 100 years ago and saying, can I construct models that will look sort of like it?
Suppose you took a diagram of an organic molecule and studied its properties; you could
presumably construct a mechanical model that would do some of those things. Would it
be useful? Chemists didn't think so. If it would, okay; if it wouldn't, then don't. CRAIG
Nonetheless, I mean, we are using neural nets, even in this call. Do you
see - I mean, setting aside the question of whether or not they help us understand
anything about the brain - are you excited at all about the promise that these large models hold?
I mean, because they do something very useful. NOAM
They are. Like I said, I'm using it right now. I think it's fine for me, somebody who can't
hear, to be able to read what you're saying. Yeah, pretty accurately. That's an achievement.
Great. That's engineering. Technology. CRAIG
Who do you think is going to carry on your work from here? I mean, are there any students of yours who you
think we should be paying attention to? NOAM Quite a lot. There are a lot of young people
doing fine work. I work closely with a small research group, by now spread all over
the world; we meet virtually - from Japan and other places recently - working on the
kinds of problems I was talking about. Right now, I should say, it's a pretty specialized
interest. Most linguists aren't interested in these foundational questions. But that
happens to be my interest. I want to see if we can show, ultimately try to show, that
language is essentially a natural object. NOAM
And there was an interesting paper, written about the time that I started working
on this, by Albert Einstein in 1950 - an article in Scientific American, which I read
but didn't appreciate at the time, began to appreciate later, in which he talked about what
he called the miracle creed. It has an interesting history. Goes back to
Galileo. Galileo had a maxim saying, nature is simple. It doesn't do things in a complicated
way, if it could do them in a simple way. That’s Galileo’s maxim. Couldn't prove it.
But he said, I think that's the way it is, and it's the task of the scientist
to prove it. Well, over the centuries, it's been substantiated, case after case. It
shows up in Leibniz's principle of optimality. But by then there was a lot of evidence for
it. By now it's just a norm for science. It is what Einstein called the miracle creed:
nature is simple, and our task is to show it. You can't prove it. A skeptic can say, I don't believe
it. Okay. But that's the way science works. NOAM
Well, science works the same way for language. But I couldn't have proposed that
50 years ago, 20 years ago. I think now you can - that maybe language is just basically a
perfect computational system at its base. You look at the phenomenon, it doesn't look like that.
But the same was true of biology. Go back to the 1950s, 1960s. Biologists assumed that organisms
could vary so widely that each one has to be studied on its own, without bias. By now that's all
forgotten. It's recognized that since the Cambrian explosion, there's virtually no variation in the
kinds of organisms - fundamentally all the same, deep homologies, and so on. It's even been
proposed that there's a universal genome - not totally accepted, but
not considered ridiculous. NOAM
Well, I think we're moving in the same direction with the study of
language. Now, let me say again, there are not many linguists interested in this. Most linguists, like
most biologists, are studying particular things, which is fine; you learn a lot that way. But
I think it is possible now to formulate a plausible thesis that language is a natural object
like others, which evolved in such a way as to have perfect design but to be highly dysfunctional,
because that's true of natural objects generally. It's part of the nature of evolution, which
doesn't take into account possible functions. NOAM
In the last stage of evolution there is reproductive success, which does take function into account -
natural selection. That's a fringe of evolution, just a peripheral fringe - very important,
not to be denigrated. But the basic part of evolution is constructing the optimal system
that meets the physical conditions established by some disruption in the system. That's
the core of evolution - what Turing studied, D'Arcy Thompson, others. By now I think it's understood. And I think maybe the same is true of this
particular piece of biology: language is a biological object. So why should
it be different? Let's see if we can show it. CRAIG
There's been a lot of talk in the news recently about, you know, extraterrestrial craft
having been found by the government. And you know, I don't put much stock in it. But imagine that
there is an extraterrestrial life, advanced forms of life. Do you think that their language
would have developed the same way, if it's based on these simple principles? Or could
there be other forms of language in other biological organisms that would be, quote
unquote, impossible in the human context? NOAM
Back around the 1960s, I guess, Minsky, with one of his students, Daniel
Bobrow, studied the simplest Turing machines - fewest states, fewest symbols - and asked
what happens if you just let them run free. Well, it turned out that most of them
crash - either get into endless loops or just crash, don't proceed. But the ones that
didn't crash produced the successor function. So, he suggested that what we're going to find, if
any kind of intelligence develops, is that it will be based on the successor function. And if we want
to try to communicate with some extraterrestrial intelligence, we should first see if they have
the successor function and maybe build up from there. Well, it turns out the successor function happens to be
what you get from the simplest possible language. The language is one symbol, and the simplest form
of binary set formation basically gives you the successor function. Add a little bit more
to it, you get something like arithmetic. Add a little bit more to it, you get something
like the core properties of language. So, it's conceivable that if there
is any extraterrestrial intelligence, it would have pursued the same course. Where
it goes from there, we don't know enough to say.
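As a rough sketch of that construction - one possible rendering, not Chomsky's formalism, with the symbol, the merge function, and the depth reading all invented for illustration - repeatedly merging a single symbol with what has been built so far generates a sequence whose one-more-layer step behaves like the successor function.

```python
# One-symbol "language": the only lexical item is X.
# Merging X with what has been built so far adds one layer of structure:
# {X}, {X, {X}}, {X, {X, {X}}}, ... and nesting depth behaves like a number.
# Illustrative sketch only.

X = frozenset()  # stand-in for the single symbol

def merge(a, b):
    """Binary set formation: returns {a, b}."""
    return frozenset([a, b])

def successor(n):
    """One more application of Merge with the lone symbol: n -> {X, n}."""
    return merge(X, n)

def as_number(n):
    """Read the number back off as nesting depth."""
    count = 0
    while n != X:
        # keep the embedded structure, discard the bare symbol
        deeper = [m for m in n if m != X]
        n = deeper[0] if deeper else X
        count += 1
    return count

three = successor(successor(successor(X)))
print(as_number(three))  # 3
```

CRAIG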
And back to the idea that there is no supernatural realm, that consciousness
is an emergent property of the physical attributes of the brain:
do you believe in a higher intelligence behind the creation
or continuation of the universe? NOAM
I don't see any point in vacuous hypotheses. If you want to believe it,
okay. It has no consequences. CRAIG
So yeah, yeah. But do you believe it? NOAM
No. I don't see any point in believing things for
which there's no evidence and that do no work. CRAIG
Yeah. And another thing I've always wanted to ask someone like you, clearly,
your intelligence surpasses most people's. NOAM
I don't think so. CRAIG
Well, that's interesting that you say that. You think it's just a matter of
applying yourself to study throughout your career? NOAM I have certain talents, I know. Like not believing
things just because people believe them. And keeping an open mind and looking for arguments and
evidence, in anything we've been talking about. When meaningless questions are proposed, like, are
other organisms sentient or do submarines swim, I say, let's discard them and look at meaningful
questions. If you just pursue common sense like that, then I think you can make some progress. Same
on the questions we're talking about, language. If you think it through, there's every reason why
language should be a natural object. If so, it should follow the general principles
of evolution, which satisfy what Einstein called the miracle creed. So why shouldn't language? So
let's pursue that, see how far we can go. I think that's just common sense. Many people think
it's superior intelligence. I don't think so. CRAIG That's it for this episode. I
want to thank Noam for his time. If you’d like a transcript of this conversation,
you can find one on our website, eye-on.ai. In the meantime, the Singularity may not be near, but AI
is about to change your world. So, pay attention.