[MUSIC PLAYING] PETER NORVIG: Hi,
I'm Peter Norvig, director of research at Google. We're very excited
today to have with us Jeff Hawkins and Subutai Ahmad. Jeff is the founder of Palm and inventor of the PalmPilot, founder of the Redwood
Center for Theoretical Neuroscience, and
Numenta, a company for practical neuroscience. He's the author of
"On Intelligence," a 2004 book, and a
recent book this year called "A Thousand Brains, A
New Theory of Intelligence." And Subutai is the VP
of research at Numenta, with a PhD concentrating in
computational neuroscience and machine learning. Jeff, can you tell us about
this thousand brains theory? JEFF HAWKINS: Yeah. Thank you, Peter. And it's a pleasure to be here. So, as you point out, I'm
here with my colleague Subutai Ahmad. We're both going to
be speaking today. I'm going to speak
a little bit first, and he can speak a
little bit later. Let me just tell you
about what we do before I get into what we've learned. And so we run a small research
company called Numenta. It's in Redwood
City, California. And we have two
research agendas. The first is a neuroscience
one, as you mentioned. It's to reverse
engineer the neocortex to figure out what it does and how it does it. It's a very biological
neuroscience research agenda. The second agenda relates
to machine learning and AI. And we want to take the
principles that we've learned from studying
the brain and apply them to improving existing
machine learning techniques and ultimately building
true intelligent machines. So we've been at this
for about 20 years. And over the last
10 years, we've actually made quite
significant progress, first on the neuroscience agenda, which I'm going to tell you about today. It's pretty exciting. I think it's exciting,
the things we've learned. And about three years
ago, we really started applying some of those
principles to machine learning and laid out a roadmap for AI. And Subutai's going to
talk about that work. I'm more the
neuroscience person. He's more the machine
learning person. But we can both stand in
for the other at times. But he's really heading up the
AI and machine learning effort. So we're going to do that. I'm going to give a 10-12
minute talk about what we've learned about brains. And then maybe I'll
take a few questions. And then Subutai
will talk about what we're doing, about what we've done in
machine learning, which is also really exciting. And then we can open up for
a discussion and questions. If that sounds like a good
thing to do, we'll do that. All right, so I'm
just going to jump in. In a very short period
of time here, I'm going to tell you
what we've learned about how brains work, which
is, as I said, pretty exciting. So if you think about the
human brain, about 70% of it, nearly three quarters,
is occupied by the neocortex. And it is the organ
of intelligence. So if you look at a brain,
a picture of a brain, you've all seen the
wrinkly thing on top. That's the neocortex. It is a sheet of neural tissue. It's about the size of
a large dinner napkin, and it's about 2 and
1/2 millimeters thick. It's responsible for
pretty much everything we think about intelligence--
higher learning, touch, anything you're
aware that you're perceiving is going on in the neocortex. Language, whether it's spoken
language or written language, creating it and understanding
it, the language of music, mathematics, physics-- all that is happening
in the neocortex. And pretty much every
high level cognitive function we think about as
part of the human condition, whether it's engineering,
physics, math, politics, whatever, that's all
happening in the neocortex. So understanding what it
does and how it does it is a pretty important component
of moving towards basically understanding who we
are and perhaps building intelligent machines. And so let's just
delve into it a bit. One of the most remarkable
things about the neocortex is that if you cut into
it, you slice it, and look at the 2 and 1/2
millimeter thickness, you'll see this incredibly
complex circuitry. There are many different
types of neurons that are connected in very
specific and complex ways and arranged in these
different layers. It's not like the kind
of neural networks we study today in
machine learning, which are much more uniform. This is a very complex
circuit that's in there. And what's really
remarkable about it is if you cut into one part and look, you see this circuit. But if you cut into any
part of the neocortex, you see the same basic circuit. There's some small variations,
but it's remarkably preserved. In fact, if you cut into a rat's
brain or a dog's or a monkey's brain and you cut
through the neocortex, you'll see the same circuitry. It's a kind of
amazing discovery. And the first person who
made sense of this was a guy named
Vernon Mountcastle, a famous neurophysiologist,
who said, well, the reason that the circuitry
looks the same everywhere is that it's all
doing the same thing. That is, the neocortex applies the same intrinsic function for
everything it does. So he said if you took
a section of neocortex and you hooked it up to
your eyes, you'd get vision. If you hook it up to your
ears, you get hearing. If you hook it up to
your skin, you get touch. If you take the output of some
of these regions of the cortex, feed them into
other regions, you get high level
thought and language. This is hard to
believe, but there's a tremendous amount of empirical
evidence supporting it. And it's basically a fact now. This led to the idea of
what you might have heard called a common
cortical algorithm, meaning that there's this
common thing going on everywhere in the cortex, with the cortical column as the repeating unit. And so our research is
very much along the lines of answering three questions. What do the cortical columns do? How do they do it? And how do they work
together to create our perception of the
world and our intelligence? And as I said, we've made
really great progress on answering those
three questions. So let me just delve into it. I'm going to lay it on
you really quickly here. I'll keep it very high level. And so it shouldn't
be too hard to follow. It isn't that hard to
understand at a conceptual level. The way to think about it is each
of these cortical columns-- oh, I didn't tell
you how big they are. I should do that. They span the entire
2 and 1/2 millimeter thickness of the
cortex, and they're roughly a square millimeter in area. So you can think of them
like little grains of rice. So your cortex is composed of
these little grains of rice stuck next to each other, and
there's about 150,000 of them. So that's what
we're talking about, this little grain of
rice-sized thing, of which you have 150,000
of them in your head. There's about 100,000
neurons in each one of those little columns,
so it's complex. All right, so what does
this cortical column do? Well, the simplest way to
think about it is each one is like a little
miniature brain. Each one builds
models of the world. Each one gets input and processes it. Each one builds a model
of its sensory input. Each one actually
generates behavior. Every column in your cortex
actually generates behavior. And we say that they're
sensory motor models. And why do we call them
sensory motor models? Think about there's a
column that gets input from the tip of your finger. So there's this column for the tip of my finger here. And when I touch something
like this coffee cup, there's a sensation
when I'm feeling it, like an edge, a
little rounded edge. And that gets into the brain. But it's a sensory motor
model because the column actually knows how my finger moves. So as I move my
finger over this cup, the column is being told
how the finger is moving. And therefore, it's
able to integrate both the sensation and
the location of the finger over time to build a model-- in some sense, a
three-dimensional structure of the cup as you move
your finger over it. Like, oh, there's a curve on
this area, and it's down here. There's another
area that's rougher. And there's a handle over
here and that kind of thing. So you might think a
column just getting input from the tip of the finger
isn't really very smart, but by integrating information
over time and movement information over time, it's able
to build models of the world. And model building is the
essence of intelligence. It's how we understand
the world and how we act upon the world--
we build models. So the surprising
thing about this is that every column in the
cortex is building models. That's not how most people
think about neural networks. Most people hadn't thought
about the cortex that way. And so, we can then
ask ourselves, well, how does it do this? What are the methods
it does this with? And we can describe what goes on internal
to these individual grains of rice that are getting
these inputs from your finger. Maybe I should
step back a second and say even vision
works this way. You're maybe not going
to think of it that way, but vision works this way too. When you look at something,
the columns in your cortex each only see an input from
a small part of your retina. It's like they're
looking at the world through a straw, a
very narrow straw. And so each column
then integrates-- as your eyes constantly move
around the world, integrates what they're seeing as
the eyes are moving. And it builds up these models. All right, so how
does it do this? Internal to each column are
what we call reference frames. You can think of
a reference frame like the Cartesian coordinates
you learned in high school, you know, x, y, and z. It's a way of
structuring information. So literally, when you
touch or see something or hear something, your
brain is building this sort of three-dimensional
model of the things as the sensory movements
are occurring over time. And it's assigning knowledge
in a reference frame. It's saying, here's a
structure for this thing, and I'm going to assign what
I'm sensing to different locations in that structure. The brain's reference frames are a little different from Cartesian coordinates,
but it's kind of like that. And so it builds up this model
of things as you touch it or as you move your eyes around
and rotate things in your head, things like that.
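To make that concrete, here is a toy Python sketch of a single column pairing sensed features with locations in an object's reference frame, and narrowing down its candidates as the finger moves. The object names, features, and coordinates are invented for illustration; this is a sketch of the idea, not Numenta's actual HTM implementation.

```python
# Toy sketch: one cortical column learns a sensorimotor model by
# storing which feature it sensed at which location on each object.
class Column:
    def __init__(self):
        self.models = {}  # object name -> {location: feature}

    def learn(self, obj, location, feature):
        # Record what was sensed at this location on the object.
        self.models.setdefault(obj, {})[location] = feature

    def candidates(self, location, feature):
        # All learned objects consistent with sensing this feature here.
        return {obj for obj, model in self.models.items()
                if model.get(location) == feature}

column = Column()
column.learn("coffee cup", (0, 9, 0), "rounded edge")
column.learn("coffee cup", (5, 4, 0), "handle")
column.learn("soda can", (0, 9, 0), "rounded edge")

# A single touch is ambiguous; integrating sensation plus movement
# over time narrows it down.
print(column.candidates((0, 9, 0), "rounded edge"))  # both objects match
print(column.candidates((5, 4, 0), "handle"))        # only the coffee cup
```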
And so now we have these 150,000 columns all learning models. Some are learning models from
the input from your eyes, some from the ears,
some from the skin, some from other
parts of the brain. So now, you want
to ask yourself, where is knowledge of
something stored in your head? If I ask myself, where is
knowledge of this coffee cup stored in your head? Well, it's not
stored in one place. There isn't a single model of
this coffee cup in your head. There are thousands of models. There are hundreds of thousands
of models in your visual areas of your cortex. There's hundreds of
thousands of models in the somatosensory
areas of your cortex. You even have models of
how coffee cups sound as you use them or sounds
they make when they're filled with liquid and
not filled with liquid, things like that. And so we call this the
thousand brains theory. It's not that every
column in your cortex learns models of everything. That's not true. There's 150,000 columns,
and maybe a few thousand of them have learned models of coffee cups. But it's 1,000 brains
because these sort of independent modeling
units are all operating at the same time, which leads
us to the next big question-- how do they work together? Why do we not feel like 150,000
little brains, you know? And this occurs
because the columns, they talk to each other. There's these long
range connections that go across the
entire neocortex, where the columns
communicate with each other. And essentially, what
they do is they vote. Imagine I have this
coffee cup and I'm now touching it with multiple
fingers at the same time. And if I grab this
coffee cup, I may not have to move my fingers
to recognize what it is. I reach my hand in a box
and I grab this thing, go, oh, I know
it's a coffee cup. What's going on there? Column that's getting part
of input from this cup doesn't know what it is. These columns are saying,
I'm feeling an edge. I'm feeling a curve. I'm feeling a flat surface. And here's where it is. Here's where it might be. And they vote together through these long range connections, and they reach a consensus. And they say, OK, the only
thing consistent with what we're all sensing right
now is this coffee cup. So that is what we're
going to say the answer is. It's a coffee cup.
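A minimal sketch of that voting step, assuming each column has already narrowed its own input down to a set of candidate objects. The object names are made up for illustration, and real columns vote over neural activity rather than symbolic sets:

```python
# Toy sketch: voting as an intersection of each column's hypotheses.
def vote(column_hypotheses):
    consensus = None
    for hypotheses in column_hypotheses:
        consensus = hypotheses if consensus is None else consensus & hypotheses
    return consensus

# Three fingertip columns, each uncertain on its own:
finger1 = {"coffee cup", "bowl", "soda can"}  # feels a curved surface
finger2 = {"coffee cup", "soda can"}          # feels a rounded edge
finger3 = {"coffee cup", "pitcher"}           # feels a handle

print(vote([finger1, finger2, finger3]))  # {'coffee cup'}
```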
And so these long range connections across the brain form these representations of what the objects are. And that's really all you're aware of. Vision is the same thing. When you look at something
like this, you can say, oh, I can flash
this image of a dog or cat in front of your
face and say, what is it? You can answer it. And the reason is because
individual columns all have different
models of the things they're trying to
understand and all get a different
part of that input and they vote to say
what is consistent here, what is the only thing
that's consistent? So you have these
thousands of models that are voting to reach
consistent hypotheses about the world. And they do this with these
long range connections. And you are only perceptually
aware of the voting. You're not aware of what's
going on underneath. So normally, when you're
looking at something, your eyes are moving about
three times a second. And you're not aware that
the input in your brain is changing constantly. The columns are all
processing information, changing over time
as you do that. But the voting neurons are
reaching the same consensus-- I'm listening to this guy,
Jeff Hawkins, talking. Even though my eyes are
moving over his head, I don't really see that. So what you perceive is the voting neurons. So that's the basics of this
theory, what we discovered, is that the cortex is
this huge modeling system. It's built up of many, many,
many thousands of models. Each one of those
models is working on the same basic principle. They're not really
doing different things. There's not a
different algorithm for vision and hearing and
touch or anything like that. They're all using the same
algorithm, and they vote to reach a consensus. There are two things
I want to mention about how the details
of these things work, because when we talk about the
relevance of this work for AI, like, do we care how
the brain does this? Is it important how
the brain thinks and how it learns
about the world? You might argue that's
maybe not that important. But I would argue
it's very important. At least we have an example
here of how it does this. And there's principles
we can learn. And those principles, we
can decide whether or not we need them or not need them
or how we'd implement them differently than a brain. So there's two more principles
I want to talk about in a little bit more detail. One of these has to do
with the way neurons work. So neurons are the
cells in the brain. There are about 18 billion
neurons in your neocortex. And as we typically model
them in machine learning, they're very simple structures-- what's called a point neuron. It's like a little
circle and a whole bunch of inputs come into it. But real neurons aren't
like that at all. Real neurons have this complex
structure called dendrites. It's like a tree. You've probably seen pictures of
these, with branches like a tree coming out of each cell. And most of the synapses are
arranged along those branches on the dendrites. Well, we now understand what's
going on in those dendrites and why they're there and
how they process information. In fact, most of the processing
that goes on in your brain actually occurs inside
the dendrites of a neuron, not between neurons. And most of the synapses are-- these connections
are on the dendrites. And the simplest way
to understand this is that these dendrites allow
the neurons to represent something in different contexts. I won't explain how
it does that here. But imagine I have some input. I want to represent that
input in different contexts. I'm seeing a dog. It's my dog in my living room
doing something I'm expecting it to do at this time of day. The brain constantly has to
provide context for everything it's doing. And these dendrites do that. They're a very important
component in how it works. And the last thing I
want to talk about, one more essential property-- I'm going even deeper
now into neuroscience-- is something called sparsity. If you were to look at the
neurons in your brain-- and let's say you just look at 10,000 of them that are sitting there representing something as a group. Typically, you would only see
1% or 2% or 3% of the cells active at any point in time. Most would be quiet, silent, not
doing anything, and maybe 2%, or say, 200, are active. And a moment later, a
different 200 are active. A moment later, a
different 200 are active. This is the way the brain works. If all the neurons in your
brain become active at once, it's called a seizure. So we don't want that. Now, this is different from how we typically build artificial neural
networks, where all the neurons are somewhat
active at any point in time. But in the brain,
it's not like that. And there's another type
of sparsity, which is called connectivity sparsity. If I have two groups
of neurons and they're connected to each
other, we typically do that in machine learning
by connecting all the neurons to all the other neurons. But in the brain,
you don't see that. You see a very
sparse connectivity. Now, I mention all this
because these are actually the properties we
think are absolutely essential for creating
intelligent machines and for creating an
AI, artificial general intelligence. I'll give you just a brief hint
at why these properties might be important. Take activation sparsity. Often, in the brain, we are
not certain of the answer to something. We're not sure what
we're looking at. We're not sure what's happening. We're trying to guess
what's going on. So we have some
kind of uncertainty. A mathematician would
represent uncertainty using perhaps a
probability function. They'd say, oh, well, there's
x probability it's this and y probability it's that and
so on, and they add up to one. That's what probabilities do. The brain doesn't
work like that at all. It turns out when you use
sparsity, sparse activations, the brain can represent multiple
hypotheses at the same time. So let's say I'm using 200
neurons active to represent something out of 10,000. So I have 200 active
out of 10,000. It turns out you can
activate five or 10 such patterns at once. And so you might have a
couple thousand neurons active at the same time. And you think that would
make a big muddled mess, but it turns out it doesn't. It turns out because the
brain works on sparsity that all 10 hypotheses can be
processed at the same time. It's a different way of
handling uncertainty. The brain is
constantly processing multiple simultaneous
hypotheses at the same time, and nobody gets confused. And it's only because they
use sparse representations. So this is like a fundamental
information processing idea, like binary digits in computers.
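Here is a small NumPy sketch of that union property, using the 200-active-out-of-10,000 numbers from the talk. The overlap test is a simplified stand-in for what the real circuitry does:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 200  # 10,000 cells, 200 active per pattern (2%)

def random_sdr():
    # A sparse binary pattern: k of n bits on.
    v = np.zeros(n, dtype=bool)
    v[rng.choice(n, size=k, replace=False)] = True
    return v

patterns = [random_sdr() for _ in range(10)]
union = np.logical_or.reduce(patterns)  # ten hypotheses at once, still ~80% zeros

for p in patterns:
    assert np.sum(p & union) == k       # every stored hypothesis still matches fully
print(np.sum(random_sdr() & union))     # an unrelated pattern barely overlaps (~40 bits)
```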
Yeah, so we think these things are essential. So I'm done. We've made a lot of
progress on studying how the neocortex works. We haven't figured it all out,
but we have the big picture. We have a lot of the details. There's more details
to be figured out. But it's allowed us to
sort of lay out a roadmap, like OK, I have a good sense
of what intelligence is and how the brain does this. And we can start building this
stuff into machine learning. I should point out
that everything I'm talking about here has been
published in scientific papers. And it's also discussed in the
book that I wrote recently, "A Thousand Brains." But this was like the
shortest introduction I think I could give to you. So I'm done with my part there. PETER NORVIG: Thanks, Jeff. That was great. I did have a question about
how the neocortex works. So you mentioned your
dog in your living room. And presumably, that dog's
got the same kind of neocortex and the same kind of
little grains of rice, but they're never
going to speak English. So something
different is going on. And then, on the
other hand, we see all these things
of crows and ravens doing really smart stuff, but
they've got a pallium and not a neocortex. JEFF HAWKINS: Yeah. PETER NORVIG: And
one of the comments says even jumping spiders have
plans and memories and so on. So what's going on there? JEFF HAWKINS:
Yeah, so let's talk about the second part
of your question first. So it turns out birds don't
have a proper neocortex. But they do have these
things called blobs. That's the technical
term they sometimes use. And it's become clear recently--
there's a lot of evidence-- actually, the same
neural mechanisms that are going on in the cortex are also going on in those blobs. It's possible that
nature has discovered multiple ways of building
models of the world. I'm sure it has. But the way that's
going on in mammals is a really powerful way. And it would have
evolved a long time ago because it basically allows us
to move around in the world. And so I would
suspect any animal like a bird is going to have
the same basic mechanisms even though they may not be, quote,
equivalent to cortical columns, and they may not have an equivalent cortex. In fact, the mechanisms
that we think are going on in the neocortex,
the very specific mechanisms, were first discovered
in an older part of the human brain called
the entorhinal cortex and the hippocampus. And there aren't proper
cortical columns there, but these cell types
and the circuitries exist in a different form. It's like what
nature did is just discover these neural processes
that allow us to build models of the world and then
just repackage them in different ways. And then when it came
to the mammalian neocortex, it found a very
efficient packing scheme and said, oh, I can
make a lot of those now really quickly just
by replicating this. So I don't know about
jumping spiders. They may have a totally
different way of doing it. And I wouldn't say the jumping
spider isn't smart or not. I say intelligence has
to do with learning a model of the world
and using that model to act upon the world. It's not about being able
to solve particular tasks. The jumping spider may have
genetically hardwired algorithms that tell it exactly how to do what it does. I don't know enough
about jumping spiders. I don't know anything
about jumping spiders. But if an animal can learn
models of the world-- there may be other
ways of doing it, but this is the way
that mammals do it. And I think it's the same
way that birds do it. And it could be another-- it's probably an evolution
of a very old mechanism even though it's been packaged
differently in a human. Now, the first part
of your question had to do with
dogs and language. Really, it had to do with language. And I did address
this in the book a bit because language
is an odd thing. First of all,
language only appears on one side of your brain--
the left side, which is unique. It's almost the only
thing that's like that. And so, why is it unique? Does it work on
different principles? Well, if you look
at the neural tissue in the regions of the
cortex that do language, they look a lot like the
neural tissue elsewhere. I've heard two good hypotheses
why humans have language and other animals don't. I don't know if
either one is right, but I'm happy to
share them with you. [LAUGHS] One has to do with-- language requires very fast processing, much faster than most of the things we do. And if you look at the
language areas of the brain, there's extra insulation
called myelination, which allows them to operate faster. And that's the
hypothesis why it occurs on-- one of the
hypotheses why it occurs on one side of the
brain and not the other. And that insulation is
expensive, biologically expensive, so you don't
want to do it everywhere. And so that's one hypothesis. Another hypothesis is-- which
I also think is interesting-- is that for the cortex to
create language, you have to be able to control-- the cortex has to be able
to control certain parts of your musculature. The lungs, the voice box,
the mouth, and the tongue have to be all very tightly
controlled by the cortex. And there's some evidence
that the pathways that come from the cortex to the rest
of the body in other animals do not project in the same
ways, that other animals are not able to move their voice
box because the cortex is physically not connected to it. And, at least in
humans, that pathway is developed very
strongly. So I don't know the
answer to this question. But the evidence we have so
far does not suggest language is fundamentally different. It's going to be
different in quantity or different in
certain attributes. There may even be an extra
cell type or something. But if you look at the
anatomy of the cortex that controls language, it's almost
identical to anatomy elsewhere. You can also make an argument
that the structure of language is similar to the
type of structure we see in objects in the world. It's this hierarchical
recursive structure. And these neural
circuits can do all that. So that's the best I
can do on that one. [LAUGHS] PETER NORVIG: Thank you. And Subutai, you're
going to tell us how machine learning
fits into all this. SUBUTAI AHMAD: Yeah,
thank you, Peter. Yes, so I plan on taking
about 5 or 10 minutes as well to kind of describe
the details of our research roadmap. Our approach at Numenta is quite unusual and really exciting. So I'll make sort of one high level comment first. Our process here
is to look at kind of different elements of
the thousand brains theory that Jeff described and the set
of the fundamental capabilities that we know have to be present
in general intelligent systems. And then, for each of
those capabilities, we try to understand
what can we learn from the neuroscience at a
very basic mechanistic level that we can actually
implement as algorithms. And we're not trying to
match a specific neuroscience experiment or try to explain
some sort of high level property or manifold
or anything like that. We're trying to extract sort of
fundamental algorithmic lessons that can be incorporated
into a coherent system, taking neuroscience as a set
of constraints and mechanisms. So it's very much a
computer science approach. Now, there's a ton of
fantastic research going on in deep learning today. And as a small research
lab at Numenta, we try to focus on a
specific set of capabilities that we think can solve big
problems with state of the art deep learning systems
today and where we can learn from the neuroscience. So let's get into it. So using the 1,000 brains
theory as a framework and taking sort of all
this stuff into account, I'm going to describe three
aspects of our roadmap. And I'll kind of go in the
reverse order that Jeff went. So I'll talk first
about sparsity. So that's a fundamental aspect
of our research roadmap. So Jeff discussed that the
brain is really sparse. Very few neurons are actually
active at any point in time-- somewhere around 2% or less of the neurons in the neocortex. And even when two sets of
neurons project to one another, the connectivity between them
is also extremely sparse-- somewhere around 5% of the neurons are actually connected. So most of the neurons are
not active, and most of them are not connected. This is extremely sparse-- much, much sparser than what we
have in typical deep learning systems. And the question is, is
this just happenstance? Or is there an important
benefit to sparsity? And it turns out there are
actually several benefits. Jeff mentioned one about
being able to represent multiple hypotheses
simultaneously. I'm going to talk about
two that we really looked at in the context of
machine learning systems. The first pretty obvious
one is efficiency. When things are sparse,
when they're silent, they're not consuming power. And we all know today
that deep learning systems consume a huge amount of energy
and are incredibly inefficient compared to the brain. The neocortex actually
only uses about 40 watts of power, which is incredible. It's like a little light bulb. And by incorporating sparsity
into deep learning systems in the way that it seems to be
implemented in the neocortex, we've actually been
able to recently show that it's possible to improve
the efficiency of deep learning systems by several
orders of magnitude. So if you look at convolutional
layers and linear layers, we can actually improve
efficiency by over 100 times-- two orders of magnitude. We did this on FPGAs,
where we can directly control the circuitry
and look at things at a very detailed level. More recently, we've
actually now started to see that we can
replicate this on CPUs and then potentially
even on TPUs and GPUs. Now, for those
latter systems, they might need some new circuitry. And they'll need to
evolve towards supporting sparsity more inherently. But we think there's
a tremendous amount of promise to this. And we're starting to understand
at a very detailed circuit level what's required to sort
of fulfill kind of efficiency promise with sparsity. Very recently, we've started
scaling some of these to ImageNet-sized data sets
and transformer architectures. So we're pretty confident
that the core principles will actually apply to even some
of the large scale networks that we're using in
machine learning today. Another property of sparsity
is that sparse vectors-- so these are very
high-dimensional mathematical vectors that are mostly zero-- actually minimally
interfere with one another. When you have sparse
representations, they don't collide
much with one another. And because of that, it actually
looks like sparse systems can be far more robust
to noise and very robust to perturbations
compared to typical DNN systems. We can sort of characterize
this mathematically. And we've shown in some
experiments a couple of years ago that if you just add
random noise or perturbations, these sparse systems can
be a lot more stable. So we think robustness--
when we think about building systems
that are not brittle, we think sparsity should be one
of the core components of that.
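As a rough illustration of both kinds of sparsity on a single layer, here is a toy NumPy sketch: a mask keeps about 5% of the connections, and a k-winners rule keeps about 2% of the units active. The percentages mirror the talk, but the layer itself is an illustrative simplification, not Numenta's production implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 1024, 1024

# Connection sparsity: remove ~95% of the weights up front.
weights = rng.standard_normal((n_out, n_in))
weights *= rng.random((n_out, n_in)) < 0.05

def k_winners(x, k):
    # Activation sparsity: keep only the k largest activations.
    out = np.zeros_like(x)
    top = np.argpartition(x, -k)[-k:]
    out[top] = x[top]
    return out

x = rng.standard_normal(n_in)
y = k_winners(weights @ x, k=int(0.02 * n_out))

print(np.mean(weights != 0), np.mean(y != 0))  # ~0.05 connected, ~0.02 active
```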
So kind of summarizing the first aspect of our roadmap-- we think sparsity is going
to be critical for scaling AI systems. It's really the only way we're
going to get to brain scale and much larger systems, and it's also going to be critical
our research roadmap has to do with perhaps
the most fundamental thing in neuroscience and
deep learning, which is the neuron itself. And Jeff described a little bit
about the dendritic branches and how they incorporate
context and so on. And I think this is one of the
most underappreciated aspects of neuroscience,
just how complicated an individual
neuron actually is. And the neurons we use today
in deep learning systems are simple point neurons. They just take a linear
weighted sum of their inputs and apply a nonlinearity. But real biological neurons
are nothing like that. And I think in machine learning,
researchers sort of know this. But the prevailing
viewpoint is that, OK, if we just add more
and more parameters and just make the
system bigger, we can sort of make up
for the increased complexity of neurons. And that's not true. There are important functional
properties of real neurons that we should consider. Real neurons have complex
temporal dynamics. They actually have a diversity
of different learning rules depending on where
in the dendrites you are and depending
on the situation. They have sophisticated
heterogeneous morphology and structure. And we think these
properties are actually going to be important
to incorporate in intelligent systems. Very recently, we've been
looking at how we can do that. We think some of
these properties are going to make neurons and
networks amenable to continuous learning, so the ability
to continually learn new things without forgetting
what's happened in the past. We have shown in
some of our papers that biological neurons are
actually constantly making predictions and they're
learning from mistakes in their predictions. This is a very different
learning paradigm from the typical kind of
supervised learning paradigm that we use with back
propagation today. And all these learning rules
are actually completely embedded within the neuron. There's no external
homunculus or system that's computing some sort
of a global loss function. And there's no sort of
rigid back propagation phase that's going on globally
throughout the network. So understanding
these kinds of details will lead to the
ability to develop these continually
learning, completely self-supervised systems. They will have completely
local learning rules, and therefore can scale really,
really well in hardware. And very recently, we've shown
that by augmenting the neurons we use in deep
learning, we can make them sort of closer
to biological neurons by incorporating
context and sparsity and updating the learning rules. We can mitigate
to a large extent some of these issues
around catastrophic forgetting that you see in
sort of classic deep learning systems.
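A rough sketch of one way to model such a dendrite-augmented unit, loosely patterned on Numenta's published active-dendrites work: the feedforward response is gated by whichever dendritic segment best matches the current context vector, so the same unit behaves differently in different contexts. The shapes and the sigmoid gate here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_ctx, n_segments = 64, 16, 8

w = rng.standard_normal(d_in)                        # feedforward weights
segments = rng.standard_normal((n_segments, d_ctx))  # dendritic segments

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dendritic_unit(x, context):
    feedforward = w @ x
    best_segment = np.max(segments @ context)   # strongest context match
    return feedforward * sigmoid(best_segment)  # context gates the output

x = rng.standard_normal(d_in)
print(dendritic_unit(x, rng.standard_normal(d_ctx)))  # same input, context A
print(dendritic_unit(x, rng.standard_normal(d_ctx)))  # same input, context B
```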
We're hopeful that we can get to a purely local unsupervised
learning system as well that's as
good as systems that are trained via
end-to-end back propagation. So this is all
research in progress. We definitely have a ways to go. But incorporating these
essential properties of real neurons is a second
really important part of our research. The third thing I wanted to
highlight is reference frames. So Jeff described this earlier. It's an area that
actually Jeff Hinton has been thinking about
for over 40 years. There's a paper by him back
in 1981 that discusses it. And I think it's
actually really fun to go back and read
those old papers. But as Jeff Hawkins mentioned,
with the discovery of grid cells and place cells
in neuroscience, we understand a lot more
about how reference frames are implemented in the brain and
how critically integrated it is with movement
and behavior. So from a deep learning
standpoint again, going back to the machine
learning and practical side, incorporating this is
going to be critical. Reference frames
essentially allow us to create a single
invariant structure, or a stable structure,
that completely describes an object or a
concept in a manner that you can actually
navigate and manipulate. So a simple example is-- imagine I show you
a strange new car that you've never seen before. With a single image,
you can instantly create a representation of it
based around reference frames. And you can imagine
immediately what it would look like from the other side. You can imagine how it would
feel and how it would sound. You can tell immediately, OK, is
it going to fit in my garage-- is it going to fit
in your garage, because you have a reference
frame for your garage too, and you can relate these
two reference frames. You could probably
imagine different ways you can move the car
into your garage as well. So all of these things
sort of inherently come from this structure. And by incorporating reference
frames and these properties into machine
learning and creating these invariant
structures, we think we'll be able to dramatically
increase the generalization power of our systems and
dramatically lower the number of training examples
that we'll need. These systems will be able to
plan and naturally integrate behavior, because moving
around the reference frames is an essential part
of how they're created and how we learn about
structure in the world.
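As a toy illustration of relating two reference frames, here is a sketch in which a car described in its own object-centric frame is mapped into the garage's frame so the fit question becomes a simple comparison. All the dimensions and the transform are invented numbers:

```python
import numpy as np

car_corners = np.array([[0, 0, 0], [4.5, 1.8, 1.5]])  # car bounding box (m), car frame
garage_size = np.array([6.0, 2.5, 2.2])               # garage box (m), garage frame

def to_garage_frame(points, rotation, translation):
    # Rigid transform from the car's reference frame into the garage's.
    return points @ rotation.T + translation

rotation = np.eye(3)                     # drive straight in
translation = np.array([0.5, 0.3, 0.0])  # where the car ends up

corners = to_garage_frame(car_corners, rotation, translation)
fits = bool(np.all(corners >= 0) and np.all(corners <= garage_size))
print(fits)  # True: the transformed car lies inside the garage box
```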
It's going to be critical for robotics. And since these systems also
have the other properties I mentioned, they will
also be power efficient, continually learning, and so on. So we still have a ways to
go on this side as well, but this is a very sort of
interesting and active area of our research on that. So I've sort of covered
three different aspects here. I discussed sparsity
and how we can use that to get dramatic
efficiency gains on hardware and improve robustness. I talked about the importance
of the neuron model itself and how that can
lead to continually learning systems that can learn in a
local, self-supervised manner. I talked about reference
frames and grid cells and how we can get
dramatically better generalization, integrated
sensory motor processing, and smaller kind
of training sets. If you kind of step
back, one big theme maybe I want everyone to kind
of take away from this roadmap is that each of these are
not just point solutions. We're not trying to solve each
of these problems independently of the other. Instead, what's really exciting
to me is that neuroscience actually tells us that
these are all components of a big integrated framework. So when you think of a
real-time, autonomous intelligent system embodied
in some environment, all of these properties are
going to come into play. And ultimately,
you can ask, do we need to use brains to get
to intelligent systems? Ultimately, I think
that's where the hope is. The brain provides a
concrete existence proof that these algorithmic
components can be put together into a single working
intelligent system that's consistent. The whole theory
of cortical columns shows that there's a common
microcircuitry and common architecture that
if we implement it-- and through this model building process, we can scale it-- that single system
is going to be able to learn a
diverse range of tasks. It's going to be able to learn
these continuously in a very, very power-efficient way. So when we look at
the neuroscience, these are the sorts of
things that really drive me. And that's what we're trying to work on at Numenta. So hopefully, that gives
you a brief sense of-- I went very quickly, and I'm
happy to take more questions on the details. But that gives you a sense of
kind of how we approach things and how we're
taking neuroscience into our research roadmap. PETER NORVIG: That's great. And of course, there's
so much going on now in deep learning research. And I'm thinking, you
remind me of a lot of things that seemed
like they align and maybe some things
that don't, right? So at Google, we
have this MUM model, which is multimodal, multitask,
multi-language, bringing in video and robotics and so on. So it seems like that's
aligned with your direction. We've got these
switch transformers that do voting and have large
portions of the networks turned off to conserve energy. OpenAI had a sparse toolkit. We have a similar kind of
sparse toolkit called RigL. LeCun's been doing these
dense-to-sparse pruning methods. And you mentioned some
of the Hinton work. When you look at all
this, what do you see that's aligned
with your direction? And what do you see
where maybe they're missing out from the
direction you're going? SUBUTAI AHMAD: Yeah, I
think this-- you know, we look at a lot of that stuff. I think that stuff's
really exciting, and we can learn
from those as well. I think many of those
concepts are very much aligned with some of the stuff
that we learned from the brain as well. I think what we get,
again, from the brain is a lot of detailed mechanisms
and a very consistent sort of integrated structure. These are, again, not individual
sort of point solutions. They all have to
somehow work together. And there's not going to be
too many ways of doing it. And so the brain gives
us an existence proof-- here's something that
we know is working. And the field of neuroscience
today is exploding. There's so much information
and data coming out of it that it gives us a set
of constraints and a view into a really detailed
structure that we know works, that we could try
to reverse engineer. And hopefully, we
can take the best of what's done in
deep learning, take all of the stuff we know from
neuroscience, and all of those will be sort of important in
creating intelligent systems. JEFF HAWKINS: Yeah, I'll add
onto that too a little bit. The way I view it
is people who work in machine learning
and AI, we all want to achieve somehow
the same result in the end. And I don't think there's going
to be multiple ways of doing it, just like there aren't
really multiple types of Turing machines. You know what I'm saying? There's variations on
a theme, but there's going to be some common
principles that we use in AI in the future. We're all trying to get there. The question is, do you
need to study the brain to do that or not? Can you just get there with just
positing ideas and doing it other ways? Now, I don't think anyone
can answer that question. We've always felt
like the quickest way to get there is by
studying the brain. But I think all
these ideas are going to coalesce at some point. That's my point, Peter. I don't think we're going
to end up in a future where there are five different ways of
building intelligent machines. I don't think that's right. I think it's going to be more
like computers, where we have one set of fundamental ideas
that can be implemented in different fashions
and different variations on a theme. So, to me, it's less
of a competitive world. It's more like, we're all
trying to get to the same place. We can all bring different
things to the party. I think by studying
the brains, we've got a really deep understanding
of many of these principles that-- even taking Jeff Hinton's
capsules, it's similar. But we can see that he's
missing the motor component and other things. And so, we know that
has to be added. PETER NORVIG: And going
back even farther, I was a grad student
in the 1980s. And we had Minsky's
society of mind model. Is that related to
the thousand brains? SUBUTAI AHMAD: You know, I
was at grad school in the '80s as well, late '80s when
back propagation was just coming out. I think the society
of mind is-- I mean, I think it's
actually very, very different from the thousand brains theory. You know, the
society of mind, you had tons of really small,
very special purpose bots that would sort of work
together pretty well. What we learn from
neuroscience is that it's not like that at all. There's a single sort of
consistent microcircuit, like the cortical column,
that's not simple. It's somewhat complex. But then it's repeated
150,000 times. And it's extremely
general purpose. It can learn anything
that we as humans learn. It's not designed to do
any one specific thing. And it's a learning system. It's continually learning. It operates on reference
frames, all the stuff that we talked about. So when you look at a
very, very high level, it might seem similar. But when you look
at the details, it's actually diametrically
opposite, I think. I don't know, Jeff, if you
wanted to add more on that. JEFF HAWKINS: I agree. I agree. Yeah. I mean, I was excited
when that book came out. And then I was
disappointed, because it was like, well, there's a lot
of ideas but no mechanisms and no biology and no-- I was like, ahhh. And it was all these
different things. It was like, OK, you could have picked these 100 things. Maybe you could pick
another 100 things. But what's really
amazing about the brain is we have this common
algorithm that does everything. And now we understand
why it's so powerful. It's just a general
purpose modeling system, assuming that you
have something that has sensation and movement. And after that, you
can learn anything. Well, anything that's
learnable, I guess. We don't know what we can learn. PETER NORVIG: OK. Let's see. Well, let me take
a chance to do-- while I've got
Jeff here, there's one question I
always wanted to ask. And then we'll go to
the audience questions. So Jeff, when the
PalmPilot came out in 1997, the first
successful personal digital assistant, some people then worried
maybe the screen is too small and the tiny little
keys are too small. And other people said, oh,
the portability, that's really awesome. Now, if you told me then that 20
years later an assistant would have no screen, no keyboard, no
portability, because it's just a speaker that has to be
plugged into the wall, I would've said you're crazy. And yet, that device
sells pretty well from several manufacturers. So how did personal
assistants get here? And where are they going? And when are we really going
to be able to talk to them? JEFF HAWKINS: Yeah, you
know, the good thing about this, Peter-- some people know the story. My first love was neuroscience. And I was a graduate student
at Berkeley in the late '80s. And I found out
rather surprisingly that it wasn't possible to be a
theoretical neuroscientist back then. That wasn't a career path. You could not do it. We can go into why. And so I ended up going
into computer science as a temporary job. I said I'll go work on
computers for four years, and then I'll come
back into neuroscience. And that's when we started Palm. And the whole thing just-- and, of course,
when I got started, I realized, oh my God, everyone
in the world's going to-- I knew this. I knew billions of people
are going to own computers in their pockets. I said, oh my
gosh, this is going to sweep the computer industry. It was very clear to me by 1989-1990. And so I got really
into that, right? And we had a lot of success. But at some point
along the path, I said, I want to go back to brains. And so I left. I just got up and left. Everyone knew it. Everyone knew that's
not what I wanted to do. And as important as it was,
I felt that studying brains was more important for
the future of humanity. And so, now your
question to me is-- the shorter answer
to your question is I don't think about
that stuff at all. I don't think about
mobile computing. I don't think about
digital assistants. I don't think about whatever-- OK, Google or Hello
Siri or Alexa. [LAUGHS] I just don't. In fact, I'm kind of a
Luddite in some ways. It's like, I don't have the
audio digital assistant. And I'm not really
a gadget-y guy. PETER NORVIG: OK. But-- SUBUTAI AHMAD: As
someone who invented the personal calendar,
he's actually really bad at calendars. PETER NORVIG: Someday,
we'll be talking to one of your intelligent
digital brains. JEFF HAWKINS: Yeah, yeah, yeah. PETER NORVIG: But we'll give you
another decade to get that one. JEFF HAWKINS: [LAUGHS] I'm
just focused on brains and AI, you know? PETER NORVIG: Let's go to some
of the audience questions. JEFF HAWKINS: Let's see. I'm looking at them too. PETER NORVIG: OK, we've
got one from Marc Weiss. JEFF HAWKINS: Oh. PETER NORVIG: Any
major new insights after the book was published? JEFF HAWKINS: Well,
full disclosure, Marc is someone we know very well. He's actually an
investor in Numenta. So let's just be honest. Hi, Marc. Well, discoveries that
we've made since-- what was the question again? I'm sorry. SUBUTAI AHMAD: Since
the book was published. JEFF HAWKINS: Since
the book-- well, the book came out in
March, so not a lot. You know, what has
happened is that some of the predictions in the
Thousand Brains Theory-- one of the sort of
major predictions which I don't think anyone
would have anticipated is that throughout
the neocortex, in every cortical column, you'd
see an equivalent of grid cells. These cells that
we know about exist in other parts of the brain. So part of our
theory is like, hey, those mechanisms in the
old part of the brain are replicated in the cortex. And we're starting to
see papers come out where they're finding this
in primary visual and primary sensory cortex. So this is sort of
the key underpinning. That's not a discovery. That's empirical
evidence supporting it. In terms of discovery,
I think there's one idea on the neuroscience
side that I'm working on, which is pretty cool. I haven't published it
yet, so it's not anywhere. But it's a little
bit more detailed. It's like, how does the brain
know how everything is moving? How does the brain know
where your finger is and how it's moving
through the world? And how does it translate
that from a egocentric to a body-centric to an
object-centric reference frame? And I think I've made some
pretty big discoveries on that. So I'm writing up a
paper about that now. And Subutai, you might
answer that question too, because, every week, it seems
like we're making progress on the AI machine
learning stuff. It's like, I can't
keep up with it. I don't know if there's anything
you want to say about that or not. SUBUTAI AHMAD: Yeah, I
mean, we're constantly surprised as we start
to incorporate stuff into deep learning, how
compatible some of these things can be. And initially, it's
not obvious that you can take some of
these properties and implement them in a
deep learning system, but we're able to do
that and make progress. And ultimately, I
think we're going to end up with
something that looks very different from today's deep learning systems. But you can actually
make incremental progress implementing the
neuroscience approach. And that's kind of interesting. JEFF HAWKINS: Yeah,
I'll throw out one-- SUBUTAI AHMAD: We never
expected that at first. JEFF HAWKINS: I'll throw out
one teaser, because I'm not going to-- it's a teaser,
because we're not going to tell you how we did this. But we've recently
figured out how to get a lot of this stuff
to work on CPUs, which most people don't
think CPUs would be very good at this stuff. So I'll leave that as a teaser. [LAUGHS] PETER NORVIG: I do know
there's a couple of companies that are looking at using
CPUs for deep learning. JEFF HAWKINS:
Yeah, well, there's a huge neuromorphic computing
industry that people feel-- I don't know if it's
an industry yet-- where people are doing
some radical new designs, like Rain Neuromorphic
I think is the name. And then-- PETER NORVIG: Yeah. JEFF HAWKINS: --people
trying to figure out how to enhance CPUs and GPUs. Everybody's racing to do this. Everyone was caught with
their pants down a little bit. Nvidia sort of took over
the AI computing world and all the other
players are trying to figure out how to catch
up and leapfrog them. PETER NORVIG: Yeah. OK, let's go to
another question. You mentioned that
cortical columns perform independent
frame-of-reference transformation operations. Have you performed
experiments to confirm this? JEFF HAWKINS: We
don't do experiments. We're like a theoretical group. So theoretical physicist
versus experimental. We work with experimental labs. We have lots of collaborations. We have visiting scientists. And so, as I
mentioned, a theorist can't tell experimentalists
what to do. [LAUGHS] They don't
want to hear that. But we do find people
are testing our theories one way or the other. I did mention this. Some new research
which just came out at the beginning of
this year in January. The people were-- and
they're citing our work. But I imagine those
experiments that came out had been started earlier. Maybe not. I actually don't know. But where people
are-- in some sense, even if they're not explicitly
trying to test our theories, they are. And they're aware
of our theories. And so, when they get these
results, they come back and say, yes, this
is compatible. This is what you predicted. So that's happening
at its own pace, but we don't do any kind of
empirical experimental work in our-- PETER NORVIG: What's the
coolest experimental result you've seen? JEFF HAWKINS: What do
you think, Subutai? SUBUTAI AHMAD: Yeah, there's
the border ownership cells potentially, where-- JEFF HAWKINS: Oh, that
was interesting one, yeah. SUBUTAI AHMAD: Those are
pretty interesting, where-- you know, you typically think
of neurons as representing, let's say, some feature
at a point in your retina. But it turns out that even very
early on in your visual system, neurons actually
respond to features that are relative to the
location on the object itself. And that's even in primary
visual cortex in V1, that the very earliest
stages of processing are sort of equivalent to kind
of the first convolutional layer in a network. You actually still see some hint
of allocentric representations. JEFF HAWKINS: This
gets back to the-- we were talking earlier
about context, right? So these cells-- SUBUTAI AHMAD: Yeah. JEFF HAWKINS: --like, if it's
detecting a vertical line or edge or something
like that, it'll say, well, if it's the hind leg
of a dog, it's going to fire. But if it's the foreleg of the
dog, it's not going to fire. I mean, it's that kind of-- SUBUTAI AHMAD: Yeah, even
though it's the same angle as-- JEFF HAWKINS: It's the
same input to that column. So that just tells you
that the column is smart. One argument could be
that somebody else is telling it from
elsewhere in the cortex. But our theory says,
no, no, that column knows what this thing is. SUBUTAI AHMAD: Yeah,
and they've actually ruled out the
possibility of feedback from above, because
it happens so quickly. There just isn't time for
information to propagate. It has to be computed
locally within that column. JEFF HAWKINS: Yeah,
so the thing that I-- I don't want to put too
much weight on this, because experimental
results have to be vetted. They have to be reproduced. And so, when someone comes
up with a new result, you have to be patient. You have to wait a while to
see if it's reproducible. But as I did
mention, I was really thrilled to see people
finding evidence of grid cells in primary visual
and primary somatosensory cortex, which is a key
prediction of our theory. It doesn't add new
insights to it. It's just like, yeah,
it's nice to have this sort of supporting
evidence coming out. PETER NORVIG: All
right, another question. Are there applications to
health care and life science? JEFF HAWKINS:
[LAUGHS] Well, yeah, but we're working at a
different level, right? We're working at the fundamental
algorithmic processing levels. One other thing
that's interesting that's come out since the
book came out in March is we've had people reach out
to us from different fields who say, hey, this is
helpful in my field, whether that's
pedagogy-- you know, it's the science of teaching--
well, this is really helpful-- or someone who studies psychiatric diseases-- this is really helpful. And so people are trying
to apply the general theory to thinking about their
individual fields. So that's kind of
related to that question. It's not something
we do, but it's nice to see people
are doing that. But we're not anywhere
close right now to-- we're not doing
practical applications. We're just saying we can
speed up these networks by a factor of 100. We can make them more robust. We can do all these
things, sort of basic algorithmic-level work. PETER NORVIG: OK,
another question. JEFF HAWKINS:
"Biological neurons needs to satisfy a lot of
biological constraints. How do we distinguish
which properties of biological
neural networks are key for general intelligence
and which properties are not?" Should I repeat that loudly
or can everyone read that? [LAUGHTER] An answer-- well, this is a
general question about theory. Any kind of theory
in any field, right? When you're in a
scientific field, it doesn't matter what it is. All this
experimental evidence, all this empirical evidence
is piling up that people don't understand-- at
least, in many fields, they don't understand it-- and then you have to
come up with a theory about how the system works
or how to explain that. And you have to select. You have to pick what things
you're going to focus on and what things you're
not going to focus on. This question's
like, well, how do we know what
properties of neurons that are important, which
ones aren't in our theories? Well, the answer is it's hard. It's really, really hard. We don't decide upfront. We don't say, oh, we just think
these things are important. These things are not important. We use a combination of stuff. Well, and so, the evidence,
for example, about dendrites, is that, well, there's
a lot of evidence they're doing neural processing. About 20 years ago, they
discovered these things called dendritic spikes. And so we now have this
empirical evidence about them and what they do. So they're crying out to be explained. But generally, the
answer is we go as deep as the theory requires. So our theories touch on
some things, like certain types
to the dynamics of gates and ionic channels
and things like that. We haven't needed to do that. If the theory all
of a sudden says we have to get down to
that level, we'll go there. But the way I look at it, our
theories cannot contradict any biological facts. That's the way I look at it. If there's a biologically
determined fact and our theories
contradict it, then we're going to modify our theories. And it doesn't mean
our theories have to explain every biological
fact because we can't do that. It's just too many things. But we'll keep adding
biological details as necessary. And that's the general
answer to that question. And it's not easy. And it takes a long time. And it's fraught with bad turns
and things you try and don't work out and things like that. PETER NORVIG: OK, so
that's a good explanation of how you look at what
biological aspects are important and which
ones can be ignored. I guess there's also a
question of if you're going to build a brain
out of different stuff, could you do it? And what would you
have to reproduce? And what could you do
completely differently? SUBUTAI AHMAD: Yeah. JEFF HAWKINS: Yeah. SUBUTAI AHMAD: You know,
as a computer scientist, the actual physical substrate
is not that important to me. What's important to
me is the algorithm. And you can then re-implement
the algorithm in anything that's Turing complete. So it really gets
down to understanding the algorithms and the details. And then we can-- you don't have to implement it
with ions and neurotransmitters and biochemical stuff. We can implement it
just fine on computers. I think the key is-- JEFF HAWKINS: Another-- SUBUTAI AHMAD: Yeah, go ahead. JEFF HAWKINS: Yeah,
I agree with that. I interpret the question
slightly differently. It's like, well, one way is what you said: what part of the biology do we have to model, right? The other part is
like, hey, are we going to be doing
this on silicon? You know, or is it going
to be something else? And are we going to use
the same processes we use to design silicon chips today? Are we going to use
different processes? That's a really hard
question to answer. If you look at the history
of computers, of course, they started out
with vacuum tubes. And then they went to
individual transistors. Then they went to some
sort of integration. Now we're at billions of
transistors on a chip. You know, what's
going to happen here? I don't know. But maybe we'll find some-- it's not going to be like
quantum computing or something like that. But there might
be new physical substrates
to implement this stuff. I mentioned briefly this
company Rain Neuromorphic. I can't remember the proper name of it. Sorry. SUBUTAI AHMAD: Yeah,
Rain Neuromorphics. JEFF HAWKINS: Rain Neuromorphic. I don't know if they're
going to be successful, but they have a really
interesting, new approach on how to build silicon
chips that do this stuff. So it's just fun
to look at that. And maybe they're right. Maybe they're not. I don't know. But I think we'll see incredible
innovation over the coming decades. I know that I can't anticipate
what's going to happen. I'm not smart enough to do that. I don't know if anyone can. PETER NORVIG: Great. That sounds like
an exciting time. Well, it looks like we're
at the top of the hour. Maybe we have time
for one more question. JEFF HAWKINS: Let's see here. I'm looking here. SUBUTAI AHMAD: "So if
the representations are distributed, where do the
votes in the voting system get tallied?" That's a pretty
specific question. So one kind of surprising thing that came out of our modeling is that if neurons are representing hypotheses using these sparse vectors that Jeff mentioned, then the question is where they actually accumulate the evidence. Well, it can't be in some homunculus, can't be some external system. It has to happen within the neurons themselves. And it turns out we
think it's actually happening via the dendrites. So it's dendrites that
are getting context and accumulating that
context and responding. So think about
each neuron getting a bunch of contextual signals. The votes for
different hypotheses are actually the
different context signals coming in from these
specific sets of neurons. And the more votes you
get, the more likely the neuron is going
to fire first. And by firing first
and really strongly, it's going to inhibit
the other ones. So it all has to
happen in this local, massively distributed way. And each neuron is doing
something really simple, but when you step back and look
at the overall function that's being computed,
it's actually doing some sort of voting and
accumulating hypotheses. But at the end of
the day, it all has to happen with
very simple rules within individual neurons. There's nowhere else.
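As a rough sketch of the local tallying Subutai describes, under the assumptions he just stated: the function name and vote counts below are hypothetical, an illustration rather than Numenta's code.

```python
import numpy as np

def tally_and_inhibit(context_votes):
    """context_votes[i] = number of contextual signals arriving on the
    dendrites of neuron i, each neuron standing for one hypothesis.
    The best-supported neuron fires first and, through local inhibition,
    silences its neighbors -- no central tallier is needed."""
    votes = np.asarray(context_votes)
    activity = np.zeros(len(votes), dtype=int)
    activity[int(votes.argmax())] = 1  # first (strongest) firer wins
    return activity

# Neuron 2's hypothesis has the most contextual support:
print(tally_and_inhibit([1, 3, 7, 2]))  # -> [0 0 1 0]
```

JEFF HAWKINS: You hinted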
at something here, Subutai, which is
probably worth mentioning. Much of our theory we
test by building models. And the models can have varying
levels of biological detail, but they do include the
dendritic processing and the kind of
connectivity we see. And so it's useful. We find out things by modeling. We find out whether it really works. We know it's probably going to work, but how does it work? How fast is it? How quickly does it settle? What are the capacities? It's very difficult sometimes to determine the capacity of these systems.
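For a feel of the "how quickly does it settle" question, here is a toy simulation; it is entirely my own construction and far simpler than the actual models. Ten columns each hold a belief over five hypotheses, mix in the long-range votes each step, and we count the steps until every column agrees.

```python
import numpy as np

rng = np.random.default_rng(0)
n_columns, n_hypotheses = 10, 5

# Each column starts with its own belief distribution over hypotheses.
beliefs = rng.random((n_columns, n_hypotheses))
beliefs /= beliefs.sum(axis=1, keepdims=True)

for step in range(1, 101):
    consensus = beliefs.mean(axis=0)           # the long-range "votes"
    beliefs = 0.5 * beliefs + 0.5 * consensus  # purely local updates
    winners = beliefs.argmax(axis=1)
    if (winners == winners[0]).all():          # every column agrees
        print(f"settled on hypothesis {winners[0]} after {step} steps")
        break
```

So we've modeled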
the voting system, and it worked really well. [LAUGHS] It was really
simple in the end. I mean, we had a couple
of tweaks to get there. But it's not a complex system. And I think one of the coolest things about this theory is that
that's what we perceive. That's the thing
that we can remember. We can only perceive the
output of all this voting. And so most of what's
going on in the brain, you're not consciously
aware of it. The voting is the
only thing that goes long distances
in the brain, right? Almost all the local
computation in the columns, how could you talk about it? It's not connected to
anything else, right? But the voting neurons
do go long distances, and you can remember
those states. So that's kind of a cool
thing about the whole thing. All right. PETER NORVIG: OK, all right. Thank you so much,
Jeff and Subutai. It's been great
talking with you. Any final thoughts before
we close the session? JEFF HAWKINS: I have a
couple of final thoughts I wouldn't mind sharing. PETER NORVIG: Sure. JEFF HAWKINS: I don't think
true AI is 100 years away. I think it's shorter than
that-- much shorter than that. We're talking one
or two decades. And one thing we didn't bring
up today, which a lot of people are worried about,
is the threat of AI, the existential threat of AI. I'm unabashed in saying that I'm
not worried about that so much. And I'm not worried about
the existential threat of AI, because we understand
how these systems work and that they're
not going to develop their own sort of goals. Those aren't the issues that are there. But some people
misunderstand that and think that we're not
worried about the risk of AI in general. And we are. We think AI is an extremely
powerful technology, and we have to be very,
very careful how it's used and how it would be abused. But I think people
who are worried about the existential risk, like
AI's going to wake up one day and take over the world
and not listen to us-- I don't think that's
going to happen at all. And I make that argument
in detail in my book. So if you want to know about
that, you can read about that. But I think we all-- Google, us, everybody has
to really take these issues seriously when you create
a powerful technology. It's going to
revolutionize the world. It's going to make life so
much better for so many people. It's going to advance our
knowledge an incredible amount. But we'll have to be
careful how it's used too. So I'll leave it at that. I think it's going to be
an exciting next 20 years. Just super exciting. PETER NORVIG: OK, well, we
had an exciting time today. And we're looking forward to
an exciting next 20 years. JEFF HAWKINS: Yeah, thank you. It was great. PETER NORVIG: Thank you so much. SUBUTAI AHMAD: Thanks, Peter. JEFF HAWKINS: Thanks, Peter. PETER NORVIG: Bye for now. [MUSIC PLAYING]