CHRISTOF KOCH: Thank you dear
Doctor-Father for that kind introduction. That's the term we use
in German, doctor-father. So I'll be talking about
consciousness here, there, but not everywhere,
unlike panpsychism. And for background
reference, Giulio and I wrote an arXiv manuscript that you can find. The technical stuff is in this PLOS Computational Biology paper. And then a more
general account is in my book,
Consciousness: Confessions of a Romantic Reductionist. So without further
ado, without making-- for many years, when I started
to talk about consciousness, I used to have a 10
slide preamble why as a natural scientist, as a
physicist, as a neurobiologist, one can reasonably talk
about consciousness. In fact, why as a
scientist, one has to talk about consciousness
if we want to take science seriously,
particularly, the claim of science that
science ultimately can explain all of
reality, because the one aspect of reality that I
have close acquaintance with, in fact, to adopt the
language of [INAUDIBLE], the only aspect of the world
that I have direct acquaintance with is my own consciousness. I know nothing about the world. The only thing I really know
about the world, the only thing I have direct knowledge of
are my sensations-- the fact that I see, I hear, I feel. I can be angry. I can be sad. Those are all different
conscious states. So the most famous deduction of
Western philosophical thought is René Descartes'. More than three centuries ago: Je pense, donc je suis, [INAUDIBLE] translated as cogito ergo sum. In modern language,
we'd say, I'm conscious. Therefore, I am. So the only way I know about
me, about the world, about you guys, about scientists
is because I have a movie in my head. And so if science is ultimately
to try to explain everything, including dark matter, and
dark energy, and viruses, and neurons, surely, it has to
explain the phenomenon that's at the center of each
one of our existences-- namely, consciousness. And I think in order to do that,
in order to successfully bridge what some philosophers
today call the hard problem, one has to start
out with experience. So rather than giving
you long definitions-- is there a way to
dim the lights here just for this movie please? So in lieu of giving
you lengthy definitions, that typically only
happens in a science once you're at the
textbook writing stage. I'm showing you one of
many, many illusions I could show you. Is it moving? So if you just keep eyes
steady, if you just fixate, for example, the central cross, or if you fixate the cross at the bottom, then you tell me, what do you see? Just tell me. [INAUDIBLE] All right. What else do you see? Yeah, that's true. But what else do you see? It should be pretty obvious. The yellow squares disappear. Thank you. The yellow squares disappear. Otherwise, you should
come see me afterwards, if you don't see that. So here, we have a
very simple phenomenon. The yellow squares disappear. If you really keep your eyes steady,
both of them can disappear. Once you move your eyes,
they make a reappearance. And in fact, it's
counter-intuitive, because the more salient
you make the yellow squares, the more likely they
are to disappear. But it's certainly a
counter-intuitive explanation. So this is simple. It came out in Nature
more than a decade ago. And the thing about this that
Francis Crick and I were always interested in is the difference
in the brain between when you see the yellow square and
when you don't-- you have a particular feeling
associated with it. It feels like yellow. It reminds you of lots
of other yellow things you've seen before. And when you don't, the
photons are still there. They still strike your retina. They still evoke retinal
ganglion cells firing. But you don't
perceive them anymore. And the claim of myself and many other people is that once we understand simple forms of consciousness,
like visual consciousness, we're well on the
way to understanding all of consciousness, since the higher elaborations of consciousness, like self-consciousness, et cetera, are just exactly that: elaborations upon something that's probably much more basic. So what is it that we
can say today for certain about consciousness? There are many things
that we can say already. People think they are likely--
I constantly-- every day I get manuscripts from
people who purport to explain consciousness. But consciousness
is now a little bit like nuclear physics. There's a large body
of data that you have that any one theory
has to explain. You can't just
start from scratch. So for instance, we
know that consciousness is associated with certain types
of complex, adaptive biological systems-- not all of them. So for instance, the enteric
nervous system, roughly 100 to 200 million neurons
down here in the gut. They don't seem to be
associated with consciousness. We don't really know why. But if you have feelings
down there, typically, they're mediated by activity in the insula, if you feel nauseated,
or something like that. It's caused by neurons-- we know
this, from brain stimulation, et cetera, up in cortex. You have an immune system. In fact, you have
several immune systems. You have an acquired immune
system, an innate one, they respond in a
very complex way. They have memory. Once you form antibodies, you
can think of it as a memory. Right now, I just came
yesterday from Seattle. I may well be exposed to some
bug here in the Cambridge area. My immune system is
busy fighting it off, but I have no conscious
access to that. I don't know whether my immune
system is active now or not. I don't know. Yet, it's doing some
very complicated tasks. So we need to ask why my
immune system seems to work in this unconscious mode. We don't know. We know consciousness
doesn't require behavior. Certainly, in fully grown people like us, we know this, because every
night, we'd go to sleep. And sometimes, we wake up
in the privacy of our sleep, and we have so-called
dreams which are another form
of conscious state. Yet there's a subtle paralysis
that's imposed by our brain, because otherwise,
we would act out our dreams, which wouldn't be
a good idea for our bed mates. And of course, that
occasionally happens. Also, there are [INAUDIBLE] in
other forms of clinical cases. When people are unable
to move, for example, the MPTP patients, the frozen addicts, who were unable to move yet were fully conscious. We know similarly
from the clinic, we know that consciousness doesn't require emotions, at least not strong emotions. You can, for example, talk
to veterans that come back from Iraq or Afghanistan. And let's say their legs
have been blown off, and they have
sustained brain damage due to an improvised
explosive device. Yet if you talk to them,
they're clearly conscious. They can describe their state. They can describe
how they're feeling. But there's this
totally flat affect, and they're not concerned
about the future. They're not concerned that their
life has changed radically. So certainly, the
strong emotions don't seem to be necessary
to support consciousness. We know, once again, from the clinic, and we know this from fMRI experiments and others, that consciousness
doesn't require language, nor does it even require
self-consciousness. Self-consciousness is
a very elaborate form of consciousness. It's particularly well expressed in academics, in particular certain
types of academics who like to write books and
like to introspect endlessly. It's probably counterproductive
to a certain extent. Yet, through most of
my daily life, when I try to introspect in the
evening, when you're engaged-- I bike every day to work. When you're going at high
speed through traffic, when you're climbing,
when you're making love, when you're watching an
engaging movie, when you're reading an engaging
book-- in all those cases, you're out there in the world. You're engaging with the world. When you're climbing or
biking at high speed, you're very much
engaged with the world. Yet, there's very
little self-awareness, self-consciousness. Simply, you don't have time
to reflect upon yourself when you're out there
engaged with the world. And there's really
no evidence that suggests that aphasic people or
children who can't speak yet are not conscious. We know from lots of patients,
a particularly striking patient, this guy who was a
conductor, and BBC made a movie of him, who had
a medial temporal lobe viral infection that knocked out
his entire medial temporal lobe, leaving very dense amnesia. You can track him over 10 years. He has no long-term memory. He still is in
love with his wife that he married two weeks before
he had the virus infection. 10 years later, he still thinks
of her as just newly married. It's very endearing. Yet, absolutely, he doesn't
remember anything consciously. But if you talk to
him, he can tell you all about his feelings
of love, how he feels, vis a vis his wife. That he rediscovers
every second minute. It's very striking. So clearly, consciousness doesn't require long-term memory. We know from split-brain
experiments done by Roger Sperry,
that consciousness can occur in either hemisphere, both the linguistically competent one as well as the other one, if they're disconnected by cutting the corpus callosum. And we know from 150 years
of clinical neurology that destruction of localized brain regions interferes with specific
content of consciousness. So you can lose specific
parts of cortex and the cortex primarily, and then you
lose specific content. You may be unable
to see in color. You may be unable to see motion. You may feel your wife has
been exchanged for an alien, because you lost a
feeling of familiarity. That's all due to specific
parts of the brain helping mediate specific content. So we know that there is this very local association. What about attention
and consciousness? This is an area I've been quite active in for a long time. For the past century and a half or two centuries, most people who have assumed or been assigned to study it-- psychologists-- said that what you attend to is identical to what you're conscious of. In fact, early on when I talked about
consciousness, people said, well, you shouldn't really be
talking about consciousness. You should really only strictly
be talking about attention. And the only reason you
talk about consciousness is because it gets
you into the press. Already at the time, with
Francis, we disagreed. In the meantime, we have a
lot of beautiful-- by we, I mean the community-- a
lot of beautiful, modern, probably 80
different papers that have appeared over the last six
to seven years, including one by Nancy Kanwisher
here, very nicely dissociating selective
visual attention from visual awareness. So then I'll show
you one or two. They're really different
neuronal processes with distinct functions. And yes, very often, under
laboratory conditions, and possibly even in
life, what you attend to is what you're conscious of. But there's lots of
evidence now to indicate, and I think that's not
controversial anymore, from visual
psychologists, that you can attend to things
that are completely invisible that you're
totally unconscious of. What remains more
controversial, the extent to which you can be conscious of
things without attending to it. Experimentally, that's more
difficult to manipulate. One big advance in this area
has been the development of this technique by a student
of mine who's a professor now. So here it's called CFS,
Continuous Flash Suppression. It works very powerfully. I'll just use a pencil. So in the left eye, let's see. You have a dominant eye. That's the right eye, for
the sake of illustration. [INAUDIBLE] in the left eye, I
put this constant, low contrast angry face-- clearly, a very
powerful biological stimulus. In the right eye, I put
these flashing Mondrians. They change, let's
say, at 10 Hertz. What you'll see for
maybe a minute or two, typically, you'll just see this. And at some point, you'll
get the face breakthrough. It'll be there for two,
three, four, five seconds, and then it disappears again. It's related to binocular rivalry. It's not the same, but it's related, and it lasts much longer. Binocular rivalry typically has suppression periods on the order of 5 to 10 seconds. This can be a minute or two. So you can now hide
all sorts of things. And people have done
lots and lots of variants. One of the more
interesting variants involve sex, as it always does. So here, let's say
on the left side, you'll put a picture
of a naked person, either a man or a woman. And on the right
side, you cut it up. You cut up the picture. And then you hide
it using the CFS, and then you leave this on
for [INAUDIBLE] milliseconds. So if you just ask people
naively, what do you see? They tell you, I just see
flashing colored squares. If you're a distrustful
psychologist, you ask them, well, tell me
if the nude is on the left or on the right. And people are at chance, 50%. And now, what you
do, you have an ISI, an inter-stimulus interval, and then you put
an objective test. So what you do now, you put
this little, faint grating here. And the grating's
either oriented a little bit to the left,
a little bit to the right. And your task is to say,
is it oriented to the left or to the right? And you can do a standard signal detection paradigm. You can get a d′. How good are you
at doing this task? And then you check,
how good are you doing this task when it's
on the same side where the invisible nude is
or on the opposite side of the invisible nude? And then what you find, so this
is done in straight people, in heterosexuals. So this is in 10 straight
men and 10 straight women. These are the individual subjects. This is the average. This is the d′, so it's a measure of how well you do this task. And what you can see here is
that straight men perform this task significantly
better-- 0.01, the 1% level-- if the target
is at the side of the invisible naked female. And they do worse
if it's on the side of the invisible naked man. So in other words,
the attention gets attracted to the
invisible female nude, and it gets repelled by
the invisible naked man. Biologically, this makes perfect sense. If there's a potential naked mate out there, your brain is primed
to detect that. Women, it's the opposite. Women, their
performance increases at the side of the invisible naked male. But they're not repelled by the invisible naked female. That's an interesting thing. It's all invisible. That's right. Then there's this paper by Nancy. So this came out
a couple of years ago where she uses the
same technique. She does pop-out, and she studies to what extent pop-out depends on conscious seeing or not. And essentially, it's a
very similar paradigm. And you do this performance
at the same place where the invisible pop
out was, or you do it in an opposite place. And you could also do
an attentional task to show that there is
this attention allocation, that if you don't allocate
it to this invisible pop out, you can't perform this task. So there are lots
of variants of this to show that you can
attend to invisible things. You don't need to see things in
order to preferentially attend to them. So what some people
are doing now, you really have to
do a 2 x 2 design. So whether you study it in a monkey, as David Leopold has done, in a human, or in a mouse, as we want to do at the [INAUDIBLE], you really need to do a 2 x 2 design. You need to
separately manipulate selective visual attention and
selective visual awareness. And so you can do that. One, awareness or
consciousness, you can do by manipulating
visibility using masking, or continuous
flash suppression, or any of the many tricks that
psychologists have developed over the last 100 years. And here, you use attentional manipulations to independently manipulate attention. And certain things you
can do without attention, without awareness. And some things depend both on
attention and consciousness. And there's some that you can
do in one or the other quadrant. But that seems to support the
idea that consciousness and attention are separate processes. They are at least
partially separated, if not fully separated. And they have
different functions, and they're subserved
by different biological mechanisms. And so we're back
to this dilemma that there are many things
that the brain can do. Here, I just list some of them. And of course, you can open the
pages of any psychology journal to see there's a very
large number of things that we can do without
being aware of them, without consciousness. Francis Crick and I called these zombie systems. And so if you think about
the neural correlates, you have to ask the question, where's the difference at the neuronal
level between all those tasks that you can do
without seeing them? So for example, we've done
experiments-- the Simon Thorpe experiment many of you are familiar with-- where you're shown an image, and you decide very rapidly, as quickly as you can: does it contain a face or not a face? Is it an animal or not an animal? And some of the
things you can do perfectly well if
they're masked, so you don't even see them. Yet, you still do these
things above chance. And so you have to ask
at the neuronal level, where's the difference between
those tasks that require consciousness and
those that don't? And so ultimately,
you can come up with what we call behavioral
correlates of consciousness. You can ask at the
behavioral level in people, in adults-- people typically
means here undergraduates, I should say, because that's the vast majority of subjects, of course. But you can also
think to what extent is it true in patients. To what extent is it true in preverbal children? To what extent is
it true in babies? And of course, to what extent
is this true in animals that you train like
mice or monkeys? So these are some
of the behaviors that in people, we associate
with consciousness. If I ask you, what did you do last night, and you tell me what you did, I assume you're conscious. In the clinic
emergency room, they have these things called
the Glasgow Coma Scale. They ask you certain things. Can you move your eyes? Do you know what year it is? Do you know who's the
president, and things like that to assess clinical impairment? In animals, particularly in
mice that I'm interested in, you can do any non-stereotyped
temporally delayed sensory-motor behavior, which is a behavioral
assay of consciousness. So that's on the behavior side. But now, the project
over the last 30 years has been to take the
mind body problem out of the domain of pure
behavior and psychology, and into the neuronal domain. And ultimately,
the aim is to look for what Francis Crick
and I call the Neuronal Correlates of Consciousness,
what's also abbreviated as NCC, which is what are the minimum
neuronal mechanisms that are necessary for any one conscious percept? So whether it's
the yellow squares, or me feeling upset, or having
a toothache, for those three different conscious sensations,
there will, in each case, be a minimum neuronal
mechanism that's necessary to give rise to that. And if I remove that
mechanism by inactivating it using channelrhodopsins, or TMS, or a lesion, then this sensation
would be gone. And then if I artificially
activate this neural correlate by using channelrhodopsins, or
TMS, or some other technique, the feeling should be there. There should be a 1
to 1 correspondence at the individual
trial-by-trial level. And for any such conscious percept, there will be a neural
correlate of consciousness. Sometimes, this is trivial. Anything that the mind
does, the mind, we believe, is physically supervenient
upon the brain. So there has to be
a brain correlate. The question is is
there something common about all those correlates? Let's say maybe they all involve
layer five pyramidal cells. Maybe they all are
involved with oscillation. Maybe they're all involved with
a high degree of synchrony. Maybe they all involve activity
from the anterior right insula. These are all different
possibilities people have offered. Maybe they all involve long-range projection neurons in dorsolateral prefrontal cortex, à la global workspace. These are all
different possibilities that people are studying. So if we think about vision, we can ask the question,
so for example, is the eye, when I
see you, is my eye the activity of [INAUDIBLE]? To what extent is that a neural correlate of consciousness? Well certainly, right now, if you had a way to record from my eyes,
it would certainly correlate with what I see. Yet, the eye itself
is too different. The properties of [INAUDIBLE], except a few, are too different from
my conscious perception. For instance, there's
a hole in my eye. It's called a blind spot. It doesn't show up in my vision. There are almost no cones. There are very few
color-opponent cells in the periphery. Yet, my entire visual
field looks colored. I constantly move my eyes
three to four times a second. Yet, my percept is very stable. So from things like that, we can
infer-- people inferred this already in the 19th century--
that the retina is not the place where consciousness
actually happens. It's not where the
neuronal mechanisms give rise in a causal
way to consciousness. That has to be in a
higher part of the brain. Furthermore, I
can close my eyes, and I can still imagine it. And I tend to dream
a lot, and I tend to remember my dreams a lot. And so I have very
vivid-- last night, I was visiting this
bloke in Kazakhstan. I had no idea how I
knew him, but here I was in Kazakhstan, a very
vivid memory of Kazakhstan. I can tell you all about it. And I had a visual memory. It was a picture in my head. But clearly, I was
sleeping in the dark, and my eyes were closed. So clearly, I can see things
without my eyes being active. Let's look at some other
parts of the brain. So of your 86 billion
neurons, 69 billion of them, more than 2/3 are in your
cerebellum, the granule cells. In fact, more than 2 out of 3 of your cells-- they're little cells, with four stubby little dendrites-- are in the cerebellum. Yet if you lose them,
or never had them, so this just came out,
this is a patient. She was discovered
recently in China. She's 24 years old. She is slightly mentally
retarded, just a little bit. And she moves in a clumsy way,
and she has a little bit of a speech impairment. But you can apparently
perfectly converse with her. It took her until
six years of age to learn how to walk and to run. And then when people scanned
her, they found this. It's complete, and they did DTI. It's a quite nice paper,
if you want to look it up. It's a complete absence
of the cerebellum. So this is one of
the few rare cases of agenesis of the cerebellum. No cerebellum whatsoever. She lacked 69 billion neurons. Yet, the doctors could talk with her. She's clearly fully
conversant, and she can clearly talk about internal states. So you don't apparently seem
to need your cerebellum. Now, there's no such
case for cortex, where you have no
cortex whatsoever, and you're still a
conscious person. So that seems to
tell us that cortex seems to be much more
essential for consciousness than the cerebellum. So we have to ask from a
neurological point of view, but more interesting also, from
a conceptual, theoretical point of view, what is it
about the cerebellum that it fails to give rise to conscious sensation? It has beautiful neurons-- Purkinje cells, the mother of all neurons. They have beautiful dendritic spikes, and complex spikes, and simple spikes, and everything that pyramidal cells have in glorious complexity. And there are lots of neurons,
and they have action potentials, and everything else you expect in a real brain. Yet you remove it,
and patients don't complain of loss of consciousness. When you get a stroke, a virus, or a gunshot there, people have a [INAUDIBLE]. They have motor [INAUDIBLE], but they never complain about anything like loss of consciousness.
loss of consciousness. So we have to ask why. So Francis Click and I
famously made this prediction in a Nature article
almost 20 years ago now, where we say the neural correlate of consciousness doesn't reside in the primary visual cortex-- that yes, much perceptual activity correlates with V1, but that's not where visual conscious sensation arises. Lots of evidence
for and against it. Let me just show
you the latest one. It comes from the
[INAUDIBLE] [? Tanaka ?] Lab. It's in human fMRI. Although, David Leopold has
done a similar experiment in monkeys. So it's one of these
2 x 2 dissociations that I mentioned before. So across here, this
is an artistic rendering of what the stimulus-- perfect, thank you. But just remind me to give it back to you. It's got my keys. OK. So I better not
walk off with them. So this is a rendition. At the center here
was always a grating, a low contrast grating that was
moving, I think, left or right. And I think that
at some point, you had to say whether it
was moving left or right. It was always there. But sometimes, you saw
it, and sometimes, you didn't see it, because
they did this manipulation. So sometimes, you had to
attend to the letters, or you had to attend
to the gratings. So here, you manipulate
the visibility. And here, you manipulate
whether or not you're attending here, or
whether you're attending there. So it's a 2 x 2 design. And then they look
at the fMRI signal in the primary visual
cortex that corresponds to the central area here. This is in two subjects. The paper was four subjects. So here, they have
the two traces, when you have high attention, with or without visibility of the central grating. And here is when you had low attention to the grating. So in other words,
you're attending to the periphery, whether you saw it or you didn't see it. The same thing here. So in other words, what
V1 seems to care about is whether or not you attend
to the central grating or you didn't attend. Whether or not you
saw it, here or here. These two curves
totally overlap. It didn't make any difference. Of course, this is fMRI. It's not single neurons. Although, David Leopold has
something [INAUDIBLE] neurons. But this gets at the technique that people use to try to untangle consciousness from attention-related processes. So in terms of cortex, we
have pretty good evidence that it doesn't seem to
involve primary visual, primary auditory, or primary somatosensory cortex. And it seems to primarily involve higher-order cortex-- parietal cortex, temporal cortex, and prefrontal cortex. How many people have heard
about this part of the brain? Do you all know it? You should. Remember where you
were when somebody first mentioned the claustrum. So the claustrum is, as implied by its name, a hidden structure. It's [INAUDIBLE], it's yay big. It's big, like this. It's roughly here,
under the insula. You have one here and one here. You can see it in all mammals. Mice definitely have it. In fact, we have
a few genes that are uniquely expressed there. And here, you can see these
pictures from Nikos Logothetis. You can see it here. It's a sheet-like structure. It's lying underneath
the insula and above the basal ganglia. And it's in white
[? matter. ?] It's between the external
and the extreme capsule. So it's embedded in
the white matter. It's a thin layer of cells. In us humans, it's maybe between 0.5 and 2 millimeters thick. And as I said, it's elongated like this. It's difficult to study. There are few
patients with lesions, because it gets supplied by two separate arteries. And if you want to lesion it chemically or pharmacologically, you have
to do multiple injections, because it's very elongated. Now, this is a recent paper. But it's known from the rodent
literature, as well as a cat, as well as a monkey, as
well as a human literature. So this is a fancy version
of multispectral DTI. The claustrum here
connects with all the different cortical
areas in this very nice, topographic manner. So you have a visual
part of the cortex. You have a somatosensory
part of the cortex. You have a motor, and you have
a prefrontal part of cortex. And there are few
interesting symmetries, like it gets input from
both ipsi and contra, but it only projects ipsi. There are a few interesting things like that. So like the
thalamus, it's highly interconnected with the cortex. But unlike the thalamus, it doesn't seem to be organized into 45 different, separate nuclei. It all seems to be a single [INAUDIBLE], a single tissue. So Francis Crick and
I, based on this, this was a structure
function argument that we made similar to
the much more famous one that he made with Jim Watson. So he first wrote about this, and then later on, we wrote this paper. So you have this unique
anatomical structure in the brain, and you
ask, what is its function? It seems to integrate
all information from the different
cortical regions. So we thought at the time it was
associated with consciousness. It binds all the information
from the different non-sensory motor or planning areas together
into one coherent percept-- a little bit like the conductor
of the cerebral symphony, of all these different
actors that play in the different visual areas. And they both project to and get input from this claustrum. So one obvious function
it could subserve would be to coordinate
all of them. This was in fact the very last
paper that Francis worked on. In fact, two days before
he went into the hospital on June 28, 2004, he
told me not to worry. He would continue to
work on the paper. Here is the actual
paper, the manuscript. And on the day he passed away,
2 hours before, Odile, his wife, told me how in the morning,
he still dictated corrections to this manuscript. And in the afternoon, he was
hallucinating a discussion with me about the claustrum. A scientist to the bitter end. So this paper appeared in 2005. And then nothing happened for 10 years. Well, no-- there were a bunch of pharmacological studies and molecular studies. But then this paper came out. It's a pretty cool paper. So I have to warn you,
it's a single patient. There are all sorts of
problems with single patients. But it's an interesting
anecdote that gives rise to
possible experiments that one can easily do,
for instance, in rodents. So here, you have a patient
who's an epileptic patient. And as part of the
epileptic work up, you put electrodes
into the brain to try to see which
areas are eloquent, and which areas
are not eloquent. This is a common
procedure that's done. So typically, what
happens, we know this now from 120 years of
direct stimulation using microstimulation
of human brains. So typically, what happens? Nothing happens. Typically, when you
stimulate at the [INAUDIBLE] on the human brain, human cortex, nothing will happen. Unless [INAUDIBLE], the patient will have a discrete sensation. They'll hear something. They'll see something. Sometimes, there will
be motor activity. Sometimes, there
will be vague body centered feelings that's very
difficult to express in words. In this case, in one
electrode, the patient-- the easiest way to describe it-- turned into a zombie. Every time they stimulated, the patient would stare ahead, would stop, for as long as the current was on, between 1 and 10 seconds. If the patient was starting to do something simple like this, the patient would stare ahead and continue to do this. If the patient was saying something very simple, like a word-- two, two, two-- the patient would continue to say that while staring ahead. The patient had no
recollection of these episodes. And the electrode was
just below the claustrum. So once again, it's
a single patient. So it's very difficult to
know what to make of it. But it's certainly challenging. So it's interesting enough that
one can use it-- so here it is, the location of the electrode, just underneath the claustrum-- to motivate further experimentation in animals. Obviously, you can't
repeat this in a patient. So there are lots
and lots of people who are looking for the neural correlates of consciousness in all sorts of different-- typically cortical-- structures. Of course, we have to ask,
what about consciousness in other mammals? So here, you see two
female mammals, my daughter and her guard dog, her beloved
German Shepherd, [? Tosca. ?] Now, we think-- and not only because I'm very fond of dogs-- biologists, at least, believe that certainly all mammals share most essential features, except language, with humans. And we say that because their
brains are very similar. If I give you a little cubic
millimeter of human cortex, of dog cortex, of mouse
cortex, only an expert armed with a microscope
can really tell them apart. The genes are roughly the same. The neurons are
roughly the same. The layering is
roughly the same. It's all basically the same. It's just more of it in us. We have roughly a thousand
times more than a mouse, and it's thicker. Of course, we don't
have the biggest brain-- that honor goes to elephants and other species. So for reasons of evolutionary and structural continuity, I think there's no reason to
deny that certainly, animals like dogs can be
happy and can be sad. And if you're around
a cat, there's no question that it can be lonely-- other states that we have. Maybe less complex, but animals certainly also share the gift of consciousness with us. Right now, experimentally,
it's very difficult to address a question
to what extent this is true of animals that
are very different from us. For instance, cephalopods
that are very complex, that have imitation learning and other very complicated abilities; or bees-- the [INAUDIBLE]-- that have very complicated behaviors, whose brain, I'd like to remind you,
the mushroom body, has a circuit density
10 times higher than the density of cortex. It's very difficult right
now to know to what extent a bee actually feels something when it's laden with nectar in the golden sun. We don't know. With mammals, it's
easier to do, because you can do tests that are very
similar to the tests we could do in humans. But ultimately, you're left with
a number of hard questions that, without a theory, you cannot really address. And that's what I want to talk about in the second part. So if it's true that the
claustrum is involved, let's just pause on that. We really want a deep, theoretical reason: why the claustrum? Why not the thalamus? Why not some other structure? Why not the cerebellum? I was just telling
you empirically, the cerebellum does not seem to be [INAUDIBLE] consciousness. Why? It's curious. It has lots of neurons and everything else. Why is it not involved in consciousness? What are the theoretical reasons? Why not afferent pathways? Why not cortex
during deep sleep? If you record from a single neuron in a sleeping animal, it's not that easy to say: is it sleeping or not? What's different? Why in deep sleep does it not
give rise to consciousness? If you think synchronization
is important, well, then your brain is highly
synchronized during a grand mal seizure. But of course, that's when
we lose consciousness. So why is that? Then there are
more hard questions that are very difficult to
answer without a theory. You have a patient like this. This is one of
[INAUDIBLE] patients. Everything is dysfunctional except this isolated island of cortical activity. And he says one thing. And he says it again,
and again, and again. He says, oh shit,
oh shit, oh shit. That's what he says
eight hours a day. It's like a tape
recorder that's stuck. Well, is this person experiencing something? A little bit, maybe. Right now, it's very
difficult to answer. What about
prelinguistic children? Either a newborn infant or
a preterm infant, like this-- what is it, a 28-week-old infant? Or what about a fetus? At what point does a fetus make a transition, if ever, from feeling nothing-- being alive, clearly, but not feeling anything, as we are in deep sleep? At what point does
a fetus feel something? Right now, we have heated
arguments involving abortion and other things based
on legal and political reasons. But we really don't know from
a scientific point of view how to answer this question. What about anesthesia? For example, ketamine
anesthesia, when your brain is highly active? And of course, there are a lot of cases of awareness under anesthesia. What about sleepwalking, when you have an individual who, with open eyes, can do complicated things, including driving and all
sorts of other things? To what extent is this
person conscious or not? As I mentioned,
what about animals that are very
different from us, that don't have a cerebral
cortex and a thalamus, but have a very
different structure, but are capable of highly
sophisticated behavior? Like an octopus, or a bee,
or a fly, or C. elegans? And then lastly, what
about things like this? We live now in a world where, more and more-- particularly here at MIT and in this center-- we're confronted with creatures such that, if humans were to exhibit those abilities, nobody would doubt that they were conscious. If you have a severely
brain-injured patient and she can play chess, or play Jeopardy, or drive a car-- all things computers can do-- there would be no
question in anybody's mind that this person
is fully conscious. So on the basis of what reason do we deny or grant consciousness to these guys-- a more advanced version of Siri? Remember the movie,
Her, Samantha? How do we know? We need a theory that tells us
whether Samantha is actually conscious or whether
she's not conscious. Right now, we don't
have such a theory. So really, beyond studying the behavioral correlates and the neuronal correlates of consciousness, which is what I do and what lots of other labs now do, we need a theory that tells us, in principle, when a system is conscious. Is this one conscious? And I want a rigorous
explanation for why it is or why it's not conscious. What about these? What about these? So we need a theory
that takes us from this, from
conscious experience, to mechanisms and to the brain. This, incidentally, also bypasses the hard problem that Leibniz first talks about in his famous example of walking inside the mill; then William James, of course; and then, more recently, David Chalmers. It is probably true that to take a brain and wring consciousness out of it is truly a hard problem. Although, one has to
be extremely skeptical when [? philosophers say ?] something is hard and science can't do it. Historically, they don't have a very good track record of predicting such things. But I think it's
much easier if we start where I think any
investigation of the world has to start,
namely, with the most central fact of our
existence, my own feelings, my own phenomenology. So now, I'll come to the
theory of Giulio Tononi at the University of Wisconsin in Madison, who's a psychiatrist and a neuroscientist, a very good friend, and a close colleague. Disclosure here: we have published many papers together. He has this integrated information theory that he's worked on with many people. But it's really his theory. And there are various
versions of it. And so for the latest, I urge
you, if you're interested, to go to this PLOS Computational Biology paper. So here, just like in
modern mathematics, you start out with an axiomatic approach. The idea is that you formulate five axioms based on your phenomenological experience, the experience of how the world appears to you. These axioms should do what any other axiomatic system does. They should be independent; they should not be derivable from each other. And together, they should describe everything that needs to be described about the phenomenon. And then from these
axioms, you go towards a calculus that
implements these axioms, the meat of the theory. And then you test this
integrated information theory on various [INAUDIBLE] tests
that you can do in the clinic and that you can do in
animals and in people. So there are five axioms here. The axioms themselves, I think, are relatively straightforward to understand. The first axiom is the axiom of existence. In order for anything to exist,
it has to make a difference. This is also known as
Alexander's Dictum. If nothing makes a
difference to you and you don't make any
difference to anything, then you may as well not exist. Well, I remind you, in physics
this principle is used. For example, in the
discussion of ether. The physicists certainly know
the ether was this notion that was used around 1900. It fills the space
as infinite, rigid. Yet, it's also
infinite flexible. It had to explain a
number of discerning facts about the cosmos at large. Then Einstein's didn't
need the ether anymore to explain anything. Now, the ether
could still exist. But it has no causal connection. It doesn't make any difference. Nothing makes a
difference to it. It doesn't make a difference,
so therefore, physicists don't talk about it anymore. So I think it's a deep principle
that we use, whether we know it or not. And the axiom of existence says experience exists intrinsically. This is very important: it is not observer-dependent. My consciousness exists totally independently of anything else in the world. It doesn't depend on anyone looking down at my brain. It doesn't depend on anything else looking down at me. It just exists intrinsically. [INAUDIBLE]: experience
is structured. It has many aspects. So this is a famous
drawing from Ernst Mach-- if you look at what it actually is, it's him. He tried to describe what he sees looking out of his one eye. So he can see his mustache, the bridge of his nose, and he looks out at the world. And the world has all sorts of elements. It has objects in it. It has left and right, up and down. It's incredibly rich. And all these percepts sit next to each other. So the books are to the right of the window, which is above the floor, et cetera. So each conscious
perception is very rich, which brings me to this
next axiom, the third axiom: each experience is the way it is because it's differentiated from a gazillion other possible perceptions you could have. If you go back to the scholastics, they actually thought a lot about this. Some of them, I think, are really better than the analytic philosophers. They call this the [INAUDIBLE], the [INAUDIBLE]-- for example, people like [INAUDIBLE]. That experience is differentiated: it is one out of many. Imagine everything I see
right now out of my left eye. I see this one unique thing that I'll never see again in the history of the universe, compared to everything else I could see-- every frame of every movie that's ever been made or will ever be made, plus all smells, and all tastes, and all emotional experiences. So it's incredibly rich, both in what you see and in what you don't see. Even if you wake
up disoriented-- you're jet lagged, you
traveled nine hours, you wake up at 3:00 in the
morning in your hotel room. All you know is, it's black. But that black is not just a simple single bit, because that black is different from anything else that you might see and that you have ever seen. So even that black is incredibly [INAUDIBLE], differentiated from all other possible experiences. Next-- philosophers have
much remarked upon this. They call this
holistic or integrated. Each experience is
highly integrated. It is one. So for example, you don't see the left half of your visual field separately from the right half. Of course, you can attend to them separately, but then you're having different experiences. Whatever I apprehend, I apprehend as one. It's a unitary, integrated, holistic percept. It's just like when I look at, for instance, the word honeymoon. I don't see honey and moon and then [INAUDIBLE] the honeymoon. I see it as honeymoon, what people do once they get married. And lastly,
experience is unique. At any one point in time,
I only see one thing. Unlike in quantum mechanics, I'm not a superposition of different conscious percepts. The Christof-- my narrative self, the one that looks out at the world and sees all of you, that sees this as a movie-- there's only one such experience. Not different experiences in my left brain and my right brain, unless I'm in a dissociative state, as sometimes happens, or in split-brain patients, something else. But a normal brain is integrated. I have one experience. It's at one level of granularity, whatever that may be-- neurons, or sub-neurons, or super-neurons, or columns. And it's at one time scale. It doesn't flow at infinitely many timescales with me a superposition of them all. I'm only one. So now, it gets a
little bit tricky, because now we have to move from the axioms to the postulates. It's nice having these axioms, and most people find a lot in these axioms that resonates. Although some people say, well, maybe we need to postulate an additional axiom. Maybe yes, maybe no. But the bigger challenge is to move to mechanisms, because we are scientists, not just philosophers. So it's not enough just to speculate. You want to speculate in a realm where you can ultimately make predictions about what is and what is not conscious, where you can make predictions about neural correlates, and about whether machines ever will or won't be conscious. So existence, the
first axiom, says-- and this is in some sense the most difficult one to get across-- that for experience, you have a mechanism, like the brain or a set of transistor gates, and it's in a particular state. So some neurons are firing; some neurons are not firing. We do everything with simple gates, because it's actually very difficult to compute things with IIT-- it very quickly explodes in terms of the number of possibilities you have to compute. So we have this simple network. You have five neurons. Three of them are an XOR neuron, an OR, and an AND. And they're either on or off. Here, they're off. And yellow means they're on. So here, you have these
gates, if you want. And some of them are on, and some are off. Now, what's really important is that experience is generated not only by a set of mechanisms in a particular state-- like a brain where some neurons fire and some neurons don't fire-- but also by its cause/effect repertoire. And I'll come back to what that means in a couple of slides, in terms of causation, because this state has to come from somewhere, and
it's going to go somewhere. Remember, I said you can only
exist if you make a difference. In this case, because consciousness is not dependent on an external observer, you only exist if you make a difference to yourself. In other words, there has to be some cause within your system that caused you, and you have to be able to cause things within your system. So in this case, you're in this state: this is off, this is off, and this is on. Here, there are three neurons, corresponding to a, b, and c. You can say, well, given that I
find myself in this state, these are the various states it could have come from [INAUDIBLE] in the past, assuming this is a discrete system. And these are the different states I could go to, given this state. So it has a past. It has a cause repertoire: it was caused by some of these states, with these probabilities-- I could have come from these different states. And it has different possible ways to go into the future, depending on my input here. And so, for example, if I were to do an experiment with halorhodopsin or channelrhodopsin and I eliminate some of these, this is going to change. The consciousness of this observer is going to change in a predictable way, even though I may not change the state. I'll come back to
that in a second. So experience is generated by any mechanism that has a cause/effect repertoire in a particular state. Next: composition.
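The cause and effect repertoires described here can be sketched in a few lines of Python. This is an editor's illustration, not the network from the slide: the wiring-- A = OR(B, C), B = AND(A, C), C = XOR(A, B)-- is a hypothetical stand-in, since the talk's actual gate diagram isn't specified in the transcript. Given the current state, we enumerate the past states it could have come from and the next state it goes to.

```python
from itertools import product

# Toy 3-node deterministic network (hypothetical wiring, for illustration):
# A = OR(B, C), B = AND(A, C), C = XOR(A, B), all updating in lockstep.
def step(state):
    a, b, c = state
    return (b | c, a & c, a ^ b)

# The 8 possible past/future states of three binary units.
states = list(product([0, 1], repeat=3))

def cause_repertoire(current):
    """Past states that could have produced `current` (uniform prior)."""
    return [s for s in states if step(s) == current]

def effect_state(current):
    """The unique next state this deterministic system transitions to."""
    return step(current)

current = (1, 0, 0)
print("possible causes:", cause_repertoire(current))
print("effect:", effect_state(current))
```

In a noisy or probabilistic system, the cause repertoire would be a probability distribution over past states rather than a plain list, and blocking a unit's output (as in the halorhodopsin experiment later in the talk) would change these repertoires without changing the current firing state.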
Experience is structured; there are many aspects. So in this case, there are many sub-components. I can look at the entire system as a whole, or I can look at each of the sub-components. I can look at these two, this pair, this n-tuple. I can look at that neuron, and that neuron, and this neuron. And so in principle, I have to look at the power set of all these different mechanisms. Experience is differentiated: it is
one out of many. It is what it is because it differs in particular ways from all the other experiences. So once again, you have these mechanisms, and you have all these different sub-components. And each one of them has a particular cause repertoire and a particular effect repertoire. So ultimately, this structure lives in a so-called [INAUDIBLE] space that has as many dimensions as the states you have in your past and in your future. So here, you have three neurons. In principle, you have eight states in the past and eight states in the future, so in principle, this structure lives in a 16-dimensional space. Next, it has to be integrated: experience is unified. So here, you compute a measure
of how integrated this system is, essentially by computing the difference between different forms of, if you want, entropy, using something like [INAUDIBLE]. Or here, they actually use a metric, a distance measure called the EMD, the earth mover's distance. Essentially, you look at all these different states from all the different elementary mechanisms, and then you look at the extent to which they could exist by themselves. So if you have a system like a split brain that consists of two independent brains, then the joint distribution is just the product of the individual distributions. So in that sense,
the theory says this system doesn't have its own autonomous existence. You only exist if you're irreducible. If you are reducible to a simpler system, then you don't exist; only the simpler system exists. So here, you compute to what extent the system is irreducible, essentially by looking at all possible cuts-- all possible bipartitions, all possible tripartitions. So this is where the theory
gets practically very difficult to compute. And you look at the cut that minimizes the information exchange between the different partitions. So if you had a system like a brain that was split, that was cut by the surgeon, you essentially have two independent systems. They exist, but there isn't anything it is like to be that brain as a whole, because it doesn't exist at its own level; only the sub-components exist. And lastly, exclusion: in any given system,
you only pick one system, the one that maximizes this irreducibility, this number called phi. So this is what phi is about. Phi, in some sense, is a measure of the extent to which a system is irreducible. You look at all possible subsystems at all possible levels of granularity-- at fine grains and at coarse grains, at different spatial granularities and different temporal granularities. And it's like a maximum principle in physics: you pick the one that maximizes it. And that is the system that has consciousness associated with it. And then you come to
the central identity of the theory that
essentially says the following: you have a mechanism in a particular state, with a particular cause-effect repertoire. The central identity posits that the structure of the system in this high-dimensional [INAUDIBLE] space-- this space spanned by all the different states it could take in the past and could take in the future, depending on where you are right now-- that structure is what experience is. So in a sense, it's
the Pythagorean program run to its completion. Because ultimately, it
says, what experience is is this mathematical
structure, this [INAUDIBLE] in this very high
dimensional space. And there are two things
associated with this. The structure itself gives you the quality of the experience. So whether you're seeing red,
or it's the agony of a cancer patient, or it's the
dream of a lotus eater-- all are what they are. All are experiences in
these [? trillion ?] dimensional spaces. That's what they are. That's what the quality of experience is-- the voice inside your head, the picture inside your skull. They are what they are because of this mathematical structure. And the quantity of the structure is measured by this number called phi. And so there are three sets. So if you look at a system, [? there's ?] a main [INAUDIBLE], the component that's most irreducible, that has the highest phi. And that, according to the theory, is the neural correlate of consciousness, to switch over to the language of neurology. Now, I know there's
no way in hell that you can convey the
complexity of this theory in 10 minutes. So right now, let's
just go with that. You can ask me questions afterwards; all the mathematics is spelled out in the papers. The theory makes a number of predictions, some of which are compatible with the very ancient philosophical belief called panpsychism. Panpsychism I first encountered as an undergrad in Plato, of course. It's been prevalent; it's been a theme in Western philosophy, including Schopenhauer. [INAUDIBLE] is probably the contemporary philosopher most closely associated with it. And then, of course, there's the Dalai Lama; it's a very powerful part of Buddhism. But there are also certain things where the theory makes strikingly different predictions, particularly when it comes to computers. So the theory says that
consciousness can be graded. So you have a system like this. It has 3 x 5 neurons, 15
neurons-- quote, neurons; these are simple switches-- interconnected in this way. And you can now compute phi. The theory, whether you think it's relevant or not, whether it explains consciousness or not, is a well-defined theory that takes any mechanism like this in a particular state and assigns a number to it. In this case, it's a dimensionless number, in this [INAUDIBLE]. It's not a bit. It's 10.56. It tells you how irreducible the system is-- in some sense, how much does this system exist? The larger the number, the more irreducible the system is, and in some real ontological sense, the more it exists. Now, you add noise
to these connections. All you do, you leave all
the connections there. But you add more and
more noise to it. And then you can see the
overall phi goes down. There's less integration now,
because you've injected entropy into the system. You can also compute the
phi of these little guys, because once again, in principle, you compute phi over all possible configurations of elements and you pick the one that's maximal. So here, these little guys still have very low phi, lower than the whole. But then you've added so much noise that suddenly the system disintegrates. And now it disintegrates into five separate conscious systems, each of which is separately conscious at a very low level, because of all the noise. Sorry, these numbers are switched around; it should be the other way-- the little guys now have more phi than the big guys. So it says that consciousness is
[? graded. ?] This, of course, reflects our own
experience over our lifetime and day to day: your consciousness waxes and wanes. When you're a baby, it's different than when you're a fully grown adult. Even as a teenager, you don't have a lot of insight into your own behavior-- you do certain things, and you don't know why. And of course, if you become old and demented, your consciousness goes down. And even during the day, when you haven't slept in a day or two, or you're totally hung over, your consciousness can wax and wane. So this theory very much reflects that consciousness is graded. It's not an all-or-none thing. Now, a very interesting prediction: this theory predicts that any
feed-forward system has phi of zero. The reason is essentially that, as I said-- the first axiom, of existence-- the system has to make a difference to itself. In other words, it has to feed back to itself, and it has to get input from itself. A strictly feed-forward system does not do this. Now, of course, interestingly,
if you look at machine learning algorithms-- if you look at standard convolutional nets-- they're all feed-forward. So what the theory says is, yes, you can have a complicated neural network that does complicated things, like detecting whether there's a [INAUDIBLE] present or a face present. It can do all sorts of things-- anything that you can do with standard machine learning. Yet this system will not be conscious, because it doesn't have the right cause-effect structure. It doesn't have the right causal structure. So this also means
there isn't any Turing test for consciousness. There can be, of course, a Turing test for intelligence, but a Turing test for consciousness doesn't work. It's not an input-output manipulation; it's not that you manipulate the input and look at the output, because you can clearly do that for a strictly feed-forward network. And whether you believe the theory relates to consciousness is a different matter. But the theory quite clearly says the phi associated with any feed-forward network will be 0. Which also means you can
have two separate networks. You can have a complicated, heavily feedback network. And of course, there's an equivalence: for a finite number of time steps, you can unfold any complicated feedback network and turn it into a much more complicated, purely feed-forward network. Both systems will do exactly the same thing-- they're isomorphic in terms of input-output behavior-- yet the one, because of its causal structure, the theory says, will be conscious. The other one, not.
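That unfolding can be sketched in a few lines. This is an editor's illustration with a made-up two-unit update rule, not anything from the talk: a recurrent network and its time-unrolled, strictly feed-forward copy produce identical input-output behavior, even though only the first has feedback loops in its causal structure.

```python
# Hypothetical two-unit recurrent network: at each time step, each unit
# reads the other unit's previous value plus the external input u.
def recurrent_run(x0, y0, inputs):
    x, y = x0, y0
    outputs = []
    for u in inputs:
        x, y = y ^ u, x & u   # feedback: the state re-enters the computation
        outputs.append((x, y))
    return outputs

# The same computation unrolled in time: one fresh "layer" of units per
# step, wired strictly forward, with no loops anywhere in the graph.
def unrolled_run(x0, y0, inputs):
    layers = [(x0, y0)]
    for u in inputs:
        xp, yp = layers[-1]            # reads the previous layer, not a loop
        layers.append((yp ^ u, xp & u))
    return layers[1:]

ins = [1, 0, 1, 1]
assert recurrent_run(0, 1, ins) == unrolled_run(0, 1, ins)
print(recurrent_run(0, 1, ins))
```

Same input-output behavior over any finite number of steps; on the theory, only the version whose physical substrate actually feeds back on itself has nonzero phi.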
So let's look at some experimental predictions. The theory says that the neural correlates of consciousness are identical to the main complex, the one that maximizes phi, in a particular state, with its own particular cause-effect repertoire. I emphasize that because of some experiments that we can begin to do now. So first of all,
there was this paper from the [? Tononi ?]
Lab 10 years ago, called "Zap and Zip." So what they did,
they take volunteers-- healthy undergraduates-- and sleep-deprive them for a day. Then they can
sleep in the lab, equipped with 128 EEG channels and with a TMS device. You use the TMS device at subthreshold doses so the person doesn't wake up. And then, essentially, you tap the brain. And then you look, in terms of the EEG, at the reverberation-- you do [INAUDIBLE] or something, an EEG source-localization method. And then you just
compute the complexity of the result in brain wave. Think of it a little bit
like you have a bell, like the Liberty Bell, and
you ring it with a hammer, and then you can
hear it resonate. And if it's really good, it
resonates for a long time. That's a metaphor. So here, this is in the awake state. The time scale here is-- I don't know-- 300 milliseconds or something. You give the TMS pulse here, and then you can see this reverberation. You do it here over the precuneus, and then you can see it travels contralaterally; it reverberates around cortex. And here is the underlying source localization. So by some measure,
it's well integrated. It's what cortex does. So what they're trying
to do is they're trying to derive a simple
empirical measure that you can use in the clinic to determine whether a patient in front of you-- who may be severely impaired and unable to speak-- is actually conscious or not. So in this paper, they
did this for wake, and then they did
this for deep sleep. So if you have subjects
that are now in deep sleep, you get a response locally
that's in fact even bigger, depending on up/down
states, et cetera. But the complexity is much less; it very quickly stops. It doesn't really travel nearly as far. The brain is much
more disconnected. What they can now
do-- they have done this in a large clinical study of a hundred or so subjects and patients in Italy, and they're now trying it at a bunch of different clinics-- is work at the single-subject level, not at the group level, because to be a clinically useful device, you need to have it at the level of individual people. You do this in normal subjects while they sleep. You do it in volunteer anesthesiologists who become anesthetized using three different types of anesthesia, to test what this measure does in anesthesia. You do it in persistent vegetative state. You do it in minimally conscious state, and you do it in locked-in syndrome. We know from the clinic that a lot [INAUDIBLE] of these people are conscious: in minimally conscious state, they can sometimes be conscious; in persistent vegetative state, they don't appear to be conscious. So what you can do is
essentially, they zap the cortex, they get this underlying cortical activity pattern, and then they compress it using [INAUDIBLE]-- so they call it the zap-and-zip method-- to get a single number, the PCI, the Perturbational Complexity Index. It's a scalar number.
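The "zip" step can be sketched crudely in Python. This is an editor's illustration only: the real PCI uses Lempel-Ziv complexity with entropy normalization on source-localized EEG, whereas here plain zlib compression of a binarized signal stands in as a proxy. A differentiated, awake-like response compresses poorly and scores high; a stereotyped, sleep-like response compresses well and scores low.

```python
import random
import zlib

def complexity(bits):
    """Compressed size per sample: a crude stand-in for the PCI's
    Lempel-Ziv complexity measure (illustration, not the real algorithm)."""
    data = bytes(bits)
    return len(zlib.compress(data, 9)) / len(data)

random.seseed = None  # (no-op placeholder removed below)
random.seed(0)
awake_like = [random.getrandbits(1) for _ in range(4096)]  # differentiated
sleep_like = [0] * 4096                                    # stereotyped

print(complexity(awake_like), complexity(sleep_like))
assert complexity(awake_like) > complexity(sleep_like)
```

The same idea-- perturb, record, compress, report one number-- is what makes the measure usable at the bedside for a single patient.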
If the PCI is high, the patients tend to be conscious-- these are the conscious subjects. And if it's low, we know from other clinical measures that the patient is unconscious. So it very nicely segregates. It didn't work in two patients,
two severely impaired patients. In both of them, it predicted they would be conscious. And indeed, two days later, quote, they woke up, by other clinical criteria. That is, they shifted from a vegetative state, where people are non-responsive to spoken commands and other things, into a minimally conscious state. So that's pretty cool. It's really very
exciting, because it could be for the
first time that you have a clinically useful device that tells you: is this patient in front of me actually conscious or not? And in the US, there are roughly 10,000 people in states like the persistent vegetative state. Some of you will
remember Terri Schiavo; she was an example of that. So you can try to explain some of these findings. Most famously, why is the cerebellum not involved in consciousness? Well, the main
hypothesis is this: if you look at very simple computer modeling of networks and connectivity with respect to phi, the cerebellum is really organized as a bunch of two-dimensional sheets. You have the [INAUDIBLE] cells, and you have the parallel fibers. So it doesn't have three-dimensional network connectivity in the sense that [INAUDIBLE] cortex has, with its small-world connectivity. If you have a very regular, almost crystal-like array of these two-dimensional slabs, you get a very low phi, compared to a cortex where you have heterogeneous elements of different cell types interconnected with small-world connectivity, where you much more easily get very high values of phi. To come to an end here, let's
look at some cool predictions. So let's do this. Now, we're thinking about channelrhodopsin and [? halorhodopsin ?] here in humans, but you can do this in mice and in monkeys also. First of all, you're looking at a grey apple. And you have, let's say, activity in your favorite color area, [INAUDIBLE] here before. And it goes up to LIP, because it combines with the spatial information. And you're conscious of a grey apple, and you say, grey apple. Now, you do the following experiment. You inject these neurons here with halo, so the halo is expressed throughout the neurons, particularly in the synaptic terminals. Now you shine green light on them, and you turn off the synaptic terminals. So nothing changes. This is counter-intuitive,
because nothing changes in activity here. In both cases, neurons in
let's say, your color area are not firing,
both here and here. So if I just look at the
neurons firing, I see, OK. Both cases, the
neurons are not firing. So in those cases, you'll
say, the apple is grey. But here, they're not firing. They could have
fired, but they didn't fire, because there wasn't
any color [INAUDIBLE] in it. Here, they're not firing,
because they've been blocked. They've been prevented by my
experimental manipulation. So in this situation, here,
I've reduced the cause effect repertoire. I've dramatically eliminated
the effect of these neurons. Although, they still fire,
cannot have any effect more downstream. And the theory says, it quite
clearly makes a prediction that although the firing is
the same, the [INAUDIBLE], the consciousness
will be different. Here, you'll probably
get something closer to anosognosia. So you get what people call
anosognosia with achromatopsia. In other words, the
patient will say, well, I don't see any color. It's not that I see grey,
because grey is a color, of course. But I see nothing. Or he'll say, well, I
know apples are red. So therefore,
they're probably red. And there are
patients like this. So what's
counter-intuitive for you, and I'll show you a second case. What's counter-intuitive here
to most physiologists, that in both cases, neurons
are not firing. But yet, you get
a different state. Now, this doesn't
violate physicalism. Here, the mental is totally
[INAUDIBLE] to the physical. But the difference is here, if
you want your synaptic output weights [INAUDIBLE]
set to zero by my experimental manipulation. And what this also shows
you is that it's not about sending
spikes to somewhere. Conscience isn't a message
that's being passed along here with a spike. It's a difference the
system makes to itself. The ability for the system to
make a difference to itself has been dramatically reduced by
your experimental manipulation. Here's a second experiment. [INAUDIBLE] talks about
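The logic of this prediction can be caricatured in a few lines: the unit's firing statistics are identical in the two conditions, but zeroing its output synapse wipes out the difference it makes to a downstream unit. Here that difference is measured as mutual information, a stand-in for the cause-effect repertoire; the two-neuron model and its numbers are invented for illustration.

```python
from math import log2

def effect_information(output_weight):
    """Mutual information (bits) between a unit's state X and its downstream
    target's next state Y, with X uniform over {0, 1}. The target simply
    copies X when the synapse works; a blocked synapse (weight 0) pins Y to 0."""
    joint = {}
    for x in (0, 1):
        y = x if output_weight else 0   # downstream next state
        joint[(x, y)] = joint.get((x, y), 0) + 0.5
    px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
    py = {}
    for (_, y), p in joint.items():
        py[y] = py.get(y, 0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Same "firing" (X has identical statistics in both conditions), but the
# difference the unit makes downstream collapses when its output is blocked:
print(effect_information(1))  # intact synapse: 1 bit of effect
print(effect_information(0))  # halorhodopsin-style block: 0 bits
```

The point mirrors the lecture: nothing about the unit's own activity distinguishes the two conditions; only its power to make a difference downstream does.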
it, the perfect experiment. It's actually not. It's quite imperfect. So here, you have
the opposite case. You have a red apple. And now, your neurons
here are firing. And their firing symbolizes red. And you go over here. And you're conscious of red,
and you see a red apple. Now, you do the
same manipulation. You introduce a halo
into these neurons here. So what halo does,
of course, when you shine the right light on it,
it activates a chloride shunt-- well, a pump, and
effectively it shunts out. So those neurons here cannot
influence a post-synoptic target anymore. Those neurons are just
fine, just as much as here. But the theory
says in this case, once again, you will
not see anything. Again, you'll get
the same symptoms of seeing no color, [INAUDIBLE]. Well, here you see
the color, the red. So it's a principle
in prediction you can test either by this
or using other ways of TMS. It's a little bit like the story
of Sherlock Holmes in "Silver Blaze." Remember when Sherlock Holmes,
when the inspector who's hugely clueless, Lestrade asked,
well, what's a critical clue? And Sherlock Holmes
says, the dog. And then Lestrade
says, why the dog? Sherlock Holmes says, well,
the dog didn't bark at night. That's a critical clue. And of course, what that
revealed to Sherlock Holmes was the dog didn't bark at
night, because the intruder was known to the dog. The dog could have
barked, but didn't bark, which is
different than if you had a dog that was
poisoned, for instance, because then he
couldn't have barked. Then the meaning
of the silent dog would have been quite different. So the important point here is
to say that conscience is not in the sending of messages. It's the difference
a system makes by generating spikes to itself. Let me come to an end here. So a question, particularly here
at the Center for Intelligence, is: what difference
does consciousness make? Could it have been
evolutionarily selected? So under this view,
under this reading, consciousness is a property
intrinsic to organized matter. It's a property like
charge and spin. We find ourselves in a universe
that has space, and time, and mass, and energy. But we also find
ourselves in a universe where organized systems that
have phi different from zero have experience. It's just the way it is. We can ask, could we
imagine another universe? I could. I can also imagine physicists
occupied with the thought: can you imagine a universe in
which quantum mechanics doesn't hold? Maybe yes, maybe no. But apparently, no
physicist goes around and says, well, what's
the function of charge or [INAUDIBLE]? It just is. We live in a universe
where certain things have a positive or negative charge. But now, we find
ourselves in a universe where we have highly
conscious creatures. So the question is, how
were we selected for it? And the answer is that
integrated information is evolutionarily advantageous:
obviously, rather than having separate streams
of information, let's say auditory, and visual, and memory, it's
much better if you can integrate that information,
because then you're much more easily able to
find coincidences and make informed
judgments on the whole. You can show that
in simple evolution. So I'm not going to go
into great depth here. We have simple creatures
that have a genome, and we do artificial evolution. These are like
[INAUDIBLE] vehicles, except they have a genome. And early on, they
don't know anything. They have three visual sensors,
a [INAUDIBLE] sensor, one bit of memory-- oh, sorry, no memory here-- and then motors. They can move left, move
right, or move straight ahead. And you put them down
here, and you send them through these mazes. And you select them
over 60,000 generations. You select the top 10%,
in terms of how well they've gotten through the labyrinth. You take the best ones,
you mutate them using various point mutations,
you send them in again, and you do this over, and over, and over again. And then what you can see, if
you do this for long enough, of course, is that the animats
adapt to their environment under our particular
selection function. You can see this nice
[INAUDIBLE] relationship between
fitness and the minimum phi. So this is-- see, there's a lot of
redundancy here. This axis shows you
how adapted they are. 100% is an agent
that behaves optimally at every single point in the labyrinth
and makes the optimal decision. And so you can see, there's
this nice relationship between how adapted
the animats are and the measure of integration,
the minimal phi-- because there's a large
degree of redundancy. So this is a simple
toy experiment, and you can make
more of them to show why it pays for an organism
to be highly integrated.
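The selection loop he describes, score the population, keep the top 10%, mutate, and repeat, has the shape of a plain genetic algorithm. The sketch below substitutes an invented stand-in fitness (counting 1-bits in the genome) for the actual maze task; the population size, genome length, mutation rate, and generation count are arbitrary choices, and no phi is computed.

```python
import random

rng = random.Random(42)
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 50, 150

def fitness(genome):
    """Stand-in for 'how well did it get through the labyrinth':
    here simply the number of 1-bits in the genome."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Point mutations: flip each bit independently with small probability."""
    return [b ^ 1 if rng.random() < rate else b for b in genome]

# Random initial population: early on, the creatures "don't know anything".
population = [[rng.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
initial_best = max(map(fitness, population))

for _ in range(GENERATIONS):
    # Select the top 10% of the population by fitness...
    population.sort(key=fitness, reverse=True)
    elite = population[: POP_SIZE // 10]
    # ...and refill the population with mutated copies of the elite.
    population = elite + [mutate(rng.choice(elite))
                          for _ in range(POP_SIZE - len(elite))]

final_best = max(map(fitness, population))
print(initial_best, "->", final_best)  # fitness climbs over the generations
```

Because the elite are carried over unmutated, the best fitness never decreases; in the actual animat experiments the interesting extra observation is that integration (minimal phi) rises alongside fitness.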
This would suggest that the driver
for why we are highly conscious creatures is that integration
makes us much more effective at making decisions. Now, lastly, particularly
at a school like MIT, let me come to the point that's
probably the most controversial and that many of you
are going to reject. Which systems are not conscious? Or which system is only
minimally conscious? So first of all, IIT solves
a long-standing problem with consciousness that
Leibniz talks about and that William
James talks about-- namely, the problem of
aggregates. John [INAUDIBLE] also talks about it. Namely, there are 100
people in this room. Is there a super-consciousness? Is there an uber-mind? Many people believe that. Well, there's not. And the theory says
there's not, because consciousness is a maximum over all grains, over
all spatial and temporal scales. So the idea would be that there's
a local maximum here, and there's a local maximum
there within Tommy's brain. But there's no uber-mind. There's no [INAUDIBLE]
of me plus Tommy. Now, what you
could do, you could
do interesting thought experiments that may be
possible in the future. You can, for example, connect
my brain and Tommy's brain with some sort of direct
brain-to-brain transfer where you enhance the
bandwidth between our brains. What the theory says is that
at some point, our brains will become so interconnected
that the phi of the two of us, as a whole,
is going to exceed the phi of each one of us. At that point, abruptly,
my consciousness and his consciousness
will disappear, and there will be
this new uber-mind. But it requires a
causal mechanism. And likewise, you
can turn it back. So you could think about
the opposite experiment. You take a normal
brain, and you slowly, axon by axon, poison or
block the corpus callosum, the 200 million
fibers that connect the left and the right brain. At first, the theory
says, you have a single integrated
consciousness. But as you block more
and more, at some point, the local phi will exceed
the phi of the whole. At that point, the
big phi will abruptly disappear because of the
fifth axiom, exclusion: you pick the maximum,
and two consciousnesses will appear.
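That exclusion step, comparing the integration of the whole against that of its parts and keeping whichever is maximal, can be sketched with a toy proxy (the weakest normalized bipartition of a graph, not real phi). Two fully connected four-node "hemispheres" stand in for the two halves of the brain, and a variable number of bridge edges stands in for the corpus callosum; all sizes and scores are illustrative.

```python
from itertools import combinations

def clique(nodes):
    """All-to-all edges within one 'hemisphere'."""
    return {frozenset(p) for p in combinations(nodes, 2)}

def weakest_cut(edges, nodes):
    """Min over bipartitions of crossing-edges / smaller-part size:
    a crude integration proxy, NOT the real IIT phi."""
    nodes = sorted(nodes)
    n = len(nodes)
    best = float("inf")
    for mask in range(1, 2 ** n - 1):
        part = {nodes[i] for i in range(n) if mask >> i & 1}
        crossing = sum(1 for e in edges if len(e & part) == 1)
        best = min(best, crossing / min(len(part), n - len(part)))
    return best

left, right = range(4), range(4, 8)
hemisphere = weakest_cut(clique(left), left)  # integration of one half alone

for n_bridges in (16, 4, 0):   # intact -> partly cut -> fully split callosum
    bridges = {frozenset((l, r)) for l in left for r in right}
    bridges = set(sorted(bridges, key=sorted)[:n_bridges])
    whole = weakest_cut(clique(left) | clique(right) | bridges, range(8))
    winner = "one mind (whole)" if whole > hemisphere else "two minds (parts)"
    print(n_bridges, round(whole, 2), winner)
```

With all 16 bridges the whole out-integrates either half, so the maximum picks a single "mind"; cut enough bridges and each hemisphere's score exceeds the whole's, and two local maxima take over, which is the shape of the thought experiment above.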
Feed-forward systems have no phi. And most interestingly,
think about computer simulations of brains. So let's say we think of
Henry Markram's system, and let's fast-forward
50 years from now. We have a perfect
computer model that has all the dendrites,
and all the synapses, and all the NMDA
spikes, and calcium channels, and potassium channels,
and genes, and whatnot that's involved
in consciousness. And this computer
reproduces my behavior, both at the input/output level
as well as at the causal level. And people would say, well,
clearly, it's conscious. No. The theory says no. You have to look
not at what it's simulating, but at
its cause-effect power at the relevant hardware level. The relevant hardware level
is the level of the CPU. And so now, you have
to actually look at the cause-effect structure
of individual transistors. And we know a lot about
them, because we build them. We know, for example, that in
the ALU, typically, one transistor talks to three
to five other transistors and gets input from
three to five others in the logic part. So its cause-effect power
is very, very simple. It's very much reduced. And so the theory
says very clearly that this thing will
not be conscious-- this computer
simulation, although it replicates all the behavior. So this really argues
against functionalism. Although the
behavior is the same, even at the level of
simulated neurons, the underlying cause-effect
repertoire is not. It's similar to
simulating a black hole: I can do that in great detail. We know the property of
mass is to bend space-time. Well, space-time, in
this computer simulation, will never bend around the computer. Just like a weather simulation:
it will never actually get wet inside the computer. Well, this is the same thing. You can simulate it. But the simulation is not
the same as the thing itself. So you have simulated the
input, output. But the machine itself
will not be conscious. In order to create
a conscious being,
it doesn't require magic. It requires you to replicate the
actual cause-effect structure. So you want to do
it neuromorphically. You actually want to replicate
the bilipid membrane, the synapses, the
large fan-in and fan-out, in copper wire,
or light, or whatever. You have to do that--
not emulate it, not simulate it, but
actually build it. Then you would get
human-level consciousness. So of these systems,
only the upper-left one would be conscious. So I think this is the
way we know the world. I only know the world
because I am this flame. That's how we
experience the world, and that's the only thing
I know about the world. And of course, we know that,
objectively speaking, the world is more like this: there are many, many
flames of other people and other conscious entities. And I think IIT is
not any final theory. But I think it's by far
the best theory that has been out there in 20 years. It makes nice predictions
computationally. It makes predictions
about the neural correlates. It's axiomatic. It's everything you want
from a scientific theory. In particular, it has predictive
power in non-intuitive places, just like Einstein's theory
early on predicted things like black holes, which
are totally non-intuitive but were finally borne out. So yes, the theory makes
a number of predictions: that you can find consciousness
in very unusual places, maybe in very small animals, and that you may not find it in
places where you think it is. Thank you very much. [APPLAUSE]