Good afternoon. Good afternoon and welcome
to the 2019 Warren Alpert Foundation Prize Symposium. Today, we are celebrating
the achievements of four pioneers whose
collective work has propelled us closer to understanding
the ultimate enigma in human biology, the brain. The discoveries
that we honor today span the fields of genetics,
neuroscience, physiology, bioengineering, and beyond. And together, these
discoveries helped give birth to the
field of optogenetics, a revolutionary approach
that allows us to visualize and modulate neurons with
once unimaginable power and precision by simply
exposing them to light. The work of Edward Boyden, Karl
Deisseroth, Peter Hegemann, and Gero Miesenbock
has not merely transformed our ability
to see the inner workings of the brain, it has
brought us closer than ever to elucidating
some of the deepest secrets of the mind, secrets
such as the neural circuits that are involved in
decision-making and behavior, as well as those involved
in the development of neurologic and
psychiatric disorders. To be sure, the advent of
optogenetics as a discipline has stemmed from the collective
work of many scientists over some decades, but the four
recipients we're honoring today made pivotal discoveries
and developed critical tools
that have rendered optogenetics an indispensable
technique in neuroscience. Now, the idea of tinkering
with the nervous system to illuminate its
functions has actually tantalized scientists
for centuries. Think of the now classic
frog experiments conducted by the 18th century Italian
physician Luigi Galvani, who demonstrated that
information in nerve cells is carried in the form of electricity, the electrical impulse. The notion that light could then be used to manipulate the nervous system is, in a sense, an extension of Galvani's centuries-old ideas. In the 1980s, Peter
Hegemann set out to understand how green algae
and other simple organisms sense light. And in the 1990s,
Peter and his team identified and characterized
the light-activated molecules in algae that enable
them to respond to light. Peter then, along
with Karl Deisseroth, deciphered the key principles,
structure, and function of light-sensitive proteins. In 2002, the notion of
optogenetic neural manipulation became a tangible
reality thanks to work conducted by Gero Miesenbock. Gero demonstrated
that it was indeed possible to use light to
modify neural activity. Gero used light-sensing
proteins from the eyes of fruit flies and genetically
incorporated them into nerve cells. And the achievement
did not merely render neurons sensitive to
light, but also offered a way to control their
activity with light. Gero's work de
facto showed it was possible to use optogenetics as
a tool to study and manipulate the brain. Karl Deisseroth and his team
subsequently conducted a series of key experiments showing
the light-sensing rhodopsin proteins-- the proteins
first studied by Hegemann in single-cell organisms-- could be used to
activate neurons in the mammalian brain, which
Karl and his team discovered had the necessary chemicals to
make these proteins functional. Karl continued to
work on optogenetics, elucidating the structures of
several light-sensitive ion channels and discovering
multiple new optogenetic activators. All the while, he has continued
to use these tools to make fundamental discoveries
about the inner workings of the mammalian brain. Edward Boyden worked on
critical early experiments in optogenetics. He was part of a team, along
with Karl Deisseroth, Feng Zhang, and others, that in
2005 published a key discovery showing that the light-gated
ion channels of algae previously studied by Peter
Hegemann could be used to control neuronal firing. Ed then in his own
independent laboratory went on to refine
optogenetics, developing optogenetic activators to
allow independent control of multiple cell
types in the brain and used optogenetics to
achieve neuronal silencing. Collaborating with pioneers
in holographic microscopy, Ed developed tools that
use optogenetics to impose exact patterns of electrical activity on small groups of neurons, mimicking natural firing. Now taken together,
these discoveries have fundamentally
reshaped the landscape of modern neuroscience. They have set the stage for
optogenetics-based therapies that could one day be used
to restore lost vision, preserve movement following
spinal cord injury, or modulate the circuitry that
fuels anxiety and depression, and many other applications. Recognizing the people
behind this kind of transformative
science, science that carries the promise to
reshape how we understand, diagnose, and treat disease, is
the Warren Alpert Foundation's reason for being. We would not be here
today celebrating these momentous achievements
without the vision of the Warren Alpert
Foundation and its founder. Many of you will have
heard the history behind the birth
of the Foundation, but this rather mythical
story bears repeating. In 1987, Warren Alpert came
across a news article that described the work
of Sir Kenneth Murray, a British
scientist who had developed a vaccine against hepatitis
B. Somewhat impulsively, Warren put down the paper,
picked up the phone, and cold-called Murray. Now fortunately, Murray
answered the phone. And Warren Alpert
announced to him that he had won the Warren
Alpert Foundation prize. The little detail
that was missing, there wasn't the Warren
Alpert Foundation yet. So Warren got to work. He contacted then Dean of
Harvard Medical School Daniel Tosteson and asked
him to help convene a panel of experts that could
choose future award winners. And Dan said, yes. And here we are 31 years later. Over those three
decades, the Foundation has awarded nearly $5 million
to 69 scientists, 10 of whom have gone on to
receive Nobel prizes. I am thrilled today to have with
us members of the Warren Alpert Foundation's board of directors. I saw Fred Schiffman and
Gus Schiesser in the back. And I apologize if other
directors have shown up and I haven't been
able to greet them. But on behalf of the scientific
community at Harvard Medical School and the world
over, I certainly want to thank the Foundation
for its support of science and discovery and for
its indefatigable efforts to alleviate human suffering. I want to congratulate the
winners of this year's Warren Alpert prize and to
express my deepest admiration for their
transformational achievements. Now, I'm going to
turn the podium over to our symposium
moderator, who is himself one of the preeminent
neuroscientists of our time, Bernardo Sabatini. Bernardo received his
undergraduate degree in biomedical
engineering from Harvard. He went on to earn a
PhD in neurobiology, as well as the MD degree
from Harvard Medical School. He pursued postdoctoral training
at the Cold Spring Harbor Laboratory. Bernardo is a Howard Hughes
Medical Institute investigator, a member of the National
Academy of Sciences, and currently, the
Alice and Rodman Moorhead III Professor
of Neurobiology in the Blavatnik Institute
at Harvard Medical School. Bernardo and his team seek to
uncover the basic mechanisms that underlie brain plasticity,
a critical feature that allows mammalian brains
to acquire new behaviors, to learn, and to adapt to
new cognitive challenges. And the ultimate goal of
Bernardo and his team's work is to define the perturbations
in these processes that can give rise to neurologic
and neuropsychiatric disorders. Please join me in
welcoming Bernardo. Bernardo. [APPLAUSE] Thank you, George, and
thank you, everybody, for coming to this wonderful
occasion in which we're going to celebrate the arc
of discovery and invention that led to the field
of optogenetics. And as Dean Daley
just mentioned, the excitement in optogenetics
for the neurobiology field is that it brought
systems neuroscience into a modern era in which
causal experiments suddenly became possible. And so from the times when
scientists first put electrodes into the brains of
animals, they found that there were
neurons that reflected very specific features of the
environment of the animal, of the state of the
animal, or of the motor action of the animal. And these beautiful
studies over many decades led to precise theories
as to how the brain could perform computation, store
information, or generate motor action. But the problem was that, while all of these theories were very beautiful, in most cases we lacked the tools to really test them with high precision. And optogenetics has
given us the kinds of gain and loss of
function experiments to control neural
activity that allows us to test if these
theories are right and if the activity
of particular neurons is necessary and sufficient
to explain parts of behavior and signaling within the brain. Now as Dean Daley
mentioned, it's been decades that
people have wanted to use light to
control neural activity and the literature is littered
with many failed attempts to do so. And there's a very nice
essay from Francis Crick, the codiscoverer of
the structure of DNA, that he gave the Royal Society
in the early '90s, in which he talks about how one
would like to be able to manipulate the activity
of cells in the brain remotely. And he said, the ideal
signal would be light. This seems rather far-fetched, but it is conceivable that
molecular biologists could engineer a particular
cell type to be sensitive to light in this way. And so the four people
that we're honoring today are the ones that took that idea
that was deemed far-fetched, and made it a reality. And what I like about
the story of optogenetics is that it brings
together science that's done by many
different people with many different styles. And so we're going to see
examples of a biophysicist who wanted to solve a problem
that he encountered in nature, pure curiosity-driven science
to understand how unicellular organisms detect and
respond to light. We'll see other
examples from somebody who wanted to solve problems
in his own laboratory. He wanted to drive his
own research forward, had roadblocks, and
invented technologies to get beyond them. And we'll also see
examples of people that took an idea and almost
on an industrial scale, created dozens and dozens
of permutations of that idea to push a field forward and endowed us with the tools that we need for optogenetics. So very different
kinds of science that all came together
to create this. Now, I've told you a
little bit about why I'm excited about
optogenetics, but I want to spend just a
couple of more minutes on why you should be
excited about optogenetics and why the Alpert
family should care. And George touched
on these things. And there are
really two reasons. One is that optogenetics
has allowed basic discovery into the brain
that's now giving us, we hope, new ways to treat
neuropsychiatric disease. And so we've been
able to identify cells in the brains of animals
that make them continue to eat, for example, even
though there's no caloric drive, and hence might be relevant
to obesity and diabetes. We found cells in the
brain that mediate anxiety, that mediate feelings
of anhedonia, other cells that exacerbate
or can correct the symptoms of Parkinson's disease or that
drive forward processes related to Alzheimer's. And all of this work has
led to a new appreciation within pharma that
in order to treat neuropsychiatric
diseases, one might have to target circuits instead
of pursuing specific molecules. And I think that
that's going to be the wave of therapy for
neuropsychiatric disease in the future. Second, as George
mentioned, in the future, it's quite likely that humans
with optogenetic manipulations of their brain will
walk around, and they will use these manipulations
to correct perturbed patterns of activity within the brain. There are already
trials going on in which optogenetic
actuators are put in the eye to restore light sensitivity
to the retina of individuals who have lost their own
endogenous light-sensitive cells. So today, we're going to
hear from four leaders that we've chosen
within this field, who over decades in their
own laboratories have driven this field forward. Some, it turns out, have done it
somewhat accidentally by, as I said, studying natural processes, and then, realizing the importance of what was there, continuing to drive the field forward. And others have
made it their mission to simply solve this problem. We have a jam-packed
day, so we're going to try to keep on time. We're not going to do
questions and answers, so you'll have to find
the speakers at the breaks if you want to talk to them. I'm going to keep the
introductions short. I'm not going to list
the hundreds of awards that these four
have collectively won because today, it
only matters that they won the Warren Alpert prize. So our first speaker
is Dr. Peter Hegemann. He is the Hertie Professor of
Neuroscience at the Humboldt University in Berlin, a
very storied institution that's made amazing
contributions over the last century. And he has really worked
on optogenetics, I think, his entire scientific career. His thesis work was
entitled "Purification and Characterization of the
Functional Chloride Pumps-- Halorhodopsin," one
of the key molecules that we still use
today to manipulate the activity of cells. His first independent group
at the Max Planck Institute in Martinsried was called
photoreceptors of microalgae. So basically, his
entire life, he's worked on this problem of how
unicellular organisms sense light, detect light,
and react to light. And through his
curiosity-driven research, he has found, and characterized,
and eventually identified with collaborators the
central proteins that became the first wave of
optogenetic activators that made all this possible. Peter, I look forward to hearing
the story from your side. Thank you. [APPLAUSE] So dear Dr. Daley,
dear Dr. Sabatini, thank you very much for the
kind words of introduction. And to stand here is
something really particular and a great honor for me. And I'd like to express my
deepest gratitude to the Warren Alpert Foundation,
especially to the selection committee and all the members
that are involved in it. So today, my short talk,
the first 15 minutes, I'd like to guide you
through the history. And I don't want to bother you
with all the biophysics we do. In the second part, I'd like to
show you some examples of what we are doing now. So the first slide which you see
here is the Brandenburg Gate. And this is a sign for
separation of the country. And on the other hand, it's
also a symbol for reunification. And it was my pleasure to
be here on the October 3rd because it's exactly
the day when Germany became united 29 years ago. So my first conclusion
is walls never help to solve any problem. And so my second statement is
when you start a new research project, it should
start with a wonder. So you should wonder about
something you see which you cannot explain. It can be a natural
phenomenon or it can be a disease which is completely unknown or unexplored. So here, you see these waves-- the orange waves--
that occasionally occur in the ocean, sometimes
also here close to Boston. Or if you go to
Northern Territories, you see red snow. And this is called
watermelon snow. And the question is
how is it caused? And the reason is green
algae or algae of the ocean are responsible for
that, like these algae. And this is Alexandrium. This is one of the most toxic
organisms you can imagine. And if you keep them
in your laboratory, you need a good aeration,
otherwise you die. And this is
Chlamydomonas nivalis that is responsible for
the watermelon snow. And it's also the reason why
this is called watermelon snow, because it tastes sweet. So my laboratory is working on
a relative of this Chlamydomonas nivalis. And I forgot to say
the last sentence. The real pleasure in
science is to work on something, the outcome
of which is totally open. So we work on Chlamydomonas
reinhardtii, which is the green model organism. And when we started to study
the behavior, after some years, we realized that it's not
new at all because there was a publication
in 1866 by Andrei Famintsyn, a Russian scientist. And he described
in this article, the behavior of Chlamydomonas. And this is written in German,
published in a French journal, and edited by Saint Petersburg
University in Russia. And he used an assay,
which you see here to the bottom-right, this
population of Chlamydomonas. And you shine light on one
side, and they move away from the light. And he asked already, what are
the conditions under which it moves to or away from light? This person became
famous because he established a botanical
institute in Moscow University later on. And he didn't have the time
to work on algae unfortunately anymore. The other two people
I'd like to mention is the physicist Ken Foster-- he was a postdoc in
Mike [INAUDIBLE] lab-- and a chemist Koji
Nakanishi because they worked on green algae
many, many years later. And they worked
on a wide species that were unable to produce
carotenoids and chlorophyll. And they found that these
are not phototactic. And they added
vitamin A, and they realized that they
could increase the sensitivity of these algae
by three orders of magnitude within one minute after
addition of this vitamin A. And that was the
starting point when I got interested in the work. And I spent some time
in Ken Foster's lab. And his lab was totally chaotic. You couldn't do anything, but he
was inspiring on the other hand as well. So we studied this
species for a while, and then realized
it was strongly dependent on the
ionic conditions as already Famintsyn
said 100 years before. And we tried to establish
electrophysiology. And we were heavily discouraged
by the Chlamydomonas community, because they said it will never
work, except one person, Ursula Goodenough. Where is she? She should be in the audience. Oh, there she is,
Ursula, wonderful. It's great to have you here. And she was the only
person in the community who encouraged me, and provided
a cell-wall-deficient alga, which allowed Hartmann Harz in my group to establish electrophysiology. So he sucked the cell
into the pipette, and applied a short flash. And what he noticed is that there is a fast photoreceptor current, which means ion influx into the eye, and a slow flagellar current, which is influx into the flagella. And he measured the wavelength dependence, demonstrating that it is a rhodopsin spectrum with a 500-nanometer maximum. And we concluded that
these photocurrents are mediated by rhodopsin. We learned over the years to
record directly from the-- oh yeah, thank you-- directly from the eye by
using a patch pipette, and this greatly improved
the sensitivity and the time resolution. And allowed us to record
with a better time resolution, as you see
here, and to conclude that there's no delay. And from these and
other measurements, we concluded that the rhodopsin and the ion channels are directly coupled. And a few years later, we
made another conclusion that they form a light-gated
ion channel together. And the channel conducts
proton and calcium. And it's interesting that we'd
never talked about sodium, because this algae live
in absence of sodium, so quite different from
the neuroscience situation that you're facing now. And these are one
million charges in one of these photocurrents. And we concluded that the conductance is about 100 femtosiemens, which is very small, and almost exactly the number that we know today.
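A rough back-of-the-envelope reading of those two numbers, assuming a driving force of about 100 mV across the membrane (an assumption for illustration, not a figure from the talk):

```python
# Rough arithmetic for the figures quoted above (illustrative only).
E_CHARGE = 1.602e-19          # elementary charge, in coulombs

# "One million charges in one of these photocurrents":
flash_charge = 1e6 * E_CHARGE             # ~1.6e-13 C, i.e. ~0.16 pC per flash

# A single channel of ~100 femtosiemens conductance, at an assumed ~100 mV driving force:
g_single = 100e-15            # siemens
v_drive = 0.1                 # volts (assumed)
i_single = g_single * v_drive             # ~1e-14 A, i.e. ~10 fA
charges_per_second = i_single / E_CHARGE  # ~6e4 elementary charges per second per open channel

print(f"charge per flash ~ {flash_charge:.1e} C")
print(f"single-channel current ~ {i_single:.0e} A (~{charges_per_second:.0f} charges/s)")
```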
We also noticed that there is a low intensity range and a high
intensity range, so they have sensitivity
dynamic range of four log units. And so we established this model. There is a light-activated ion channel responsible for the upper light range, from 1% to 100%, and a lower region, where some amplification is needed. And this has not been
completely solved. We also measured flagellar beating together with the photocurrents, and demonstrated that the appearance of this action potential, which is so important for the later measurements, causes the switch from forward to backward swimming, so it is the trigger for a completely different behavior. Then in parallel, we
worked on the purification of the photoreceptor by using Ken Foster's mutant, reconstituting it with radioactive retinal, purified the most abundant retinal protein, showed that it is in the eyespot, and proposed that this is a light-activated ion channel. No response from the
community at all. Four years later, we got a mail from this gentleman. And he asked us, "I have been interested for some time in potential methods by which mammalian neurons might be transfected with genes whose product would permit light-triggered depolarization and action potentials." That was a very interesting
conversation that we started. And probably most of you know that this is Roger Tsien, who became famous for the GFP studies, and he failed. And the reason why he failed is we sent him the wrong gene. [LAUGHTER] So at least, we have shown it ourselves that it was the wrong gene. The abundant retinal
protein in Chlamydomonas is not the photoreceptor
we were looking for. But the honor of clearly showing the way to go remains yours. So this is my tribute to Roger Tsien, who died much too early. It was always wonderful. He had no small talk. When you met him, he continued the discussion at the sentence where we had stopped three or four years earlier. [LAUGHTER] So then Suneel Kateriya joined my lab, and he discovered in the Kazusa library, the first cDNA library for Chlamydomonas, two new genes, and they were related to
rhodopsin to some extent. And they showed a seven-transmembrane-helix domain and a long cytosolic end, which is about 40% of the protein. And we decided to
express it in oocytes, but didn't have the oocyte
method established in the lab. So we teamed up with
Georg Nagel in Frankfurt, and expressed this
protein in frog oocytes by using two-electrode voltage clamp experiments. And we showed immediately, after three weeks or so, that these are the photocurrents that we had seen in Chlamydomonas. So it was immediately
clear that this was the protein we are looking for. These are the
original experiments with longer illumination times, with channelrhodopsin one, which shows a very strong pH dependence and a moderate cation dependence. And we called this
protein channelrhodopsin, because they unify an ion
channel and a sensory unit. And there are two of them,
channelrhodopsin one and two. So why did Tsien work with channelrhodopsin one unsuccessfully? Because the currents were too small. And then he didn't find a person in his lab to work on channelrhodopsin two. So then after this
experiment, we expressed this in human embryonic kidney cells, the channelrhodopsin two,
because the channelrhodopsin one showed small
photocurrents only. And the next thing
is, what we found is that the seven-transmembrane-helix fragment is enough to trigger this photocurrent, and 60% of the protein is unnecessary. So we had a very
compact, small system which is a sensor
and the ion channel together, so channelrhodopsins. And the conclusion was
that channelrhodopsin can be functionally
expressed in animal cells. So this conclusion
stimulated many, or a number, not many yet,
five laboratories, basically, to work on this. And the first
publication came out where Karl Deisseroth
and Ed Boyden, they demonstrated that this works in hippocampal neurons, and you can apply trains of light flashes, and the response that you get is a train of action potentials. The next person was
Hiromu Yawo in Sendai, who demonstrated that it works in brain slices. And this tends to be forgotten. And the third person
was Stefan Herlitze. He has shown that this
functioned then in an animal. And the first animal
was not the mouse. It was a chicken embryo. And the fourth person was Alexander Gottschalk. He produced a C. elegans line, which allowed channelrhodopsin to manipulate neurons in a living animal. And the last person
is Zhuo-Hua Pan, who demonstrated
it in blind mice, that it can reconstitute vision. And he was also pretty late,
so he has not been in the focus so far. But he belongs to
the key people. And then later on, as
you know, Karl and Ed have taken over the
field, and delivered all those modifications
and other things. And certainly, meanwhile,
it's also heavily expressed in zebrafish, Drosophila, and the mouse, and you'll hear
more about it later. So the technology is
relatively simple. You take DNA from a
microorganism, Chlamydomonas, for example, connect it
with a promoter region from the cell of interest,
pack it into a virus, inject the virus into
the brain, and then you wait for a couple of weeks, and then you replace the needle with a light guide, and then you can study the behavior, among other things. So what you need
is a photoreceptor that is small and
genetically encodable, a promoter element, a
chromophore which is present in sufficient amount. And this was my
biggest surprise, that the brain contains retinal
in sufficient concentration to reconstitute the
opsin efficiently. And certainly, you need a response you can interpret, and probably many responses that are essential for a living organism in the wild are not so easily identified in a mouse in a cage. So the specificity
is a major issue, that you can target
a single neuron. And in parallel, you can
target another neuron with another actuator
or inhibitor, and then you can study
learning and memory, and sleep, and locomotor
activity, and feeding, and certainly, since recently,
vision, hearing, sexuality, autism, addiction, anxiety,
Parkinson's, and so on. And Karl and Ed will
speak about this. So what remained for us? So we went back to our
original starting point, that we wanted to understand
the photoreceptor. And this is the
current knowledge we have about the
channelrhodopsin. This is mainly based
on mutagenesis studies and biophysics, and
also on the exostructure that has been provided
by Osamo Diwaki in Japan. And he used a hybrid which
was originally designed by Hiromu Yawo in Sendai. And certainly MD
calculation that tell us where the water is most
likely in the channel. And you see here the
retinal chromophore which provides the
light sensitivity, and green amino acids that are
responsible for color tuning, and the brown amino acids that
are responsible for conductance and ion selectivity. And we mutated all of
them, and know more or less what they are doing. But the key elements
of the protein are the gates, the central
gate and the inner gate, which are closed in darkness. And you should keep in
mind that the image we have is a dark state of
the channelrhodopsin, so it means that it's closed. And what we still
need is certainly information about
the open state, which is not available at the moment. Also I'd like to bring
to your attention that in contrast
to other proteins, the sensory photoreceptors
are highly dynamic. So they undergo thousands
of conformational changes after light absorption,
and only a few of them are detectable as
absorption changes, because they have relation
to the chromophore. And this can be monitored by
a changing of the absorption wavelengths. This is 470 of the dark state,
and then 500, and then 390, and 520. And this is a main
conducting state. And then it decays
the conducting state in 10 milliseconds, and it
reverts to the dark state only on a seconds timescale. But most interesting part
is not highlighted here, which is the initial state. And by using pump
probe experiments, we studied them in detail
together with Johann Kennis. And if you excite the cells
from the electronic dark state to an excited state, the
conformational changes occur on an energy
landscape to a minimum. And then there is a dissection,
clinical intersection between the excited
state and the dark state. And here the decision is made
to return to the dark state, or to go into the
photo cycle product. So this is a very central
point for the efficiency of this rhodopsins. So decision is made on
a picosecond timescale in a range of 10 to the minus
12 seconds after the flash. Everything else, what comes
later, is a dark activity. So we certainly looked
at the chromophore to understand the system,
and also to manipulate it in a sense that we can use it. And that was done with
Karl many years ago, and with Ofer
Yizhar, his postdoc, who is now at the Weizmann,
and will come tonight. And I gave you a few examples. So if you mutated this residue,
you get larger currents. If you [INAUDIBLE]
this residue, you will get a shorter
open state lifetime, but more importantly, you
remove the voltage sensitivity of the protein. And that allows you to fire
action potential with higher speed, higher frequency,
to study, for example, interneurons. And the third example
is this [INAUDIBLE] and if you mutated
it, you slow down the photo cycle tremendously. And it goes from 10
milliseconds, to 100 seconds or so. And this can be used for
continuous depolarization. So here is one example. You apply blue light. You excite the cells. It fires action potential. And then you apply green light. Then it goes back two states. So this step function
rhodopsins were very useful for future experiments. But also the current
itself was very difficult in a different channelrhodopsin. For example, this
one is inactivating, and it's an invert rectifier. And this is hyperactivating
during a light flash. It's again invert rectifying. And here's another species
which is not invert rectifying. And here is another species
from somewhere near Hawaii, recently discovered. It completely inactivates
in continuous light, for whatever biological
reason, we don't know. So then these
properties are clustered in different
evolutionary branches. And also, what came out,
that the color tuning is very, very important. So we can collect
different organisms to address different cells. So one important experiment
or one important question is shown here, the
question of inactivation. And this is a typical
biophysical question, because it requires
a deeper insight. So if you look at the
photo cycle again, I have shown you the
basic photo cycle, which is shown here again. And we recently found that there
are two conductances, an early and a late conductance. And the first one, the early
one, is proton-selective, and the second one
is sodium-selective. And depending on
the equilibrium, you get a more proton-
or more sodium-selective photocurrent in your
neuroscience experiment. Alternatively, so more or
less competing with a third is this isomerization you
get an soon-anti or anti-soon isomerization, and it produces
a second dark state, which started its own photo cycle. And the open state
in this photo cycle is only weakly conductance, and
is more sensitive for protons. And this is the reason
why in steady state light you get this reduced
steady state level. This one here. So the system is
more complicated than probably, as an
applicant, you might imagine. And the question is how can we
manipulate this selectivity? And we identified some
years ago two key residues, one in the inner gate and
one in the central gate. And if you replace
these glutamate, which is conserved in most of
the channelrhodopsins, you can manipulate the ratio
between proton conductance, which is shown in red,
and sodium conductance, which is shown in green here. And then you can
combine certainly these different mutations to
divide the two further in one direction. But you can also look at the
y type channel rhodopsin. This is Johannes [INAUDIBLE]
done, and compare the proton and the sodium selectivity. And you might end up
with a PsChR, which is almost exclusively
sodium-selective, or you should look at the
Chrimson and CsChR, which is almost exclusively
proton-selective. So only at neutral pH and low
sodium, you get a current here. And at alkaline pH, you get no-- you get no current. So what we engineer
in the lab, nature has done billion years before. You only have to find it. This is a problem. If you don't know
what to look for, then you don't get anything. And Ed's lab has, for
example, identified this Chrimson, which is very
interesting for many reasons. It is almost purely
proton-selective, so you get, at neutral pH and
low sodium, large current, and no current at alkaline pH. But if you mutate this, this
glutamate, at this position, you convert it into
a sodium-selective channelrhodopsin. And what we concluded from this
is that this selectivity photo is in this Chrimson at a
completely different position, so close to the surface,
whereas the central gate is not important at all, and not
existing in this variant. So nature had developed
many different possibilities how to control conductance
and selectivity. And if you look at the crystal
structure, you see the reason. So the water pore
in this Chrimson is blocked at this position. Whereas it's a free flowing
water in channelrhodopsin from Chlamydomonas. So unfortunately,
we have not been successful to produce
a potassium-selective channelrhodopsin. And this is on the
list for a long time. And Karl and I came together
again to work on it. But as long as this is not
finished, we made a compromise, and established two
component optogenetics. And in this case, we combined a
soluble photo-activated enzyme, which is a cyclase that produces
cyclic AMP, and that allowed us to activate a cyclic AMP
activated potassium channel, which can induce
hyperpolarization. Alternatively, we recently
worked on a rhodopsin cyclase, which is also a rhodopsin
with an unusual tail, and this tail is an
enzyme directly coupled to the rhodopsin,
never found before. And this produces
cyclic GMP, and we can use cyclic GMP-activated
potassium channels to hyperpolarize the cell. And here's one example
by [INAUDIBLE],, a postdoc in my group. She used this blue
light-activated photoreceptor enzyme, and combined it with
a small potassium channel from a bacterium. And she got nicely
hyperpolarizing currents, which were very efficient. And due to this amplification, you can use it at very low light intensities, because it drives about 10,000 charges after one photon absorption, which is much more, certainly, than a pump, for example, which transports only a single charge, or an ion channel like the one from Chlamydomonas, which transports maybe between 10 and 20.
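A rough sense of the photon-to-charge gain being compared here (illustrative numbers only; the channel figure simply takes the 10-to-20 range quoted above at its midpoint):

```python
# Order-of-magnitude comparison of charges moved per absorbed photon (illustrative).
charges_per_photon = {
    "ion pump (one transported charge per photon)": 1,
    "channelrhodopsin from Chlamydomonas (roughly 10-20)": 15,
    "photoactivated cyclase + K+ channel (two-component)": 10_000,
}

baseline = charges_per_photon["ion pump (one transported charge per photon)"]
for actuator, gain in charges_per_photon.items():
    print(f"{actuator}: ~{gain} charges/photon ({gain / baseline:.0f}x a pump)")
```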
Now I'd like to show you a few examples. Franziska Schneider, a former student from my lab, now has a group working on cardiac optogenetics. And she tried this, and was able
to inhibit action potentials from the heart cells, and
also to inhibit heart beating in her model systems. So it works nicely. And here's an experiment
in pyramidal neurons. And you see here the
marvelous expression. And here is a small
blue light flash, and it causes a long
hyperpolarization in these cells. And certainly would be better,
and probably more comfortable, to use cyclic GMP
instead of AMP. And therefore
recently, [INAUDIBLE] she established in my lab the
functionality of the rhodopsin cyclases, interestingly,
identified by a theoretical physicist. She became unemployed
and moved to biology, and she discovered
with her friends these rhodopsin cyclases. And this is good for
controlling cyclic AMP, and also for
hyperpolarizing cell. So these are only
a few examples. And I'd like to summarize,
the major player is still the light-activated
cation channel. What we are still missing is
a potassium-selective channel. It has been complemented by
anion channels and pumps, and also, since some time,
light-activated enzymes that complement the ion transporters. So this is still
the major player. I could continue, but
in the favor of time, I'd like to finish. My conclusion is algae have
taken over the brain research. And if we continue to
destroy the climate, they probably will take over
the planet, and control it, as they have done over 3 billion years. And I still have hope
that will not happen, that the human species will
also survive for some time. And this is my group. And I'd like to express
my gratitude to, many thanks to all
my co-workers I had the privilege to work
with, to my photoreceptor and neuroscience
friends and colleagues, and to the Chlamydomonas
community where all the business have started. And certainly, I
had collaborators over the last three decades, and
a few I would like to mention. First of all, Karl, who
worked with us for the last, probably, 12 years or so. And he transferred
all the knowledge to the neuroscience community, and he had a profound
molecular knowledge, so it was always a
pleasure to work with him. Thanks, Karl, for that. And Ofer Yizhar, his former student, now a group leader at the Weizmann. He established optogenetics
at the Weizmann Institute, and Georg Nagel,
who worked with us in earlier times, and certainly many spectroscopists and crystallographers. And I'd like to thank
you for your attention. [APPLAUSE] Thank you. Close. OK. Thank you so much for
that wonderful talk. So we have two halves to
the presentation today, and in each one, we have two
of our award winners speaking. Interspersed
between those talks, we have two postdoctoral
fellows from laboratories in the Department of
Neurobiology at Harvard. And so these are talks
by junior scientists who are using optogenetics
in their own research to advance the inner
workings, in this case, of the mouse brain
for both of them. So the first of
these talks is going to be from Kimberly Reinhold. She did her undergraduate
work here at MIT, and then went to University
of California in San Diego to do a PhD with
Massimo Scanziani, and then I was lucky enough
to have her join my laboratory here on the quadrangle. And she's going to tell us about
her work using optogenetics to both activate
and suppress neurons in the brain to uncover how
a mouse learns a new skill. Kim? [APPLAUSE] Thank you. It's a real pleasure to be
able to share a vignette of how we apply optogenetics. When I was in college
I was required to take a PE class, a
physical education class. So I signed up for squash
because I had never before attempted a racquet sport. And the first day
I showed up and I tried to hit the
ball with the racquet and I was very, very bad. The instructor sent me to a
court by myself to practice. And I did, I practiced. And I attended all the
classes that semester. And at the end of
the semester I still couldn't hit the ball
with the racquet, but other students
seemed to learn. Squash is an example of how we
learn through trial and error. We learn to associate
sensory inputs, like the ball flying at my head,
with appropriate motor outputs. Maybe swinging the racquet,
or in my case, running away. And we learn these associations
through practice and feedback. Trial and error learning
is a fundamental component of many different
cognitive processes. Therefore it's vital we
understand where in the brain and how it occurs. What do we know? Well, we know that people
with Parkinson's disease have damage to the basal
ganglia, a set of nuclei deep in the brain
outlined in green. And we know that these people
are impaired in trial and error learning. These people can learn
things like episodic facts, people's names, the time of
day when something happened. But they have deficits,
both in motor learning-- like the squash example-- and also in purely cognitive
trial and error learning tasks. Interestingly,
people with damage to a different part of the
brain, the temporal lobe, are amnesics. So these people
can't learn things like the color of the
experimenter's clothes, but they can learn,
practice-based tasks, trial and error tasks. And so we see a
dissociation that suggests that the basal
ganglia specifically support trial and error learning. And this has been confirmed
in a number of model species. To figure out what
goes wrong in disease, it's important we understand how
trial and error learning works in a healthy brain. So today I'll tell you about
our work to try to do this. To try to nail down more
precisely where in the brain trial and error learning
is computed, and how. First I'll explain our approach. We've developed a
task in which mice learn through trial and error. There are two stages-- learning, and then execution
of the learned behavior. I'll ask if basal
ganglia are needed after the mice have learned. Whether basal ganglia are
needed during learning. And if they are, how? To ask the question, are
basal ganglia needed, we'd like a way to shut
off these structures and look for
effects on behavior. Diseases and stroke do this,
they shut off brain areas. And lesion studies do the same. But those changes are rarely
specific to a single brain circuit, and unfortunately
irreversible. There are other kinds of changes
we can impose in the brain. And these have higher spatial
and temporal precision, but what we're
really looking for is a technique with
high spatial precision that also has excellent
temporal precision. For example, able
to probe really fast cognitive processes,
like the learning update that happens between swings
of the racquet happening on seconds. And here, optogenetics
fills the gap. Let me first tell you
about the task we use. Mice, like humans,
have basal ganglia that receives sensory inputs
and project to motor outputs. To study this area
we've designed a task where mice learn
through trial and error to associate a sensory component
with a motor component. The sensory component,
while we could have chosen an
external stimulus, like a flash of light,
which would activate the eye and a number of
visually responsive areas throughout the brain-- many of which themselves
project to the basal ganglia-- now we have a lot
of active brain areas in different
regions, and it becomes difficult to follow
the flow of neural activity through the brain from
start to motor output. So we decided to play a trick. We restrict the cue to be the
activation of just one brain area, the visual cortex. This is well
studied, accessible, and has a specific projection
into the basal ganglia. We can use viral
and genetic tools to express an optogenetic
protein specifically in these neurons,
with cell bodies in the cortex that send their
axons into the basal ganglia. And then we can implant a fiber
through the skull of the mouse, shine blue light through
that fiber into the brain, and that will activate
the optogenetic protein channelrhodopsin, which
you heard a bit about. These cation channels. And that causes the cell
to depolarize and fire action potentials. So we turn on neural activity
in the specific population. The motor component of the task
is a reach to a food pellet. We make the mice
do this in the dark so they can't see the food. And we've ensured that they also
can't smell or whisk the food, so they have to use the
forearm and the forepaw to see if the food
pellet is there. So the task proceeds like this. The mice reach a lot. Often there's no food
there, they're just hoping. The optogenetic cue turns
on, and then they reach and there is food. So they have to learn
through trial and error that the optogenetic
cue predicts food. Mice learn to do this. Here I'll show you a movie where
pellets come into position. Let's see if I can
find a pointer. Thank you. So you'll see that pellets
come into position. And then the cue, the
optogenetic cue turns on. So keep your eyes on this blue
circle when the light flashes. That means we're stimulating
neurons in his brain. So the cue's about to turn on. There, it turned on. And he's learned that that
means the food is now available, so he reaches and
grabs it and eats it. Remember, this is
happening in the dark, so he can't see the pellet. We're spying on him
with an infrared camera. The light here is just a
flashing blue light that has nothing to do with the cue. And it's just to ensure
that the mouse doesn't reach the flashing lights. And we have a variable interval
between pellet presentations to make sure he's
not counting time. So he's learned this task. And we can plot his behavior
across many cue presentations or trials on the y-axis versus
time in seconds on the x-axis. And then put his
reaching onto this, and see that sometimes
he successfully grabs the pellet, other times he drops
it, sometimes he misses it, and often he reaches and
there's no pellet there. But what's important is that
when we add up all the reaching across all of these trials
and plot that as a histogram, with reaches on the y-axis
and time on the x-axis, we see this huge increase in
the frequency of reaching right after the cue.
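A minimal sketch of the kind of peri-cue histogram just described (our own illustration with made-up event times, not the lab's analysis code): reaches are aligned to each cue onset and counted in time bins.

```python
import numpy as np

def peri_cue_histogram(reach_times, cue_times, window=(-5.0, 5.0), bin_size=0.25):
    """Count reaches in time bins aligned to cue onset (times in seconds)."""
    bins = np.arange(window[0], window[1] + bin_size, bin_size)
    counts = np.zeros(len(bins) - 1)
    for cue in cue_times:
        aligned = reach_times - cue                   # reach times relative to this cue
        counts += np.histogram(aligned, bins=bins)[0]
    rate = counts / (len(cue_times) * bin_size)       # mean reaches per second per trial
    return bins[:-1], rate

# Hypothetical data: cues every ~20 s, reaches clustering ~0.5 s after each cue,
# plus spontaneous (non-cued) reaches scattered throughout the session.
rng = np.random.default_rng(0)
cues = np.arange(10.0, 310.0, 20.0)
reaches = np.sort(np.concatenate([cues + rng.exponential(0.5, size=cues.size),
                                  rng.uniform(0.0, 320.0, size=100)]))
bin_starts, reach_rate = peri_cue_histogram(reaches, cues)
print(reach_rate.round(2))   # the peak sits just after time zero (the cue)
```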
And this tells us that he has learned the association between cue and food. We've taught a number
of mice to do this. Here's the mean
and standard error. And we think that these
animals are paying attention to the optogenetic cue,
because when the cue turns on, they reach. This is the same thing I
showed you on the last slide. But when on a random set
of trials we omit the cue, the mice don't reach, even
though the pellet is there. And when we omit the pellet
but the cue still turns on, the mice do still reach. So it seems that animals
can learn this cue response association. Are the basal ganglia needed? The basal ganglia
are a good candidate to link the cue to
the motor output, because these structures
receive diverse sensory inputs and project to motor outputs. And because there is a direct
pathway from cue activated cortex into the input
nucleus of the basal ganglia called the striatum. And I now want to focus on this
input nucleus, the striatum, and in particular,
the subregion that receives that input
from the cortex, that receives that cues signal. This is the dorsomedial
tail of the striatum. And for the rest of the
talk when I say striatum, I'm referring to this
part specifically. Does striatum link the
cue to the motor output? Does it trigger
the motor output? Well, if this simple
linear model is correct, then we should be able to see
a change in neural activity here around the time of the cue. So we test this by
recording neural activity in the striatum. We can acutely implant
recording electrodes into the animal's brain and
record the activity of neurons. We see action potentials
here, or spikes. And then we can just draw a
line every time we saw a spike. And represent a neuron's
activity in this way, where the different rows are
different cue presentations, and the x-axis is time. So you see that this
neuron has some activity around the time of the cue. And we've found a
number of such neurons which seem to show changes
at the time of the cue. So this might be
triggering the reach. Second, if this linear
model is correct, then when we shut off this
striatum, the animals should no longer be able
to do the cued reach. So we have an optogenetic
approach to do this. There are output
neurons in the striatum. But there is a second general
class of neuron in this area, and these are locally
projecting inhibitory neurons. So we can put a red
activatable optogenetic protein into these cells,
shine red light through two bilaterally
implanted fibers, basically turn on these
inhibitory neurons, and they act to shut
off the neurons that project out the output neurons. And so essentially
what we're doing here is performing a spatially
and temporally precise loss of function, where
we're shutting off specifically the part of
striatum that gets the cue input, and then
that sends output to the rest of the brain. And we want to know whether
that output of striatum is needed for the cued reach. Importantly, previous
work has shown that inactivating this part
of the brain using drugs doesn't paralyze
the animal's arm, so the animal can still move. What does red light
do to neural activity? Let's take a look at this
cell that we saw before. What we find is that
turning on the red light prevents spiking in this neuron. And this is an inactivation
lasting a second. But at the end of the red
light, activity comes back. So unlike a lesion,
this is reversible. We can suppress the activity
of the cue responsive cells. You can see right here. And across all of the
striatal projection neurons that we recorded, we
see about an 86% reduction in activity. So the red light
suppresses striatum. We can turn on the red light
on a random set of trials at the time of the cue and
ask if the mouse can still do the cued reach. So here's the cued reaching
you've seen before. And now we want to
know what happens when we eliminate striatum. Can the mouse still
do that cued reach, or is cued reaching gone? What we see is that
the mice are perfectly able to do the cued reaching. And in fact, there's no
change in reaction time, and there's no change
in the animal's ability to successfully grab
the pellet and eat it. So we see no motor
deficits at all. So it seems that striatum does
not trigger the cued reach after learning, and there
must be some other brain area that serves this function. We don't know yet. We have some ideas. It turns out these neurons
that project to the striatum also have collaterals to
the thalamus, the pons, and the superior colliculus. And we're investigating now
whether one of those areas might be the link. But I began the
talk by telling you that the basal ganglia are
really important for trial and error tasks. So maybe the basal ganglia
are needed during learning. In order to test this,
we have to have a way to measure learning. The mice make cued
reaches, but they also make spontaneous reaches
before the cue even turns on, just hoping there
is a pellet there. And we would expect
that learning involves an increase
in cued reaching, plotted here on the
y-axis, and a decrease in non-cued reaching,
plotted on the x-axis. And this is what we see. Here's an example learning
trajectory from a single mouse. You can see it
from the first day to the last day the animal
increases cued reaching. And there is a little decrease
in the non-specific reaching. So we can plot the direction
of this learning change by putting on a vector from
the first day to the last day. And then across mice
we see that all animals learn by modifying
behavior in this way. Interestingly, when we shut
off the striatum for one second on every presentation
of the cue, animals do not show the
normal pattern of learning. Their behavior is disrupted. And here are the averages
for those two groups. And so it seems the striatum
is needed during learning. We can plot this data
in a different way and combine the x
and y-axes by asking, how much more cued reaching
does the mouse do with respect to non-cued reaching. And this is a metric
we call D prime. It doesn't matter; it's just a way to quantify learning.
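The exact formula the lab uses isn't spelled out in the talk; a standard signal-detection version of such an index, shown here only as an illustration, compares the probability of reaching after the cue (hits) with the probability of reaching when no cue is given (false alarms):

```python
from statistics import NormalDist

def learning_index(p_reach_after_cue, p_reach_without_cue, eps=1e-3):
    """d-prime-style index: z(hit rate) - z(false-alarm rate), with rates clipped away from 0/1."""
    z = NormalDist().inv_cdf
    p_hit = min(max(p_reach_after_cue, eps), 1 - eps)
    p_fa = min(max(p_reach_without_cue, eps), 1 - eps)
    return z(p_hit) - z(p_fa)

print(round(learning_index(0.10, 0.10), 2))  # ~0.0 : reaching is the same with or without the cue
print(round(learning_index(0.80, 0.10), 2))  # ~2.12: strong cued reaching, little spontaneous reaching
```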
And I'm going to plot it on the y-axis, and then the day of training on the x. To give you a sense
for what this means, low learning values-- learning index values--
mean no cued reaching. Middle values, the mouse is
starting to do cued reaches. And at high values,
the animal is performing really
stereotyped cued reaches. So animals with the striatum
intact learn this task. But when we inhibit
striatum, mice don't learn. Maybe my striatum
was asleep when I was trying to learn squash. Importantly, we
haven't permanently damaged the mice
in this red cohort, because we can perform
a recovery experiment. So we stop the manipulation,
and now these animals have the striatum
intact, and they are able-- the same
animals-- are able to learn. Maybe hope for my
squash game yet. So it seems that
striatum is needed to learn the cued reaching. Can we get any idea of how? One hypothesis is
that the striatum might get information about
the outcome of the animal's behavior in the past,
and use that feedback to update the animal's
behavior in the future. And that could happen at
different time scales. This updating could be
fast, as in the case of the squash racket swing. Where you swing,
you're not very good, you swing again,
you're a little better, but you're still not very good. And that update is very rapid
on the timescale of seconds. Or we can imagine a student
cramming for an exam. Learned a bunch of information,
but that doesn't really get into memory until the
little power nap begins. And maybe here the
update is on a timescale of minutes to hours. So if we had a way to measure
learning on a faster timescale, we could ask about the basal
ganglia involvement on a faster timescale. I've showed you we can
measure learning across days. If we had a way to similarly
measure learning across trials within a day, then we could look
at faster cognitive mechanisms. So now instead of looking at
the probability of reaching, we're going to look at the
animal's reaction time, which is the time delay
to the first reach. And we're going to
compare the reaction time early in the day to the
reaction time 100 trials later. If the animal is improving,
then his reaction time is speeding up. We're going to plot that
improvement, that speed up, on the y-axis. On the x I'm going to
show the change we expect if the animal is simply
modifying the rate of reaching before that cue. So this is reaches
before the cue, so it has nothing to
do with cued reaching. And so we have a non-cued
component and a cued component. We find that mice learn
within a day, shown here. They increase cued reaching
and decrease non-cued reaching. Does striatum store this
accumulated learning within a day? If it does, then when
we shut off the striatum at the end of the day-- which we can do with
optogenetics very precisely, wait and then shut
off striatum-- we expect that the
animal's behavior will fall back to where it was
at the beginning of the day. But if striatum does not
store this learned accumulated performance change,
then we expect that there will be little
change in the cued reaching. And this is what we see. So it seems the striatum
does not store the within day improvement. But if we disengage
striatum early in the day and turn off the
striatum on all trials, we see that the animals
never seem to build up or accumulate that
improved performance. And we can see
incremental updates. When the mouse touches
the pellet, he improves. But this incremental
improvement is reduced when we shut off the striatum. So we favor the hypothesis
that the striatum acts on a very fast
timescale to update behavior. And so in conclusion,
what I've showed you today is that activating a specific
set of neurons in the brain is sufficient to teach
mice a cued response. The striatum isn't
needed after learning, but it is needed
during learning. And we think it's
needed specifically to provide these fast
incremental updates. And so a picture emerges. This is just a
cartoon, but we can imagine that the striatum is
the arrow pushing the animal through behavior space. And that in healthy learning,
these incremental updates add up to bring the animal
to some optimal place. Maybe addiction is
overcharged learning. Maybe there's a deficit in
Parkinson's in that updating. Maybe Tourette's
or OCD or wandering to the wrong part
of behavior space, or the brain getting
stuck in a local minimum. We think that
perhaps the deficits that we see when we
shut off striatum, while small on a
fast time scale, could really add
up over a long time to produce the really
serious problems that we observe in basal ganglia
dysfunction and disease. So I'd like to thank the
people who contributed. Of course, all of the people
who developed optogenetics, thank you. Bernardo Sabatini, my
advisor, and Marci Iadarola, an excellent technician
who's been working with me on this project. Thank you. [APPLAUSE] Thank you so much, Kim. So our next speaker is Dr.
Edward Boyden from MIT. Ed is the Eva Tan professor
of neurotechnology at MIT, and also a professor
at the Media Lab, as well as an investigator
of Howard Hughes Medical Institute. Ed did his undergraduate
work at MIT, where apparently he
studied quantum computing. That's what I
learned from his CV. I was surprised about that. And then went to Stanford to
do his PhD in neurobiology, working with Jennifer
Raymond and Richard Tsien, who is the brother of Roger Tsien that we heard about earlier,
somewhat ironically. While he was at Stanford he
worked with Karl Deisseroth and Feng Zhang to first
put channelrhodopsin into neurons and show,
as we learned earlier, that one could control the
activity of mammalian neurons with this tool. Afterwards he came to MIT. And I think his
lab is really one of the broadest and
yet consistently creative, in many
areas, laboratories that I've ever seen. His group has worked, of course,
on finding new and engineering new optogenetic
actuators, and we've heard a little bit about that. His group has also
developed robotic systems for doing electrophysiological
analysis in the brain. He's worked on
new amplifiers for electrophysiological analysis. And most recently,
he's developed what almost seems like
a comical approach, but is really incredible,
which is that in order to look at tissue at
higher resolution, he decided to not
improve the microscope, but instead make
the tissue bigger. And thus invented the field
of expansion microscopy, which is really providing some
remarkable insight into how complex tissues are organized. Ed, welcome. [APPLAUSE] Great. Well, first I'd like
to express my gratitude to be here receiving this
award with my good friends, collaborators, and colleagues. It's a tremendous honor. And I'm excited to get to
talk today about optogenetics, but also how
different technologies might fit together in this grand
quest to understand the brain. The brain is so
complicated that I think we need to think about
an integrated toolset that lets us make maps of the brain,
that lets us control the brain. And optogenetics, of course,
is one of those key techniques. And then what you might call
the opposite of optogenetics-- to watch the brain in action. And interestingly,
tools of that nature are emerging from the
optogenetic toolset itself. Why is it so hard? Well, the brain, amongst
biological systems, exhibits extraordinary
spatial complexity and temporal complexity. So if you think about
it, you know, brain cells are enormous, right? They're centimeters
in spatial extent. We have neurons
a meter in length going down our spinal cord. And yet, if you care
about the building blocks of neural
computation, you care about axons and dendrites,
synaptic connections, and biomolecules
within brain cells. So how do you see
across all those scales and control across all those
scales of spatial extent? And there's also time. So of course, if you care
about learning or memory or Alzheimer's disease
or development or aging, these are your long
term processes. They take hours
to days to years. And the quantal building
blocks of brain computations, though, are these millisecond
time scale electrical pulses. So if we're thinking out ways
to build tools to address brain questions, you really have
to think fundamentally about space and time. And so today, I'll tell you
about sort of our thinking about these properties
that yielded principles of how to discover and engineer
tools focusing on optogenetics. But towards the end,
I hope to also talk a little bit about how we're
trying to develop strategies for imaging neural activity. And of course, if you can see
and control neural activity, that's great. But it would be nice to
have a map of the brain to know where in the
brain to look or perturb neural activity. And my hope is that if we can
stitch together these tool sets, comprehensive,
emergent pictures of how brain circuits work might
become more and more feasible. So I'll start with optogenetics. So you've already heard
through the first couple talks the introduction of the idea. I first met Karl when
I got to Stanford, and we started brainstorming
about how would you control neural dynamics
and sort of started going to the laws of physics. Could you use magnetic fields? Could you use light? And light, of course,
would be great, as Francis Crick
independently also outlined, because it's as fast
as anything ever gets, and you can aim it at things. You have to bring
light into the brain. And as much as people
have brought electrodes into the brain for
over a century, you can bring in optical
fibers or other kinds of optical devices. This next question
becomes, do you make a molecule that converts
light to electricity, or do you find it? And as Peter already
introduced, there's a family of microbial
opsins the study of which goes back many decades which
in single celled organisms will convert light into
electrical signals. So the first of these
to be characterized was actually a
light-driven proton pump shown here in structural form. It's a seven
transmembrane protein with an all trans
retinal chromophore that absorbs the light. And you get these rapid
conformational changes of what the discoverers
named bacteriorhodopsin, a light-driven proton pump. And that's found in
halophilic archaea-- microbes that live in
really salty water. Now a decade later,
several groups found in the same species a
light-driven chloride pump which they named halorhodopsin. And it shares some
similarities but differs in certain specific
key residues that makes it a chloride pump rather
than a pump of positive charge. And then you've already heard
about Peter and his colleagues who discovered the
channelrhodopsin, these light-driven ion channels. Originally, the ones
they found were cationic, and now we also know there
are inhibitory ones that let negative charge through. So for me, one of the
key interesting papers that got me interested
in these opsins was from 1999 from
[INAUDIBLE] and colleagues. And at the time, these molecules
had been characterized in these halophilic archaea. So they worked at very
high salt concentrations. Here's electric current
versus chloride. And you can see the peak is at
a very high level of chloride. And if your brain is like
mine, then the chloride is down around here. So this molecule
wouldn't work very well. This is a halorhodopsin
light-driven chloride pump from a specific species. But one of these molecules
had, for whatever bizarre evolutionary
reason, its peak function in the low
chloride regime. And so this is actually
one of the first molecules that Carl and I started
collecting from colleagues. The first one that we tried
out was, of course, the one that you've already
heard about from Peter that was discovered by him and
his colleagues-- the channelrhodopsin 2. And we put the gene into
neurons, shined brief pulses of blue light just from
a standard light source for seeing GFP at the time. And all of a sudden on the
first try, what we found was that you could
drive action potentials in cultured hippocampal neurons. And also, it didn't require
the all-trans retinal, the chemical
co-factor, to be added. For whatever strange
reason, mammalian neurons made the chemical co-factor. So a lot of what we've been
doing over the time since has been trying to
figure out, well, what are the principles
of finding these molecules and pushing their physical
properties to the limits of speed and
spectral sensitivity and all of the other parameters
that we would like to achieve? And since this is
a summary slide-- and I'll go through some of
the examples of molecules in the following slides. But what we found is
that members of all three of these classes-- the
light-driven proton pumps, the light-driven
chloride pumps, and light-driven
ion channels-- can be found that actually are safe
enough, effective enough, fast enough, and powerful enough that
they can work in neurons which are, of course, a bit of a
delicate environment with lots of complex physiology. So Brian [INAUDIBLE], when
they were working with me, looked at light-driven
proton pumps and found that a member of this
family from the archaerhodopsin class, if you genetically
express it in neurons and then shine green
or yellow light, will powerfully pump
protons out of neurons and silence their
activity quite powerfully. Halorhodopsins do
inverse pumping but of a negative charge. So it has a similar
physiological effect although different
in a biophysical way. They pump chloride in response
to green or yellow orange light and hyper-polarize the
neuron, shutting it down. By deleting neural
activity, you can look at the necessity of a
set of neurons for a behavior or pathological state. And then the
channelrhodopsin 2 we put into neurons back in 2004. And you can shine blue
light on the neurons and let positive charge in. And you can activate
them, letting you investigate what
those neurons are sufficient to trigger. And I should mention
that Amy Truong, when she was a grad
student in our group, pushed halorhodopsins
out into some of their different
physical limits. And Nathan [INAUDIBLE], when
he was in our group, tried to do the same, and succeeded,
with channelrhodopsins. So light-driven proton pumps-- this is sort of surprising
to us that this worked. We don't think of protons
as very abundant in neurons or outside neurons. At neutral pH, they're
orders of magnitude less concentrated than
sodium and potassium and the other ions we often
think about in neurophysiology. So to our surprise, we found
this molecule, archaerhodopsin 3, when we
put it into neurons, allowed us to make
large photocurrents and to silence neural activity
even in awake behaving mice. So this was really, to our knowledge,
the first near-100%, nearly digital silencing
of neural activity in awake behaving animals. And we find these molecules
by either searching genomic databases or
by sometimes doing our own genomic investigations. And there are different
strategies that you can take. So one, of course, is if you
find a molecule that you like, you can search locally
in genomic space. Look at species
related to the species that you found the
molecule from and see if you can find improvements. So for example, there's an
archaeal species from which the archaerhodopsin 3 came. And [INAUDIBLE]
and Brian continued to search in species
closely related. And the original molecule Arch
was powerful at silencing, but a related molecule, ArchT,
was even more powerful. You can also search
broadly in genomic space. So [INAUDIBLE]
Brown had discovered that a species of fungus,
actually, Leptosphaeria maculans, had a
light-driven proton pump. And we got the gene, which
we nicknamed Mac for short. And we found that it was
also able to silence neural activity. And because Mac
had a color shift-- this is the action spectrum, the
current on the y-axis and color on the x-axis-- you could express Mac and a
more redshifted opsin in two different neurons and then use
two different colors of light to differentially affect them. So a Mac expressing neuron would
be more silenced by blue light, and a neuron
expressing a more red-light-sensitive molecule would
be more silenced by red light. That molecule that was
more silenced by red light was actually that very
same halorhodopsin that I mentioned
earlier, which had been found, in the [INAUDIBLE]
and colleagues paper, to be sensitive to light in
a realm of salt concentration that was much lower
than you might expect. And this molecule-- we published
the first proof of concept neural silencing back in 2007. But it was a fairly
weak molecule. The currents were
not as impressive as we might have hoped. So we started thinking about
the same genomic search properties and
thought, well, could we find molecules that are
light-driven chloride pumps that are much more powerful? And you could also
find molecules that have a red shifted
spectrum of activation. Now why would you want that? Well, when Amy [INAUDIBLE] was
a PhD student working with me, we started thinking about
the propagation of light in the brain. And of course, this is well
known by many investigators long before us. But if you put blue, green,
or yellow light into the brain there is absorption and
scattering of the light. But if you go to red
or even infrared light, there's less absorption. And that's one of the reasons
why blood looks red, right? It doesn't absorb
as much red light. And so the top are
models and the bottom are actual measurements
we made which suggested that redshifting
molecules could be quite powerful.
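To make that intuition concrete, here is a minimal sketch of the kind of exponential attenuation argument being described. The wavelengths and effective attenuation coefficients below are illustrative placeholders, not the measurements shown on the slide.

```python
import numpy as np

# Beer-Lambert-style attenuation: I(z) = I0 * exp(-mu_eff * z).
# The coefficients are invented placeholders (per mm of tissue), chosen only
# to illustrate that redder light typically attenuates more slowly.
mu_eff_per_mm = {"blue (~470 nm)": 2.0, "yellow (~590 nm)": 1.0, "red (~630 nm)": 0.5}

depths_mm = np.linspace(0.0, 3.0, 7)
for label, mu in mu_eff_per_mm.items():
    remaining_fraction = np.exp(-mu * depths_mm)
    print(label, np.round(remaining_fraction, 3))
```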
So, as often happens, we started by looking through different genomes
for candidate genes and stumbled across a class
of molecules in this cruxhalorhodopsin class, to use the
technical term, which did seem to have a redshifted
spectrum with respect to the original. And then thanks to decades
of structural work, both through point mutagenesis
and crystallography, we made a couple point mutations
that increased the current. And what we found was that
if we expressed the gene for this molecule
that we called Jaws because it came from the
shark strain of [INAUDIBLE], we could actually get
light from a red laser, shine that through
even an intact skull of an awake-behaving
mouse, and shut down neurons many millimeters
deep into the brain. Nathan [INAUDIBLE], when he
was a PhD student in the group, tried to do something similar
but for the activators. And so he did a very
large scale screen. He computationally looked
at over 1,000 plant genomes in a project that's headed
by [INAUDIBLE] Wong called the 1,000 plant project
and identified over 60 new channelrhodopsins
and expressed all of them looking for a function. As you can see from the
red x's, a lot of them didn't work at all,
but some of them did. And these are his
screen currents in the red, the green, the blue. And he found
exactly one of these after this enormous
search that responded well to red light, which
he named Chrimson. And Chrimson allows you to drive
neural activity in response to red light. We made a point mutant Chrimson
R that has better kinetics. And you can even use light
getting into the infrared. Here's 735 nanometer
light driving a neuron to spike in a slice of
the mouse visual cortex. So of course, you can
use Chrimson and red light
and deep into tissue, but it's also found use in
other areas that at the time, I didn't even
really think about. [INAUDIBLE] research
campus really wanted a better
optogenetic activator for drosophila-- fruit flies. And the problem
with fruit flies is if you use blue,
green, or yellow light, they have a startle response. They kind of flail their arms
and sort of freak out, I guess. But if you use
Chrimson and red light, then the effect is minimized. And he was then able to elicit,
through Chrimson and red light activation, behavior
in Drosophila. And so now it's in very
widespread use in the fly community, for example. Nathan, in this screen, also found
molecules that were very fast. So one that he
named Chronos, which is a channelrhodopsin
with very fast kinetics-- and so it's found uses in parts
of neuroscience where kinetics is important, like in
the auditory system, or in the stimulation of axons
that have high firing rates. And interestingly, these two
molecules, Chrimson and Chronos, have also been very
valuable because they can be used together. So if you look at the
photocurrent in the y-axis and the color on the
x-axis, you can see that-- of course here is
Chrimson, where it has a peak out
here in the orange, and you can drive it in the red. Here's Chronos in
the blue circles and then the original
channelrhodopsin 2 in the black triangles. But they're all recruitable,
at least to some extent, by blue light. So we made the
observation that if we use dim blue light
to drive Chronos, so dim that it wouldn't
really recruit Chrimson, and then use bright
red light to drive Chrimson, we could differentially
control the spiking of independent populations. So groups have now
used this to look at multiple synaptic
inputs to the same cell or how a neuromodulatory pathway
might affect a given excitatory pathway in the brain because
it gives you differential control over these pathways.
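As a rough sketch of the dim-blue/bright-red logic just described, the toy calculation below uses invented relative sensitivities rather than the measured action spectra, just to show why dim blue light recruits mostly Chronos while bright red light recruits mostly Chrimson.

```python
# Illustrative sketch (not measured data) of the two-color strategy:
# drive Chronos with dim blue light and Chrimson with bright red light.
# The sensitivities below are placeholders meant only to show why dim blue
# light spares Chrimson while red light spares Chronos.
sensitivity = {
    # opsin: {color: relative photocurrent per unit light intensity}
    "Chronos":  {"blue": 1.00, "red": 0.01},
    "Chrimson": {"blue": 0.15, "red": 1.00},
}

light = {"blue": 0.1, "red": 1.0}  # dim blue pulse, bright red pulse (arbitrary units)

for opsin, spec in sensitivity.items():
    drive = {color: spec[color] * intensity for color, intensity in light.items()}
    print(opsin, {c: round(v, 3) for c, v in drive.items()})
# Chronos is recruited mainly by the dim blue pulse; Chrimson mainly by the red one.
```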
So a lot of the work [INAUDIBLE] has done has been in
searching through genomes for interesting properties
like extreme shift of color or extreme kinetic performance. But what about getting
to the ultimate levels of spatial resolution? Could you get single cell,
single synaptic event, or single spike
control over circuitry? So we started collaborating
with an expert on holographic
neural stimulation-- Valentina Emiliani at the
Institute of Vision in France. And this is work
that [INAUDIBLE] and [INAUDIBLE]
triply spearheaded. So she builds
microscopes that look like this where you have a laser
and you bounce the laser off of a spatial light modulator
and basically project a hologram into the brain--
a three dimensional sculpture of light, if you will. And so we decided,
well, what if we could try to build
opsins that were optimized for this purpose? Importantly, we
also want to make sure the opsins are located
just at the cell body and not on all the
axons and dendrites. This is an idea that
several groups [INAUDIBLE] Frank [INAUDIBLE] and
[INAUDIBLE] Bolton had tried taking the
original channelrhodopsin and fusing peptides to
it to get it [INAUDIBLE] just at the cell body. Now, why is that? Well, even if you
holographically drive a cell, you're going to hit the axons
and dendrites of neighbors that come by. And by fusing a
peptide to it, you can localize it
to the cell body. So we thought, well, we
have all these new opsins. What if we do a double screen
and look for new opsins that enable very powerful control
when you activate just a cell body and also peptides
that will enable them to be targeted there? It turns out that
one of the molecules that Nathan had found
in his screen, CoChR-- which, if we had known
it was going to be cool, we would have come up
with a better name-- is a very powerful
molecule, with about an order of magnitude higher current
than the molecules [INAUDIBLE] channelrhodopsin 2. And so we thought
if we located just the cell body, that might help
make up for the lack of current because you're depriving the
axons and dendrites of all those currents that would
normally come to there. And then [INAUDIBLE]
found a peptide that, when fused to
CoChR, targets it just to the cell body for expression. Here's a sea of green
in this slice of cortex. And here, you can see cells
that are spaced with darker intervals between them. So why is that helpful? Well, if you record
a cell and then scan your holographic laser about,
when the opsins are everywhere, about a third of the
time, Valentina's team observed stray activation. But if you located the molecules
just at the cell body, then the effect went down
essentially to zero. So in summary, a
lot of this quest has been an [INAUDIBLE]
of luck, right? The molecules essentially
out of the box had speeds and
amplitudes and profiles that made them appropriate for
controlling neural activity. And in recent
years, we've really tried to push the tool box
out to their physical limits of performance-- maximizing
amplitude, accelerating speed, shifting colors, and
improving spatial precision. But it's interesting
to think about, well, what about the opposite? Can we learn from
this experience and do anything for the
opposite goal of imaging neural activity? Can we get neurons to light
up when they are active? So in this case, of
course, the natural world has not made us so lucky. There is no molecule
that all by itself will convert neural
activity into light with the right speed and
safety profile and efficacy. You know, what we got
lucky with in optogenetics did not translate to the
inverse problem of imaging. So naturally we
started thinking, well, if the natural world
won't evolve these things, why don't we build
a robot that will do the evolution
in the laboratory? And so when [INAUDIBLE]
and Erica Yung were postdocs in my
group, we decided to try to build basically
a robotic scientist. Why can't we build a robot
that would kind of do what we do when we're
screening for optogenetic tools but in an automated way? So how do you do that? Well, suppose you
have a bunch of genes. They can be obtained
from the wild, or they could be
mutants of an old gene that you want to evolve
in some direction. Some of the mutants might be
more interesting for your goal, and some might be less. And then we
transfect these genes into cultured mammalian
cells so that each cell gets one copy, a different mutant. And then you can use
an automated microscope to scan around and look for
cells and therefore molecules that have things of the right
speed and safety profile and efficacy and all
the things that we want for indicators that, for
optogenetics, the natural world provided. Then we bring in a robotic arm
developed by our collaborator, [INAUDIBLE]. And we can then pull out
the cells and therefore the genes that are interesting. Now it turns out that Adam
Cohen's group here at Harvard had made the serendipitous
discovery that the molecule archaerhodopsin 3 that I'd
mentioned earlier that we had discovered was a very
powerful neural silencer was actually a weakly
fluorescent voltage indicator. And his team went
on to make a mutant called QuasAr2 that was
brighter than archaerhodopsin 3 but still quite dim and not very
well localized to the membrane. And nevertheless, it's been
useful in cultured neurons for imaging voltage. So we thought, why don't we
try to do a very large scale-- and this might be one of
the largest directed evolution screens ever done in
mammalian cells, anyway. We did almost 10 million mutants
in two rounds of evolution. And let's screen for
multiple parameters. We want the molecule
to be bright and well-localized and
safe and photo-stable. Why screen for
multiple parameters? Well, if you're trying
to take a molecule and mutate it and then screen
for better mutants, if you push it in one direction-- say,
screening for brightness-- you might devolve it away
from the other properties that you seek. Evolution doesn't
pull any punches. It's just trying to
get the job done. And so here, you
can see brightness on the y-axis and
localization to the membrane, which is sort of a proxy of
safety as well as function. And each circle is
a different cell containing a different mutant. And indeed, you can
see cells in there from molecules that
are very well-localized but, you know what,
not that bright. And then there are molecules
that are much brighter, but, hey, they're no better
localized than the parent.
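The selection step of such a multi-parameter screen can be sketched roughly as below. The numbers are random stand-ins for per-cell image measurements, and the percentile cutoffs are arbitrary, but the idea is to keep only mutants that score well on brightness and membrane localization at the same time.

```python
import numpy as np

# Minimal sketch of a multi-parameter screen: keep only cells (and thus mutants)
# that do well on BOTH brightness and membrane localization, rather than
# optimizing one property at the expense of the other. The data here are
# random placeholders standing in for per-cell image measurements.
rng = np.random.default_rng(0)
n_cells = 10_000
brightness = rng.lognormal(mean=0.0, sigma=0.5, size=n_cells)
localization = rng.beta(a=2, b=5, size=n_cells)  # fraction of signal on the membrane

# Select cells in the top 10% on brightness AND the top 10% on localization.
b_cut = np.percentile(brightness, 90)
l_cut = np.percentile(localization, 90)
hits = (brightness >= b_cut) & (localization >= l_cut)
print(f"{hits.sum()} candidate cells pass both criteria out of {n_cells}")
```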
So with this, we did a couple rounds of directed evolution and found a molecule that
we named Archon, which is well-localized to the membrane.
In the lower left, it has good kinetics, which it inherited from
the parent QuasAr2. And on the right, it has good
changes in fluorescence and signal to noise. And so we gave it to groups
like Bernardo [INAUDIBLE] group, who did some measurements of
synaptic events in brain slice. They would stimulate in one
layer of the cortex and image synaptic events in a
different layer of the cortex and just focus on
the lower left. And black is what you see when
you record with a [INAUDIBLE]-- sort of a ground
truth, if you will. And the magenta are
the unaveraged traces imaged on a microscope. So it turns out that this is
a red fluorescent molecule. You can shine 630
nanometer light, which is sort of the color
of a laser pointer, and it'll emit redder
light around 660 nanometers and longer. More recently, [INAUDIBLE]
and Seth [INAUDIBLE] from Michelle Hahn's
group have been expressing this in
awake behaving mammals and being able to
image normal activity in multiple regions
of the brain-- motor cortex, visual cortex, striatum. These traces look like
[INAUDIBLE] traces, but they're being
imaged on a microscope. In this case, you
can see in upper left an epifluorescent or
one photon microscope. And you can even see a
population of neurons in an awake behaving animal
here in the mouse hippocampus and look at the dynamics of
these cells in a local network. Importantly, since this molecule
is a red fluorescent molecule, you can use it in conjunction
with blue light optogenetics and drive neurons
while you image them. So you can imagine [INAUDIBLE]
perturbing neural activity in a closed-loop way while looking
at the voltage of the cells as well. So in summary, we want to do for
imaging what the natural world had done for optogenetics. And turns out that
an optogenetic tool can be mutated to
become a pretty useful fluorescent indicator
of neural activity-- in this case, the
voltage of the membrane. But ideally, you'd be
able to image a circuit and perturb it with also
some knowledge of how these cells are connected
to other cells-- upstream cells that bring in
inputs, downstream cells that have outputs. How do you know what
the network looks like? And so the last
couple minutes, I want to talk about some
newer work we've been doing, which is, I think, going to
be very helpful in building a pipeline for generating
new hypotheses to be tested with optogenetics. And this is a way
we developed to make maps of brain circuitry. Now why is this hard? Well, many people are
using electron microscopy to make maps of brain
circuitry, some here at Harvard who have pioneered the
field of connectomics, looking at large scale electron
microscopy maps of the brain. But it's very hard to
see molecular information with electron microscopy. There's also super
resolution microscopy. STORM microscopy was
invented here at Harvard. But it's difficult
to scale this up to large 3D structures because
of the physical properties of super resolution microscopy. So starting with two then
grad students, Fei Chen and Paul Tillberg-- and now about half our
group works on this, we decided, well, what
if instead of zooming in on the brain, we could
physically blow it up? What if you could install
a dense spiderweb-like mesh of swellable material
like the stuff in baby diapers around and between all
the biomolecules of a cell, soften the specimen by treating
it with chemicals, add water, and could you blow up the
brain and make it bigger? And so this owes a debt to a
bunch of old lines of research. Actually, my MIT
colleague Toshi Tanaka, who unfortunately passed away
relatively young from a heart attack-- but in
the early 1980s, he was studying the physics
of these highly swellable polymers. So in this cartoon, you
see the white polymer mesh. You add water, and it's
drawn in through osmosis. The polymer swells. And it's a highly charged
polymer, importantly. So the physical growth can be
enormous in a very short amount of time. And he published this
beautiful paper studying the sort of phase transition
[INAUDIBLE] physics as the polymer
increases by 1,000 fold in volume in a
matter of minutes. You also have to
get the polymer in, and there's also a
long history to that. People like Peter [INAUDIBLE]
were using uncharged hydrogels like polyacrylamide and taking
specimens and embedding them in these polyacrylamide
hydrogels to improve their imaging. So if you could synthesize
this dense spiderweb-like mesh but make it a charged polymer,
one could try to take a brain cell like the one on the left
and pull the building blocks of life apart from each other
to make something more like the one on the right-- the constellation
of biomolecules hovering in space, but with
their relative organization preserved. So how do we do it? Well, we had to invent
a couple of chemistries. In this cartoon, the
proteins are shown in brown. And we had to invent handles
that would bind to DNA, RNA, proteins-- and now
we're working, even, on sugars and lipids-- and put little anchors or
handles on all of them, so we can apply force to
them and pull them apart. Then we have to
make the polymer. And so we use free
radical polymerization to synthesize the
polymer hydrogel mesh, except we use these charged
monomers, sodium acrylate, to form a polyacrylate mesh. And the spacing between
the polymer chains is very tiny, around the
size of a biomolecule or so. And when these chains encounter
the handles or anchors, they form a bond. Finally, we soften the tissue
by adding in detergents or heat or even enzymes
to chop things up. And then we add water. The polymers will swell-- as
Tanaka had beautifully worked out the physics long ago-- but this time, the biomolecules
will come along for the ride. So we published the
initial discovery that we could evenly expand the
biological system back in 2015. And Panel B is a
piece of mouse brain. The polymer is very, very dense. So spacing is, again,
at biomolecular scale. After the process,
this piece of tissue grows until it's like
the one on the right-- about 100 times
bigger in volume.
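As a back-of-the-envelope check of what that expansion buys you: a roughly 100-fold volumetric expansion corresponds to a linear factor of about 4.6, so a conventional diffraction-limited resolution of roughly 300 nanometers (an assumed, typical value, not a number from the talk) maps to an effective resolution on the order of 65 nanometers.

```python
# Illustrative arithmetic only: the linear factor implied by a ~100x volume
# expansion, and the effective resolution if the microscope's diffraction
# limit is assumed to be ~300 nm.
volume_factor = 100.0
linear_factor = volume_factor ** (1.0 / 3.0)      # ~4.64x linear expansion
effective_resolution_nm = 300.0 / linear_factor   # ~65 nm effective resolution
print(round(linear_factor, 2), round(effective_resolution_nm, 1))
```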
Now, by design, we made the mesh so dense and so evenly synthesized that we wanted it
to be an even expansion process. But this is biology.
It's not enough just to design it. You have to prove it. And so we and many
others have been doing very detailed
control experiments where we take a pre-expansion
image with the classical method of nano-imaging like Storm. And then we take a
post-expansion image after we blow up the
specimen and compare them. And the distortion is not
zero, but it's really small-- maybe around a couple
percent over length scales of tens to hundreds of microns. So here on the left, you can
see a piece of the mouse brain-- the cortex and hippocampus. This is a Thy1-YFP mouse that
Guoping Feng, and Josh Sanes, and colleagues made
many years ago. And we're going to zoom
in from top to bottom. That white square, we blow up. And you can see two cell
bodies and some purplish dots that are synapses that
we antibody stained. And we're going
to zoom in again. And the purplish
dots get blurry, because you've hit
the resolution limit of our confocal microscope. That's all before expansion,
but after we expand, you can now see cleanly the
pre- and post-synaptic sides of these neural connections. Blue is an antibody against the
pre-synaptic protein bassoon. Magenta is
representing the image of-- taken with an
antibody against Homer1a, a post-synaptic protein. And the distance between
these two protein densities is the same that Catherine
Dulac and Xiaowei Zhuang measured many years
ago with STORM microscopy. Except now you can use hardware
that most groups already have. [INAUDIBLE] Group,
we worked with to try to apply some
light sheet microscopes that their group had invented
to expanded brain tissues. And the effect is that we
now have a several order of magnitude speed up over
equivalent resolution competing technologies. It's just a matter
of engineering to make the
microscopes go faster. And so this was work
that [INAUDIBLE], and [INAUDIBLE] triply
spearheaded across our group and Eric Betzig's group. Imaging mitochondria and
lysosomes at the top. And myelin at the bottom. We can look at synapses,
and dendritic architectures, and axonal architectures
across the thickness of the [INAUDIBLE] cortex. And our hope is that we might
be able to have a 50,000-fold speed up just by further
engineering, hopefully, not too many months from now. So the beauty here is
that you can really image at scale across
extended neural circuits, but without losing sight of
the nano-architecture of what's in a brain. So here is the same
color code as before. We have synaptic proteins
in blue and magenta. And we now have YFP in yellow. And now we're kind
of at a millimeter scale, but we can zoom in and get very
close to individual synapses. And this is kind
of a long movie. And in the interest of time,
I'm going to skip to the part here where we're going to start
sort of zooming out and seeing more of the context. And you can then again
zoom in and see the detail. This is a movie that we made
of an entire fruit fly brain where the dopaminergic
neurons are expressing a [INAUDIBLE] protein. And I just like it because it
feels like a roller coaster as you fly through it. We're going to go right
through the ellipsoid body right there. And now we're going to go out
into the more lateral sides of the fruit fly brain. And I hope you can see that
we can see individual axons and dendrites, but we
can also zoom out and see the entire brain as well. So why is it helpful? Well, you can really start to
look at the wires of the brain. Here's another Harvard
technology, Brainbow, from Jeff Lichtman,
and Ross Haynes, and others where you
express fluorophores in combinations in brain cells. So this blue cell
got one fluorophore delivered by a virus. This green cell got
a different one. This aqua one might have
gotten one copy of each. And if you zoom in on two
axons-- we're here in the mouse hippocampus-- you quickly
hit the resolution limit of the microscope. And it's hard to
see these axons. It's a blur, right? What's this green banana shape? But after you expand,
you can cleanly resolve the individual
axons of this bundle. And so we and others
are now trying to design machine-learning
techniques to automatically trace neural
circuits that are color coded with a strategy which
they call a Brainbow and also to use expansion to
give the resolution at scale. So to summarize, we discovered
that you can physically magnify biological systems. And this technique has really
started to become popular with people making discoveries
published every week in a wide variety of species. Not just brain cells-- Giardia parasites. In the lower left, E. coli. In the upper right, planaria
and kidney specimens. And the list goes
on, and on, and on. So our last slide is-- what
I really would like to see is if we can build
these into a pipeline. Suppose with
expansion mapping, you can make comprehensive
maps of brain circuits. And then you can go in and
observe the neural activity using fluorescent indicators
of voltage and other signals that neurons perform. And then go in with
optogenetics and use that to do a causal test
of what a pattern means. Can we assemble this into a
pipeline that could, who knows, maybe even yield
computational models of how neural circuits
work or how they go wrong in dysfunction? So I think of the knowledge
along the way, all the people who have led specific
projects, but I'll put up this slide, which I
don't have time to go through. I will just acknowledge
those people within the group at
the top in our alumni who helped with these projects
and an even longer list of people in the
middle who collaborated with us to make this a reality. It's really an omnidisciplinary
arena, neuroscience, nowadays. So I hope you can use these
techniques in your group. We have a big
culture of teaching. And feel free to email me
if you have any questions. [APPLAUSE] Thank you very much, Ed. So now we have about
a 20-minute break. And we'll come back
with the second session. [UNINTELLIGIBLE CHATTER] [SIDE CONVERSATION] OK, if everybody can
take their seats, please, we're ready to start again. Welcome back for the second half
of our Alpert Prize Symposium. We're going to follow
the same format with the two awardees
bracketing a talk by a postdoc. So our first talk is
by Gero Miesenbock. Gero's the Waynflete
Professor of Physiology at the University of
Oxford and the director of the Center for Neural
Circuits and Behavior. He's from Austria. And he did his medical
degree at Innsbruck, during which he studied really
classical physiology and then took a remarkable
reductionist turn and came to the United
States to work at Yale with Jim Rothman on the
mechanisms of vesicle secretory pathways in cells. And what I like
very much about Gero is that when he faces
a problem in biology-- and I think he's
very much motivated by biological problems. When he faces a
problem in biology, he creates a tool to solve it. So when he was in Jim's lab, he
invented what I think is really the first GFP-based sensor
of a cellular process-- or at least one of the first-- which was the
SynaptopHluorin, in which he exploited the pH sensitivity
and modified the pH sensitivity of GFP in order to create a
protein whose fluorescence changed from the
acidic environment to the extracellular environment
in the secretory chain. This has been very important as
a tool that's still used today to monitor, among other things,
the release of neurotransmitter from neurons. When he set up his
own laboratory, he chose something in
between the secretory pathway and animal
physiology-- or rather, mammalian physiology--
and studied the Drosophila nervous system and
tried to understand how the brain of that smaller
animal controls its behavior. Again, faced with problems,
he invented several forms of optogenetics. One was to reconstitute,
as we heard before, the entire
visual transduction pathway of the fly
in neurons to render those cells sensitive to light. It was a great demonstration. I don't think you really used
it for biological discovery. So then he went on
and invented yet a second approach, which was
to exploit ion channels that were not present in the fruit
fly for which he could design light-activated ligands. And he's used that
to a great extent to make fundamental discoveries
about the relationship between activity and
behavior in the fruit fly. And I have great
admiration for his work. I think he's going
to tell us today about some biology including
some wonderful work on the basis of sleep
drive in fruit flies. Thank you. [APPLAUSE] Thank you very much,
Bernardo, for this very, very kind introduction. It's obviously an enormous honor
and a huge pleasure to be here. In fact, the honor
and the pleasure is so large that I
decided to share them with my doppelganger. This is Dr. Gero. [LAUGHTER] We have more in common
than just a first name. He is also a scientist. And a mad one in this Japanese
comic called Dragon Ball. He strives for world
domination just like I do. And if you look
carefully, you can see that his skull has been
replaced with a transparent Plexiglas dome, of course,
so that the function of specific genetically-targeted
neural circuits in his brain can be controlled with light. And that's what
today is all about. Now, what motivated the
invention of optogenetics some 20 years ago was the idea
that a technology like this would open three experimental
doors for neuroscience that had previously been locked. The first of these
doors was the ability to pinpoint the neuronal
causes of behavior with much greater
precision than what had been practical previously. And this idea, really, reflects
my scientific upbringing as Jim Rothman's
postdoc where the mantra that I was exposed
to on a daily basis was reconstitution,
reconstitution, reconstitution. In other words, if
you are a biochemist, you want to understand how
a biochemical process works. What you do is you purify
the responsible actors and you put them back together. And you reconstitute the
biological process from these pure components. So when I started my
own lab, I thought, what would be the equivalent
of biochemical reconstitution for a neuroscientist? And, of course, that equivalent
is to metaphorically purify the electrical activity patterns
that underpin our mental lives, play them back into
a nervous system. And if you, in this way,
can reconstitute perception, action, emotion, thought,
then you have a credible claim that you really understand
how these mental events are actually based in the physics
of the nervous system. The second
experimental door that I thought optogenetics
would unlock was the probing for
neuronal connections, which is classically done
in a painstaking way in paired electrode searches. And in more modern approaches,
equally painstakingly, through large scale
reconstructions of neural circuits. One alternative
approach, of course, would be to replace one of
the stimulating electrodes with a light beam that can
be rastered across tissue. And then just listening
with one electrode whenever the light beam
hits a connected partner, and in this way, unravels
synaptic connectivity. And the third experimental
door, of course, is the test of
mechanistic ideas. If you have a conjecture
about how a system works, then, of course, the
only way to figure out whether that conjecture
is right or wrong is to interfere in a targeted
fashion in the process. So for much of the
rest of my talk today, I will relate
some of our recent work on a biological problem in
which optogenetics has indeed unlocked all three of these
experimental doors for us. And that problem is
the biological function and neuronal control of sleep. Sleep is one of the great
biological mysteries. Each night, we disconnect
ourselves from the world for seven or eight hours-- a state that leaves us
vulnerable and unproductive. And yet, despite
these risks and costs, we still have no clue as
to what sleep is good for. We are trying to get at the
biological role of sleep by understanding its neuronal
regulation based on the premise that somehow the brain's
sleep control systems must respond to molecular
changes that are intimately linked to the core
function of sleep. It's widely thought that
there are two of these control systems in our brains
that are symbolized on this classical diagram by two
different forms of oscillation. The sine wave represents the
well-understood circadian clock which oscillates
in synchrony with predictable external
changes that are caused by Earth's rotation. As such, it's a purely
adaptive mechanism that makes sure we
do our sleeping when it suits our lifestyles best. But understanding the
clock is unlikely to speak to the deeper mystery
of why we need to sleep in the first place. The solution to that
mystery, we believe, will come from understanding
the second control system-- the sawtooth oscillation that's
superimposed on the circadian clock. And that sawtooth oscillation
represents the sleep homeostat. The homeostat measures
something that happens in our brains or our
bodies while we're awake. That something accumulates
or depletes-- logically, it's the same-- during waking. And when a certain threshold
is reached, we go to sleep. The process resets itself
while we're asleep, and then the cycle
begins anew when we wake up on the next morning. We know a lot about
the circadian clock. And this is really
the Rosetta Stone that broke that problem open,
the discovery by Seymour Benzer and his grad student Ron
Konopka almost 50 years ago of fruit flies whose
circadian clocks ran abnormally fast or slow. And from that discovery,
then followed, through the work of
many laboratories over the past five decades,
a pretty complete molecular, cellular, and
systems understanding of circadian timekeeping. This slide, in contrast,
summarizes pretty much everything we know about-- [LAUGHTER] --the molecular basis
of sleep homeostasis. And it's an
overstatement, but maybe not too gross an overstatement. And my goal for the rest of the
next 20 minutes or so will be to draw at least a few outlines
on that blank canvas. Conceptually, we know how
the homeostat must operate. It's a relaxation oscillator,
a bi-stable system that switches between a
fill and discharge mode where waking corresponds to
fill mode where something called sleep pressure builds, until a tipping
point is reached and the system flips
into discharge mode. And then the accumulated
sleep pressure is dissipated.
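A minimal sketch of that fill-and-discharge picture, with made-up rates and thresholds, is below. It is only meant to show how two thresholds plus a bi-stable state variable produce the sawtooth being described, not to model the real homeostat.

```python
# Toy relaxation-oscillator sleep homeostat: pressure builds during waking and
# dissipates during sleep, with two thresholds giving the bi-stability
# (hysteresis). All rates and thresholds are arbitrary placeholders.
fill_rate, drain_rate = 1.0, 2.0      # pressure units per hour (illustrative)
upper, lower = 16.0, 1.0              # flip thresholds (illustrative)

pressure, awake, dt = 0.0, True, 0.1  # start awake; time step in hours
trace = []
for step in range(int(48 / dt)):      # simulate two days
    pressure += (fill_rate if awake else -drain_rate) * dt
    if awake and pressure >= upper:
        awake = False                  # tipping point: flip into discharge mode
    elif not awake and pressure <= lower:
        awake = True                   # reset complete: wake up and fill again
    trace.append((step * dt, pressure, awake))
# 'trace' now holds the sawtooth of sleep pressure and the sleep/wake state.
```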
Now, at the end of my talk, I hope
to propose a molecular interpretation of what sleep pressure is, where in the
brain it accumulates, and what the processes are that
underlie this bi-stability, this switching between a
fill and a discharge mode. The story begins, like Seymour
Benzer's and Ron Konopka's, in fruit flies with the
discovery by a former postdoc in the lab, Jeff Donlea,
when he was actually a graduate student with
Paul Shaw at Washington University in St. Louis of
neurons in the brains of fruit flies that exert a powerful
influence over sleep and waking. There's about two
dozen of these neurons. They're labeled here by a
promoter-enhancer element called R23E10. So whenever you see
that string of symbols, you know that a
genetic manipulation is targeted selectively
to these two dozen or so out of the
100,000 cells that make up the fruit fly's brain. The neurons project
to this inverted V structure in the midline. This is one particular
layer of the fan-shaped body of the central complex. Why such a small
number of neurons can exert such a powerful
influence over the probably most dramatic global
state transitions we undergo on a daily
basis is another mystery, but a topic for
a different talk. Now, together, Jeff
and I discovered that these neurons represent
the output arm of the sleep homeostat. The neurons themselves,
I should say, were originally identified
in an activation screen where the brains of flies
were randomly peppered with actuator molecules. So this is one example
of an optogenetic or, in his case, thermogenetic
application where the neuronal
substrates of behavior can be pinpointed in an
almost classical forward genetic screen. So the way we do
typically our experiments is that we
[INAUDIBLE] fix a fly. We let it walk on a spherical
treadmill, a little Styrofoam ball, whose rotations we read
out with an optical computer mouse. And since there are
no documented cases of somnambulism
in flies, we know that whenever the
ball is spinning, the fly must be awake. What you can't see is that the
head capsule is actually open. And we've inserted
a patch electrode into one of these 24
sleep control cells and expressed an
optogenetic actuator in the entire
population of neurons. So we can control the
electrical activity of these neurons
optically, and at the same time, have one recording electrode
as a measure of one member of that population. And this is now an experiment
lasting for half an hour. You'll see that the
fly starts out awake. It's moving along happily. The ball is spinning. These are the rates tick marks. And the sleep control
neuron is completely silent. At about three or four minutes,
we switch on the lights. You can see that the neuron
whose activity we are recording begins to emit
electrical impulses. And all movement virtually
instantaneously stops. At about 19 minutes or so, we
switched the lights off again. The sleep control
neuron falls silent. And movement quickly resumes. So we have isolated
a switch in the brain of the animal that allows us to
toggle it into and out of sleep on command. Now, during many
of such recordings, we found that when
we targeted one of these sleep-inducing cells
with our patch electrodes, these neurons were typically
found in one of two states. In one state, shown here on the
left, where the neuron behaves like you would expect a
well-behaved neuron to act, you see that it
responds to injections of depolarizing currents
with action potentials whose frequency grows in a graded
fashion with the amplitude of the injected current. The neuron on the
right, in contrast, does not initially look
like a neuron at all. You can see we can still
depolarize this cell to positive membrane voltages
and still not squeeze a single electrical
impulse out of that neuron. It's not just the active
membrane properties that have changed. It's also the passive
membrane properties. If you compare the size
of the voltage steps that are elicited by standard-sized
current injections, you can see that the voltage
deflections on the left are very large, suggesting
that this neuron opposes the injected current
with a large resistance, whereas the voltage
deflections on the right are much, much smaller. Also, the neuron
takes much less time on the right to settle into
a new equilibrium membrane potential after a current
step, whereas on the left, it takes much longer. So this combination of a
short membrane time constant and the low-input
resistance on the right is almost diagnostic of the
opening of a current leak, or a current shunt.
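The passive-membrane logic behind that diagnosis can be sketched with a simple calculation: input resistance is the reciprocal of total membrane conductance, and the membrane time constant is that resistance times the capacitance, so opening a leak conductance shrinks the voltage deflection and the time constant together. The numbers below are illustrative placeholders, not values from these recordings.

```python
# Sketch of why a shunt produces both a small voltage deflection and a short
# time constant. For a passive membrane, R_in = 1/(g_rest + g_shunt) and
# tau_m = R_in * C_m, so opening a leak conductance lowers both together.
C_m = 100e-12          # membrane capacitance, farads (illustrative)
g_rest = 2e-9          # resting conductance, siemens (illustrative)
I_step = 20e-12        # injected current step, amperes (illustrative)

for g_shunt in (0.0, 10e-9):          # shunt closed vs. open
    R_in = 1.0 / (g_rest + g_shunt)   # input resistance
    tau_m = R_in * C_m                # membrane time constant
    dV = I_step * R_in                # steady-state voltage deflection (Ohm's law)
    print(f"g_shunt={g_shunt:.0e} S  R_in={R_in/1e6:.0f} MOhm  "
          f"tau={tau_m*1e3:.1f} ms  dV={dV*1e3:.1f} mV")
```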
And I'll show you in a few minutes what the molecular basis
of that current shunt is. Now, when we saw this, we, of
course, immediately thought, well, maybe this
is the mechanism of homeostatic sleep control. Maybe these neurons
naturally switch between electrically
active and silent state, depending on whether the
fly is asleep or awake. And our sampling of
flies, whose sleep histories have been manipulated,
confirmed this prediction anecdotally. But of course, in
order to really nail this point, to
demonstrate that a neuron is capable of transitioning
between these two states as a function of
its sleep history, one would like to
be able to control that transition directly. And in order to
do that, one would need to know a signal that
normally acts on these neurons, and actuates the switch. Now what might such a signal be? Well, a clue to the
identity of that signal had come in the
first experiments in which the
behavior of an animal was controlled optogenetically. These experiments were done
by my then-graduate student, Susana Lima, at Yale in 2004. And what Susana had done
what she had expressed light-gated ion channels
in all dopaminergic neurons in the brains of flies and
then recorded their movement trajectories for two minutes
before and after switching on dopaminergic activity
in these animals. Here, you see examples of
these movement trajectories in a circular arena
of four animals before and after activation
of the dopaminergic system, and you clearly
see that dopamine has a highly arousing
effect on flies, as it, of course,
does on mammals. Most psychostimulants-- cocaine,
amphetamine-- of course, act by inhibiting reuptake
of dopamine at the synapse, and thereby elevate
synaptic dopamine levels. So one potential
signal that should act on these
sleep-inducing cells is an arousal-inducing
dopaminergic projection. And there is, in fact, a
class of dopaminergic neurons that extend their processes
exactly into the same brain region that's also inhabited
by the sleep-inducing neurons. In fact, the two neurons
shadow each other so closely that the
question naturally arises whether they are
synaptically connected. And optogenetics
gives us the tools to probe for these connections
by recording, with a patch electrode from one of the
sleep-control neurons, while manipulating
the activity of the putative
presynaptic partner, the dopaminergic neurons. So this is now
such an experiment where we start with our
sleep-inducing neurons in the electrically
active on state, and then we switch on the
dopaminergic projections that innervate these neurons
optogenetically. We can even predict
what the effect of such an arousing dopaminergic
signal on these neurons should be. It should, of course, silence
them and switch them off. And this should be the mechanism
that underlies awakening. And this is exactly
what happens. You can see that after
dopamine delivery, the neuron falls silent, the
action potentials disappear. Also, the passive membrane
properties change, the input resistance
drops, the membrane time constant shortens,
and importantly, if we hold the
recording long enough, which we can do
in some cases, we can see that these
changes are completely reversible after an
extended time frame. So this suspension of electrical
activity is temporary. It's part of the normal duty
cycle of the neurons' activity, and not an artifact
that's brought on by our experimental
manipulations. If we use the passive membrane
properties' input resistance and time constant as a
measure of the kinetics of these changes, we can see
that the switch happens rapidly with the time constant of
about one minute, which is, of course, way too
fast to be accounted for by the production
of new ion channels, but must instead,
involve the modulation of the existing channel
repertoire of these neurons. And we can also demonstrate
that the action of dopamine on the sleep-control
neurons is direct because we have discovered
the dopamine receptor in these neurons that
mediates the effect. And if we remove that
receptor selectively from these neurons
using RNAi, the neurons become resistant to the
dopaminergic signal, and the flies become unable to wake up. They literally doze
away their existences, spending 23 and 1/2
hours a day asleep. Now, the ability to control
this excitability switch of the sleep-control
neurons also gave us the means to dissect
the underlying biophysical mechanism. In the interest of time, I will
only summarize the results. What we discovered is that there
are two potassium channels that get modulated antagonistically
between the electrically active on state, which
corresponds to the sleep state, and the electrically
silent off state, which corresponds to the awake state. There's the classical
voltage-gated Kv1-channel shaker, which gets upregulated
during electrical activity and a [INAUDIBLE]
potassium channel that we've discovered
and termed "Sandman" that gets translocated
into the membrane of sleep-inducing neurons
when dopamine switches the electrical activity off. And it's the potassium current
through this leak channel that underpins the
shunt that you've seen in the electrophysiology. So that's responsible for the
short-membrane time constant and the low-input
resistance of these neurons. Now, knowing the
biophysical basis of this transition
between sleep and waking, then allows us to reframe the
relatively vague biological question, what is the
biological purpose of sleep, into a mechanistically
well-defined problem. We can ask, what
signal or process switches the sleep-inducing
dorsal fan-shaped body neurons on. And in fact, we can
make our question even more mechanistically
precise because we know the crucial role that is
played by these two potassium channels. Any sleep-inducing signal
that's sensed by these neurons must ultimately act by
upregulating the shaker current and by driving the
internalization of the Sandman channel that acts as a deterrent
on the electrical output of these neurons. I'll focus, for
the rest of today, on our understanding of
the regulation of shaker, of which there has been
more progress recently than in the Sandman control. Now shaker, like many
voltage-gated potassium channels, is a
beautiful structure composed of two different
types of subunits. There is a pore-forming alpha
subunit shown here in gray, to which is appended
on the cytoplasmic side a beta subunit
shown here in blue. Now if you zoom in closer
on the beta subunit, you see that it actually
has a small molecule cofactor bound shown
here in red, which is the nicotinamide NADPH. This structure solved
by Rod MacKinnon that revealed this enzymatic
nature was not unexpected because when the first of these
potassium-channel beta subunits were cloned some 25 years
ago, the sequences suggested that they are actually enzymes,
specifically oxidal reductases. And that then
raised the question, are these molecules
voltage-controlled enzymes, or are they redox-controlled
ion channels? I will present you evidence that
they are certainly the latter and that their ability
to sense changes in cellular-redox chemistry
is an integral component of the regulation of sleep,
and perhaps even causally tied to the biological
function of sleep. I will also argue
that it may actually be the interplay between
the pore of the channel and the active
site of the enzyme that's the fundamental
accounting principle that underlies sleep homeostasis. It was also noted
quite early on that even if these molecules,
these potassium-channel beta subunits, clearly look
like aldo-keto reductases, they are terrible enzymes. They have very, very
low turnover numbers. And one of the
structural reasons is evident in this
structure here. If you look
carefully, you can see that the binding cleft,
in which the NADPH sits, is almost closed in
a latch-like fashion by a tryptophan residue that
locks the cofactor in place. And it's this obstacle
to cofactor exchange, that slows down the
turnover of the enzyme. We think that this is an
absolutely essential feature for the ability of these
neurons to monitor changes in sleep pressure. Now in fruit flies,
the Kv beta subunit is a protein called
"hyperkinetic," which Chiara Cirelli and
Giulio Tononi discovered more than 10 years ago causes insomnia when mutated, just as mutations
in the alpha-subunit shaker also lead to insomniac flies.
But here, we've reproduced these experiments showing that homozygous
hyperkinetic mutant flies are indeed insomniacs,
and that we can rescue these insomnia
of the hyperkinetic mutants by restoring wild-type
Rod protein function just in these 24
sleep-regulatory neurons. So this points to these cells
as the sleep-relevant site the action of the protein. Now surprisingly, if you
use a rescue construct that carries a
single-point mutation that allows normal expression,
folding, association of the beta subunit
with the channel, but abolishes its
catalytic activity, the rescue no longer works. So the insomniac flies
remain sleepless. This suggested to us that
hyperkinetic's sleep-regulatory role must be tied to its ability
to bind this cofactor NADPH and sense changes in
cellular-redox state. From that inference, then
followed two predictions. The first one is that
changes in redox chemistry are expected to accompany
changes in sleep pressure, and second, that
if we could somehow perturb the redox chemistry of
these sleep-control neurons, that should have
consequences for sleep. From this inference, and
these two predictions, and our knowledge of
intermediary metabolism, then also follows the conclusion
that the dFB neurons probably monitor these redox processes
as a gauge of energy metabolism because this
is, of course, where redox chemistry is ultimately
determined, specifically in the way electrons
that are food-derived are handled in the
electron-transport chain of mitochondria. So when we stumbled into this
particular area of research, we certainly needed
a little refresh in mitochondrial
electron transport. And I suspect that something
similar may be true for you. So here's a very
simple refresher of mitochondrial
electron transport. We have three proton-pumping
complexes in the inner mitochondrial membrane-- one, three, and four. One accepts
food-derived electrons, mostly from the Krebs
cycle, but also, from the oxidation of fatty
acids in the form of NADH. And these electrons
are then handed off in a very carefully
controlled fashion-- because of the explosive nature of combustion with oxygen-- from one complex to the other
using two mobile carriers-- ubiquinone, or Q, between
complexes one and three and cytochrome c between
complexes three and four. The proton gradient
that's built up across the inner
mitochondrial membrane is then, of course, used by the
proton-powered turbine, the ATP synthase, which you
see on the right, which phosphorylates ADP to ATP. So what you see
here is a condition where the ATP demand is high. There is a high level of ADP,
and there's a sufficient supply of NADH fuel. So demand and supply
are in balance. But when that is not the case-- so when you have an
overabundance of NADH, but ATP reserves that are
full, and the proton motive force that is large-- then the ATP
synthase slows down. Electrons still get stuffed
into the transport chain at complex one, but
they have nowhere to go. They accumulate, mostly
in the ubiquinone pool, and start to transfer
directly to molecular oxygen, and produce the oxygen-free
radical, superoxide.
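Just to put the key chemistry of this refresher in one place-- this is textbook biochemistry, nothing specific to our system-- electron entry at complex one and the one-electron leak to oxygen look like this:

```latex
% Electron entry at Complex I (proton pumping omitted):
\[ \mathrm{NADH} + \mathrm{H^{+}} + \mathrm{Q} \;\longrightarrow\; \mathrm{NAD^{+}} + \mathrm{QH_{2}} \]
% One-electron leak from the backed-up ubiquinone pool to oxygen yields superoxide:
\[ \mathrm{O_{2}} + e^{-} \;\longrightarrow\; \mathrm{O_{2}^{\bullet -}} \]
```

So we would predict that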
these sleep-inducing neurons during the state of waking,
when, as you remember, Sandman is inserted
into the membrane and shunts their electrical
activity-- so prevents them from producing energetically
costly action potentials. But the animal
being awake has just had its breakfast or
its lunch and therefore, has ample caloric
reserves that lead to exactly these conditions: that electrons are fed into the
mitochondrial transport chain, but somehow, there is little
demand for ATP synthesis, and so that should render
these neurons particularly prone to mitochondrial
oxidative stress. To test this idea, we
filled the mitochondria of the sleep-inducing
neurons with a protein called "MitoTimer," which is a derivative of the green fluorescent protein, whose chromophore converts irreversibly from
green to red as it's oxidized. So this is sort of an
integrative indicator of mitochondrial oxidative burn. We expressed this protein in these 24 sleep-inducing neurons, and then imaged their dendritic fields. What you see here are two-photon stacks through the dendritic fields of [INAUDIBLE] flies whose sleep histories differed. And you can see that in flies
that have been kept forcefully awake-- that's just the top row-- there's a clear redshift of
the MitoTimer fluorescence, suggesting that these
sleep-deprived animals indeed suffer a larger degree
of oxidative stress than well-rested
flies at the bottom.
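For those wondering what that redshift amounts to quantitatively, the readout is essentially a red-to-green fluorescence ratio averaged over the imaged dendritic field. Here is a hedged sketch of that kind of computation-- the array names, threshold, and numbers are made up for illustration, not our analysis pipeline:

```python
# Sketch of an integrative oxidation index for MitoTimer: the mean red/green
# ratio over a crudely segmented dendritic-field ROI.  Illustrative only.
import numpy as np

def mitotimer_redshift(red_stack, green_stack, threshold=50.0):
    """Mean red/green ratio over voxels bright enough to belong to the ROI."""
    roi = (red_stack + green_stack) > threshold        # crude ROI mask
    ratio = red_stack[roi] / np.maximum(green_stack[roi], 1.0)
    return float(ratio.mean())

# Toy comparison: a "sleep-deprived" stack with relatively more red signal.
rng = np.random.default_rng(1)
green = rng.uniform(0.0, 200.0, size=(5, 64, 64))
print("rested    red/green ~", round(mitotimer_redshift(0.3 * green, green), 2))
print("deprived  red/green ~", round(mitotimer_redshift(0.7 * green, green), 2))
```

Now what we also noted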
is that when we measured the basal sleep of flies that expressed this reporter protein in the mitochondria, just in these 24 sleep-control neurons, there was a small but significant observer effect present, namely flies that had MitoTimer in their mitochondria lost a small but significant amount of sleep-- on average, about two hours of daily sleep. And we suggest that
this reflects the fact that as MitoTimer is
oxidized, it actually acts as a buffer for
oxygen-free radicals. And it's the consumption of
these oxygen-free radicals that's reflected in
a reduction in sleep. To test this notion
more clearly, we looked for better
tools to do this. And probably the best there is, is a plant-derived molecule. So this is another
theme, I guess, today, that many of the best tools
come from unexpected parts of the kingdom of life. So many plants have
bifurcated mitochondrial electron-transport chains with
a second terminal oxidase. Our terminal oxidase
is complex IV. And plants have an
alternative oxidase, called AOX, that taps directly
into the ubiquinone pool and acts as an
overflow valve when there are too many electrons
accumulating in that pool. So it's not an uncoupler. It doesn't interfere
with energy metabolism. It simply takes electrons
that are surplus and detoxifies them
by transferring them to molecular oxygen
and producing water. So when we introduce
this particular molecule into the inner mitochondrial
membrane of these 24 neurons, you see that the sleep
loss was dramatic, almost eight hours per day. So capping mitochondrial
reactive oxygen-species production at the
source indeed seemed to ease the pressure to sleep. Now in animals, the typical
antioxidant defenses are two enzymes-- superoxide dismutases. We manipulated both; I'm showing you the results with just one, superoxide dismutase 1, the cytoplasmic form. Expressed in its native, antioxidant form, it has the predicted effect, namely a reduction in sleep-- but there is also
a point mutation that turns the antioxidant
into a prooxidant, and introducing this
particular variant has the opposite effect,
namely it increases sleep. But the increase
in sleep is blocked if we remove either the
potassium channel beta subunit hyperkinetic or the
alpha subunit shaker from these neurons. So altogether, we think these
behavioral and imaging results suggest that the potassium
channel beta subunit indeed couples mitochondrial
electron transport to sleep. Now, you probably ask
yourselves, how can it be that an extremely short-lived agent such as superoxide or hydrogen peroxide-- which lives only a very short time because it's so highly reactive-- can serve as a signal that is conveyed from the mitochondrial electron transport chain all the way to a potassium-channel beta subunit that's suspended from the plasma membrane? What we think is
that we're probably missing a crucial biochemical
link in this signaling chain. And we also have a
hypothesis as to what this particular
intermediate might be. We think it's lipid
peroxidation products that are derived from
mitochondrial-membrane lipids. These lipids are some of
the most susceptible targets of oxygen-free radicals. And importantly, they
produce compounds such as the 4-oxo-2-nonenal
that you see, which are established
potassium-channel beta-subunit substrates. The beta subunits are oxidoreductases, or specifically, aldo-keto reductases. So they take carbonyl compounds and reduce them to the alcohol. And the 4-oxo-2-nonenal is obviously an aldehyde, so it has the correct chemistry to be reduced to the alcohol. And that reduction would then
be coupled to the oxidation of NADPH to NADP+.
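Spelled out, this is just the generic aldo-keto reductase reaction, written here with R-CHO standing for a reactive aldehyde such as 4-oxo-2-nonenal; whether that is the true substrate in vivo is, as I said, our hypothesis:

```latex
% Generic aldo-keto reductase reaction; R-CHO = a reactive aldehyde
% such as 4-oxo-2-nonenal (hypothesized substrate):
\[
  \mathrm{R{-}CHO} \;+\; \mathrm{NADPH} \;+\; \mathrm{H^{+}}
  \;\longrightarrow\;
  \mathrm{R{-}CH_{2}OH} \;+\; \mathrm{NADP^{+}}
\]
```

Each such turnover leaves an oxidized cofactor, NADP+, sitting in the beta subunit. There are several additional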
pieces of evidence that suggests that this
is a likely candidate. One is that Rod MacKinnon,
in the discussion of his structure, notices
that the active site of the beta subunit is
unusually hydrophobic. And he also notices that there
is an ill-defined electron density in the active site. To me, this suggests that it's
a lipid-- or a lipid mixture that's bound to the crystal. And, of course, since fatty
acids are heterogeneous, the breakdown products that
will be produced from them through peroxidation will also be
heterogeneous and not produce a clear diffraction
pattern in the crystals. So the idea, then, is that the
molecular signal that conveys rising sleep pressure in these
neurons is the progressive oxidation of the cofactor at the
potassium channel beta subunit from NADPH to NADP+,
and that that, somehow, is linked to the
induction of sleep. So we, of course, looked for
ways to test this idea causally and found an optogenetic tool--
or adapted an optogenetic tool-- that would allow us
to do precisely that, and through a pulse of
light, flip the redox state of the cofactor that's bound to
potassium channel beta subunit. The tool that we
used was developed by the late Roger
Tsien, who's also been mentioned several
times already today, as a genetically
encoded contrast agent for electron microscopy. The tool is called miniSOG,
or mini Singlet Oxygen Generator. It's an engineered
flavoprotein, which we have, in
this case, anchored with a lipid modification in the
leaflet of the plasma membrane, in close proximity to
the potassium channel. And then, upon illumination,
we expect this tool to oxidize the NADPH
cofactor, either directly or via a locally produced lipid
peroxidation intermediate. And that, of course, then,
if our idea is correct, should put flies to sleep. And as you can see in these
experiments, it indeed did so. The crucial column to look
for is the one in the center. In all cases, we measure
sleep in individual flies-- each row is one fly-- for 30 minutes after an
initial 9-minute exposure to blue light. And you can see that, compared
to their parental controls, flies that have
miniSOG go to sleep in much greater
proportion and for longer than the parental controls. Once again, the
effect is blocked by the removal of hyperkinetic. That's the fourth
column from the left. But it's not blocked
by the removal of an innocuous potassium
channel, KD4 [INAUDIBLE].. Now the ability to set the
redox chemistry of the cofactor directly throughout
the genetic tool then also opens
the door through-- to biophysical studies
of what actually happens to the excitability
of these neurons as we flip the state
of the cofactor. So we're able to patch onto
one of these sleep-inducing neurons, and then, again, after
9 minutes of illumination, measure either, in current
clamp, its spiking behavior, or in voltage clamp,
the characteristics of the voltage-gated
potassium currents. This is an example of a neuron
you can see that, clearly, after illumination, the
spike rate increases. The input-output
function steepens. The inter-spike
interval contracts. So in other words, the neuron
becomes much more vigorously electrically active. And the biophysical
change that underpins all of this in the voltage-gated
A-type potassium current is a lengthening of the
inactivation time constant. So the potassium channel starts
to inactivate more slowly with an oxidized cofactor
than with a reduced cofactor. Now when Chuck Stevens
and John Connor defined the A-type
current in 1971, they included a
modeling study in which they proposed that
the A-type current is the main determinant of
the inter-spike interval of persistently active neurons. And conceptually--
or intuitively-- the way to link inactivation
kinetics to firing rate is in the following way-- you need a powerful
A-type current to restore the membrane
potential to its resting level after a spike. If your neuron is persistently
active with each spike, you will push a certain fraction
of your potassium channels into the inactivated
state, and therefore make them unavailable during
the next repolarization event. By slowing the inactivation,
you keep a larger fraction of your channel population in
the conducting, active state. And that allows you faster
repolarization, and therefore higher spike rates, and in
this particular physiological context, deeper or longer sleep.
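For those who like to see the arithmetic, here is a deliberately minimal sketch of that intuition-- a toy availability bookkeeping with invented time constants, not the Connor-Stevens model-- showing that a longer inactivation time constant leaves a larger steady-state fraction of A-type channels available between spikes:

```python
# Toy bookkeeping of A-type channel availability across repeated spikes.
# During each spike the available fraction relaxes toward 0 with tau_inact;
# between spikes it recovers toward 1 with tau_rec.  All numbers are
# illustrative assumptions, not measured values.
import math

def steady_state_availability(tau_inact_ms, tau_rec_ms=30.0,
                              spike_ms=2.0, isi_ms=20.0):
    a = 1.0
    for _ in range(1000):                                     # iterate to steady state
        a *= math.exp(-spike_ms / tau_inact_ms)               # inactivation during the spike
        a = 1.0 - (1.0 - a) * math.exp(-isi_ms / tau_rec_ms)  # recovery between spikes
    return a

for label, tau in [("reduced cofactor (fast inactivation) ", 3.0),
                   ("oxidized cofactor (slow inactivation)", 12.0)]:
    print(f"{label}: tau_inact = {tau:4.1f} ms -> "
          f"availability ~ {steady_state_availability(tau):.2f}")
```

More available channels translate into faster repolarization and higher attainable spike rates. If we express just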
GFP and not miniSOG, you can see that
light has no effect. But the changes that we
saw upon illumination in within-cell experiments were also reflected in between-cell
recordings where we just compared the
properties of neurons-- this is the bottom row now-- that either express the
catalytically active or the catalytically
dead rescue transgenes. So you can see that the
catalytically active rescue transgenes, again,
cause high spike rates and slowly inactivating
A-type currents. And the same is also
true for manipulations of the cells' ability
to either prevent the production of
reactive oxygen species or to induce them
with the presence of this pro-oxidant
version of SOD1. So this suggests that
there may, in fact, be a direct mechanistic
connection between rate of living and sleep, which
is not entirely unexpected given the
epidemiological evidence. Many things that
cause oxidative stress have been implicated in aging
and degenerative disease. And of course, chronic
sleep deprivation has also been implicated as a
cause of shortened lifespan. So possibly, this
is the mechanism that might link these
two important phenomena. So we've reached a
stage where I can return to this
conceptual animation and try to replace it, for you,
with a molecular, mechanistic animation. In the next animation, there will be quite a bit going on, but I'll talk you,
slowly, through it. So the crucial regulator
that determines whether this sleep-inducing
neuron is in fill or discharge mode, and the animal
awake or asleep, is the sandman channel,
shown here in yellow, which can be either
in the plasma membrane or in intracellular vesicles. We know that dopamine
drives the internalization. And we are feverishly
working to find the signal that causes the
endocytosis of the sandman channel. So when sandman is in the plasma
membrane, spiking is blocked. And the cofactor of the
potassium channel beta subunit population gets progressively
oxidized to NADP+ as a reflection of the operation
rate of the mitochondrial electron transport chain. Now I mentioned to you, before,
that these beta subunits are probably the lousiest
enzymes known to man. And of course, that's
exactly the property you would desire if you were to
construct a system like that. Because what you need is
a biochemical memory that holds on to each
oxidation event, and out of multiple
of these events, then constructs an analog
measure of the accumulated sleep pressure. If the enzyme were catalytically efficient, with fast cofactor exchange, each oxidation would be
fleeting and ephemeral. And your accumulated sleep
pressure would disappear. Now through this ability
of the beta subunit to communicate with
the inactivation gate of the channel and to
regulate the inactivation time constant, the
same process also automatically determines the
commensurate corrective action. Because it is the fraction of
the hyperkinetic pool that's been oxidized that
determines the kinetics of the A-type current, and
therefore the spike rate of the neuron. Now one particularly important
aspect of a system like this is, of course, that the
accumulated sleep pressure somehow has to be dissipated
when the animal actually goes to sleep. And what we think a particularly
beautiful way of accomplishing this is by coupling
the enzymatic activity of the beta subunit to the
voltage-driven rearrangements of the alpha subunit. So as you see on
the animation here, when sandman moves
out of the membrane, the neuron becomes
electrically active. The voltage sensors
start to move. These conformational
changes, we think, get transmitted to
the beta subunit. And suddenly, an escape path for
the oxidized cofactor opens up. NADP+ gets kicked out,
gets replaced with NADPH. And the animal
wakes up refreshed, with its cofactor
pool replenished.
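If it helps to see that accounting principle as an explicit little simulation, here is a toy sketch-- with invented rate constants and thresholds, purely illustrative, not a fit to any data-- of a slow, memory-like cofactor pool being oxidized during waking and re-loaded during sleep:

```python
# Toy sketch of the sleep-pressure bookkeeping narrated above.  Sleep pressure
# is the fraction of Kv-beta subunits whose cofactor has been oxidized to
# NADP+; it creeps up during waking (each oxidation is remembered because
# cofactor exchange is blocked) and is dissipated during sleep, when the
# electrically active neuron opens the escape path for NADP+.
WAKE_OXIDATION_RATE = 0.02   # fraction oxidized per minute while awake (assumption)
SLEEP_EXCHANGE_RATE = 0.10   # fraction re-reduced per minute while asleep (assumption)
SLEEP_THRESHOLD = 0.60       # oxidized fraction at which sleep is induced (assumption)
WAKE_THRESHOLD = 0.10        # oxidized fraction at which the fly wakes up (assumption)

pressure, asleep = 0.0, False
for minute in range(0, 24 * 60, 30):                 # one simulated day, 30-min steps
    for _ in range(30):
        if asleep:
            pressure -= SLEEP_EXCHANGE_RATE * pressure
        else:
            pressure += WAKE_OXIDATION_RATE * (1.0 - pressure)
    if not asleep and pressure >= SLEEP_THRESHOLD:
        asleep = True            # Sandman internalized; dFB neurons fire; sleep begins
    elif asleep and pressure <= WAKE_THRESHOLD:
        asleep = False           # cofactor pool replenished; the animal wakes refreshed
    print(f"{minute // 60:02d}:{minute % 60:02d}  oxidized fraction = {pressure:.2f}  "
          f"state = {'sleep' if asleep else 'wake'}")
```

Before I finish, I'd like to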
return, for just a minute, to the very Stone
Age of optogenetics. When we started to work on the
first optogenetic actuators, I became aware, through
a citation alert to the synapto-pHluorin paper
that Ben Heidel mentioned, of a quotation that
Ben Heidel also already mentioned in his introduction. And that showed to
me that I was not the only scientist who had
seen the need for technologies like this. And obviously, Francis
Crick, in an essay entitled "The impact of molecular
biology on neuroscience," which was published in
the millennial issue of The Philosophical
Transactions of the Royal Society, wrote, as
Ben Heidel already said, "The next
requirement is to be able to turn the firing of
one or more types of neuron on or off in a rapid manner
in the behaving animal. The ideal signal would be light. This seems rather far-fetched,
but it is conceivable that molecular biologists could
engineer a particular cell type to be sensitive
to light in this way." So when we had the first
experiments that turned that far-fetched possibility
into a reality-- and these are these experiments. These are hippocampal
neurons grown in culture, transfected
with an opsin protein taken from the fly eye-- because channelrhodopsin had not yet been identified-- and GFP. So we
then patch onto one of these transfected neurons. You can see that, in the dark,
it sits around resting value. But as soon as we
turn on the lights, there's a depolarizing step. And the neuron responds with
a volley of action potentials. So when we had
these experiments, I sent Crick a
pre-print of our paper. And if you've read this
wonderful book called The Eighth Day of
Creation, which recounts the early days
of molecular biology, you know that Crick was a
prolific letter writer who steered the development
of many fields through a vast correspondence. And his stylistic hallmarks as
a correspondent were twofold. He was always encouraging,
and he was also constructively critical. And that's exactly what I got. So he wrote back to
me and said that he read the paper I had sent
him with great interest and was excited to see that
the system already works, at least to some extent. However, he realized, as I
did, that it still needed improvement and that this was-- and that this would
take further work. Unfortunately, Crick
didn't live to hear how not just our
experiments progressed, but those of many others. But I think it's fair to say
that the improvements have come, thanks, in large
measure, to my co-laureates and that, as a result
of all these efforts, the way we neuroscientists
go about our business has fundamentally changed. With that, thank you very much. And thank you too. [APPLAUSE] To the members of
my group, I'd just like to mention a few
of the key individuals. So the current crew is
aligned to the left. And some of the notable
alumni have been displaced by one tab stop to the right. Boris Zemelman was
the postdoc who made the first optogenetic
actuators, and Susana Lima, the graduate student who used
them in the behaving animal. And the recent work on sleep
was done by, initially, two postdocs, Jeff Donlea
and Diogo Pimentel, and the more recent work on
the redox control of sleep by a postdoc, Anissa Kempf, and
a grad student, Michael Song. Thanks. [APPLAUSE] Thanks, Gero, for
that wonderful talk. So our next talk is
from another postdoc. It's by Dr. Charlotte Arlt.
She is originally from Germany, did her undergraduate work
at the University of Cologne, and then went from there
to do her PhD with Michael Houser at University
College London, and then came to
Harvard and joined the laboratory of Chris
Harvey in the department of neurobiology. And she's going to
tell us about how she uses light to read
out and manipulate the activity of
neurons in an effort to understand
decision-making processes in the brain of a mouse. [APPLAUSE] Thank you very much
for the opportunity to share our recent work. It's truly an honor. And I'm excited to
be able to share our work at this occasion. Coming back to the
theme of racket sports, we make decisions in our
everyday life all the time. If you think of being a tennis
player like Roger Federer, for example, which I
do on a daily basis, he has to make up his mind about
hitting the ball to the left or to the right
over and over again. But the process by which
he arrives at a decision might very much depend on the
context in which he makes it. So in this case, imagine
you're Roger Federer, and you are in a
training situation. Your coach is across the net. And the coach instructs
you to always hit the ball to the exact same spot. So in this case, the mapping
from sensation to action is very straightforward to you,
guided by key instructions. But now imagine you're
Roger Federer again, and you're playing in the
Wimbledon final against Rafael Nadal. The ball is coming towards
you in the exact same way as in training. And you might hit it
to the exact same spot. But now what's
guiding your decision is a complex model
of your environment, including a model
of your opponent and statistics of
a match like this. So this combination
of sensory input with some internal knowledge
or experience to guide action is what we think
of as cognition. And in the Harvey
Lab, we would love to understand how this
process is implemented in the nervous system. We understand this is a
very ambitious question, so we try to tackle it by asking
two concrete questions here in this talk. Firstly, we want to know
what brain circuits actually mediate such seemingly simple
decisions as hitting a ball to the left or to the right. And once we've identified
these circuits, we can then ask, how
does context actually affect the implementation of
such a decision-making process in these very circuits? And what I want to
tell you about today is how we search
for such circuits mediating simple decisions. And to our surprise, we
found that the identity of the involved brain
areas mediating decisions would change depending on the
context and the experience of the animal. So for the remainder
of the talk, I'd like to tell you how we
arrived at this conclusion. Wanting to study brain
circuits for decision-making, we don't have tennis players. But we train mice
to make decisions by running to the left
or to the right in mazes. We think this is quite
a naturalistic behavior for animals to
display, because that's what they need to do in the
wild to survive as well. And once we train mice to
make decisions in that manner, we can then inhibit and silence
different parts of their brain and ask, what brain areas
are they actually using to guide their decisions? And you can imagine
that experimental setups to manipulate different brain
regions in the same mouse might be quite heavy and
difficult for an animal to carry around. So instead of having the mouse
run freely in the environment, we actually move the
world around the mouse and keep the mouse stationary. So here you see an animal in
such a virtual reality setup. It's running while
being head-fixed. And the Styrofoam ball
that it's running on, the movement of this
ball is translated into movement of
the virtual world that we project on
a screen surrounding the animal's field of view.
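In case the closed loop is easier to picture in code, here is a minimal sketch of that ball-to-world translation. I'm assuming, for illustration, that the ball readout gives a forward-roll and a yaw component each frame; the names and gains are hypothetical, not the actual parameters of the rig:

```python
# Minimal sketch of translating ball rotation into movement through the
# virtual maze, one display frame at a time.  Gains and readout names are
# hypothetical.
import math

def update_pose(x, y, heading, forward_roll, yaw_roll,
                gain_forward=1.0, gain_turn=1.0):
    """Advance the animal's virtual position and heading by one frame."""
    heading += gain_turn * yaw_roll                       # yawing the ball turns the view
    x += gain_forward * forward_roll * math.cos(heading)  # rolling the ball moves forward
    y += gain_forward * forward_roll * math.sin(heading)
    return x, y, heading

# Example: two frames of running straight, then a gentle left turn.
pose = (0.0, 0.0, 0.0)
for forward, yaw in [(2.0, 0.0), (2.0, 0.0), (1.5, 0.2)]:
    pose = update_pose(*pose, forward, yaw)
    print(f"x = {pose[0]:.2f}, y = {pose[1]:.2f}, heading = {pose[2]:.2f} rad")
```

So animals are using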
the visual feedback that they get from this world to
update their running patterns. We can use the
system, now, to train mice to turn either
left or right in a very simple Y-shaped maze. And we train them to use
visual associations of cues that they see on the maze walls
with rewarded turn directions. So in the top example,
you see the mouse is seeing vertical cues
on the side of the maze. And in this case, it has to
run left at the end of the maze to get a reward
once it gets there. In the bottom case, it sees
the opposite trial type of visual cue. And then it has to
run to the right. So let's see what this
actually looks like in action in a trained mouse. The animal in the left case
encounters this horizontal bar, successfully runs to the left,
where it's supposed to run, gets some visual feedback of
the correctness of its choice. And then a little drop of milk
is dispensed to reward it. And a few trials
later, it encounters the opposite trial type,
those vertical bars, chooses to run to
the right, which is the correct
decision in this case, and again, gets
rewarded at the end. Now that we have
animals making decisions in this virtual
reality setup, we can expose their brains
by removing the skin from the skull, and thereby
gain optical access to the brain surface right underneath. We're using mice that express
channelrhodopsin, what you've heard about a lot so
far now, specifically in GABAergic neurons
in neocortex. So these neurons are the
small interneurons here, depicted in green, that inhibit
the pyramidal population. And the pyramidal
neurons are normally the ones that carry information
from the local circuit out to different brain regions. So when we now shine light
of the appropriate wavelength onto the skull, we can
activate the interneurons through the skull. And those interneurons, in turn,
inhibit the local population, thereby silencing a given
volume of the brain. And we can do this with very
high [INAUDIBLE] precision, again, as you've heard
in the previous talks. Here the light is on,
indicated by the blue bar. For just a few
hundred milliseconds, the interneuron that
you see in the top row there responds
reliably and strongly. And the pyramidal neuron simultaneously recorded underneath pauses its spiking at the same time. And at this point,
I'd like to thank the pioneers of optogenetics,
whom we're honoring here today. Because none of the experiments
that I'm about to describe would have been possible
without their contributions. And even after running these
experiments for quite some time now, it's still astonishing to
be able to remote control brain activity in a living
mouse making decisions. What areas do we
actually want to inhibit? We focused on a
few candidate areas for decision-making, one of
them being the parietal cortex. The parietal cortex
gets sensory input from many different
modalities, and in turn, projects to different areas
implicated in action selection. So it's really at the
intersection of sensation and action, and has been
implicated in decision-making across species. Another region we're focusing
on is the retrosplenial cortex, given that we're using
a navigation-based decision-making task. Because the
retrosplenial cortex is at the interface between
subcortical systems for navigation, such
as the hippocampus and the entorhinal cortex,
and other cortical regions, including the parietal cortex. We can now couple
our blue laser light to a system of mirrors whose
position we can change very quickly to steer the laser
beam onto different target locations on the skull. So in one trial,
for example, we may be inhibiting parietal
cortex on both hemispheres here while the animal
is running down the maze and choosing to
turn left or right. And then in the next
trial, we can quickly change the mirror positions
and steer the laser beam onto a different region--
here, retrosplenial cortex. We choose the order of these
target locations randomly. And we also interleave them with control trials where we steer the laser
beam outside of cortex. We have another control trial in
somatosensory cortex where we inhibit a local volume
that supposedly isn't involved in making these
types of visual association decisions.
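To give a concrete picture of how those trial types get interleaved, here is a small sketch; the area labels and the uniform random sampling are my illustrative assumptions, not the exact trial balancing we use:

```python
# Sketch of building a randomly interleaved sequence of per-trial laser
# targets, including the two control conditions described above.
import random

TARGETS = [
    "parietal cortex (bilateral)",
    "retrosplenial cortex (bilateral)",
    "somatosensory cortex (control)",
    "off-cortex location (control)",
]

def build_session(n_trials=200, seed=0):
    """Return one laser target per trial, in random order."""
    rng = random.Random(seed)
    return [rng.choice(TARGETS) for _ in range(n_trials)]

session = build_session()
for area in TARGETS:
    print(f"{area}: {session.count(area)} trials")
```

So given that we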
have a system now where animals are
making decisions in this virtual reality, and
we can inhibit different brain areas, we can finally
ask, what area is actually necessary for making this simple
type of navigation decision? And we do this by
subselecting trials where the laser beam was
in a particular location and then quantifying
the average performance of the animal in those trials. So we quantify
performance as fraction correct where 1, or
100%, means the animal is making no mistakes. And 0.5, or 50%, means
the animal would perform at chance level, either
just guessing randomly or continuously running
to the same side.
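For completeness, the quantification here is nothing more exotic than this kind of bookkeeping-- a hedged sketch with hypothetical field names, not our actual analysis code:

```python
# Group trials by where the laser was steered and compute the fraction of
# correct choices per condition; 0.5 is chance, 1.0 is perfect.
from collections import defaultdict

def fraction_correct_by_target(trials):
    """trials: iterable of dicts with keys 'laser_target' and 'correct' (bool)."""
    counts = defaultdict(lambda: [0, 0])          # target -> [n_correct, n_total]
    for t in trials:
        counts[t["laser_target"]][0] += int(t["correct"])
        counts[t["laser_target"]][1] += 1
    return {target: n_corr / n_total for target, (n_corr, n_total) in counts.items()}

# Tiny made-up example:
example = [
    {"laser_target": "off-cortex control", "correct": True},
    {"laser_target": "off-cortex control", "correct": True},
    {"laser_target": "retrosplenial", "correct": False},
    {"laser_target": "retrosplenial", "correct": True},
]
print(fraction_correct_by_target(example))
```

But you see here,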
in the control case, the performance is very high,
far away from chance level. The animal is making very
few mistakes, meaning, one, that it knows this
task very well, but two, also that it's not distracted
by blue light in general. When we now inhibit somatosensory cortex, we see a very similar
picture, indicating that the animal is not
relying on spiking activity in that area to guide
its navigation decision. When we inhibit
retrosplenial cortex, we see a very
different picture now where, every time, on average,
when we inhibit this brain area, we are causing the
animal to make mistakes in this type of
decision-making task, indicating that it's
actually relying on activity in this area
to guide its decisions. And now quite surprisingly,
when we inhibit parietal cortex, the animal can still
perform the task very well, indicating that
activity in this area seems to be dispensable
in this setting, and the animal can do
quite well without it. So having identified, now,
the retrosplenial cortex as an area that's mediating
these simple types of decisions, we
can now go ahead and modify the context in which
the simple decision is made. So we create a
flexible context now where, on top of
the two associations that I've showed you
before, sometimes the animal has to make
the opposite choice given the same visual cue. In a given experimental
session, we introduce these two
pairs of associations in blocks of tens of trials. And once an animal
has been trained in this setting for
quite some time, it can perform
quite well, mainly making mistakes, really, at the
change points of these blocks. So after a switch of
the association block, the animal makes a few
mistakes, because it's still using the old association. Then it realizes it doesn't
get rewarded this way, and updates its
strategy, and uses the new pair of associations. So towards the end of
each of these blocks, the animals are performing at
a very high percent correct. And their decisions,
outwardly, look very similar to the
ones that animals made in the simple context. And let me just show you how
similar those decisions look. In the left case,
you see an animal that was trained in
the simple context. It encounters this
horizontal cue and has to turn to the left. In the right, you
see an animal that was trained in the flexible
context, encounters the same trial type, also
has to turn to the left. And when you just look at
these movies side by side, they really look
identical, indicating that the types of decisions,
outwardly, these animals make are very similar. So now as a sanity
check, we first wanted to see if,
again, retrosplenial is actually mediating decisions
in the rightward case. So we inhibited the same
targets as I showed you before, but now
specifically, towards the end of these blocks, where
the animal is performing at very high fraction
correct, meaning it knows the association well. So again, in the
control setting, or with somatosensory cortex inhibition, the animal is performing
the task very well. Again, with retrosplenial
cortex inhibition, we induce many errors. But now you see the drop
is quite large, actually close to 50%, meaning the
animal is almost performing at chance level. So in this flexible
context, it seems to especially rely on activity
in this part of the brain to guide its decisions. But now, quite
surprisingly, when we inhibit parietal
cortex, we also see a very large
drop in performance. And again, we didn't
see this type of drop in performance in
the simple context. So in the flexible
context now, it's not just retrosplenial cortex
guiding the animal's decisions, but the animal is also relying
on this additional brain area, the parietal cortex,
which, just to remind you, in the simple context, for the
very same outward decision, it didn't need. And given that we were quite
stunned by this result, we wanted to make sure that
we understand what's going on. It seems like the current
cognitive context dictates whether this
parietal area is also necessary for decision-making. Now if that's true, we
should be able to take the exact same mouse, change
the cognitive context it's experiencing, and thereby
change the number of brain areas that are involved
in the decision. So to test this,
as a sanity check, we took an animal
that was trained in the flexible context. And again, to remind you, with
parietal cortex inhibition, we see a very large
performance drop. So the animal is using
this part of the brain. And then we
transition this animal to the simple
context for 14 days, where it's not experiencing
any association switches. In the control case, of
course, the performance stays very high. But now, stunningly, when
we inhibit parietal cortex, we continue to see the
strong effect on performance, suggesting that
the brain persists in using this decision-making
area for this very simple decision
even weeks after we switched the animal from the
flexible to the simple context. And we thought this was
quite stunning, especially, again, when we compare
the lack of effect in the animals that were trained
in the simple context here. Because again, animals trained
in the simple context who have never seen the
flexible context do not rely on this brain area,
the parietal cortex, at all to guide their decisions. So in addition to the current
context dictating what brain areas are used
to make decisions, context can also have a
very long-lasting impact on what brain areas are used to
guide the same simple decision. Finally, we wondered
whether, maybe, we see this very large
effect of context because perhaps we're looking
at a very extreme case here. Or maybe it's some
special case where we're comparing this simple
context to a flexible context. It could be that, perhaps, the
flexible context is especially demanding for the
nervous system, where the same cue has to be
mapped to opposite choices. So we created another context
where the animal doesn't have to reverse its
choices, but we just create a more diverse context. So we keep the two associations
that you've seen before. And then we add two
more associations with different visual cues. But now, importantly,
the mapping from the visual cue to the
choice of the animal that's rewarded is constant. So finally, we ask what
brain areas the animal relies on in this type
of decision-making. And again, we see not only
retrosplenial cortex, but also parietal cortex being
used by the animal to guide its decisions. So it seems that
context, in general, has a very strong impact
on what brain areas are used for decision-making. And context can be
changed in various ways. You can increase
diversity of context, or you can make the
context more flexible. And probably, there
are many different ways on top of these two
variations we've shown you that would change the number
of brain regions involved in decision-making. So we think we've
seen something quite interesting about
the brain here, namely that it can implement
the exact same decision from the outside using
completely different brain circuits. And this suggests that
the brain is actually tremendously flexible and
shaped in very profound ways by context and experience. Because we're not just
talking about changing the synaptic connections
between individual neurons here. We're talking about using
entirely different sets of brain regions for
the very same decision. And we think this might have
some important implications for how we want
to study cognition as systems neuroscientists. Because we have shown that
behavioral task design, details, and training
history really matter. If we have two animals
that perform this outwardly seemingly identical
task, they might actually be relying on
different brain regions to do so depending
on their experience. So we need to control for
experience much more carefully. But in addition, we are also
suggesting that perhaps we should leverage this
diversity to create more diverse and naturalistic
laboratory settings in which we study decision-making. Now where do we
take this work next? With Sofia Soares,
in the Harvey Lab, we've built a special
microscope that allows us to image different
brain areas simultaneously. So we're currently
using this approach to look at activity in all
those decision-making areas that I've talked to
you about previously. And we're asking how
activity in those areas, but also across those
areas, may differ depending on the cognitive context
in which animals are making the seemingly same decisions. And another interesting
research direction taken by other lab
members is trying to actually get to
what's the inner workings of those individual areas. So all the
optogenetic approaches that I've shown you
today were quite coarse, silencing entire brain regions. But Selmaan, in lab,
has developed a method where he can activate
individual neurons one at a time while monitoring the activity
of the surrounding tissue. So here you see,
from left to right, he's intentionally, with
lights, activating one neuron at a time in the living brain. And he can use this
technique to ask questions about the function, and
micro-circuit architecture, and about computations
that a given circuit may be performing. And Dan Wilson,
in the Harvey lab, has taken this approach
a step further by now activating 10 neurons
simultaneously, here shown with the blue arrow. And he's doing this as animals
perform decision-making tasks. So he can ask what
the causal link is between the activity
of individual neurons that he can functionally
identify, for example, neurons that respond to a visual
cue that the animal is using to guide its choice. So you can link activity
of those neurons, now, to the network
activity, but also to the performance
of the animal, trying to draw causal
links between activity of individual neurons
and cognition. And with that, I'd
like to thank everyone who's contributed to this
work, first and foremost, Roberto Barroso-Luque. He was a research
technician in the Harvey Lab whom I very closely collaborated
with on this project and who is now off
to grad school. But he really helped to
push it forward and push it in all different directions, to
the scale that you saw today. I'd also like to thank
Chris Harvey, who's been a tremendous advisor on
all aspects of the project, from training mice all the
way up to preparing this talk. I really value his input,
and advice, and his passion for science in general. I'd also like to thank Selmaan,
who originally designed this flexible
decision-making task that inspired the whole
project, really, and then the whole Harvey Lab community
for fun, scientific discussions and great feedback, our
funding sources, the research and instrumentation core, and
the medical school community in general. [APPLAUSE] Thank you, Charlotte. That was wonderful. So our last speaker is Dr.
Karl Deisseroth from Stanford. He is the Chen
professor and chair in the departments of
psychiatry and bioengineering, an interesting combination
which really defines his career. Karl did his undergraduate
work here at Harvard, majoring in biochemistry,
and then went to Stanford to do a combined MD/PhD. He also worked with Dick
Tsien, as did Ed, and studied the coupling between
activity of neurons, calcium entry, and cellular processes. And this is where I
first got to know him. Because I was doing similar
work here with Wade Regehr. After finishing graduate
school and the MD, he did clinical
training in psychiatry and is still a
practicing psychiatrist. As we've already mentioned,
he, along with Ed and Feng, were the first to
put channelrhodopsin into mammalian neurons
and show that they could control the excitability
of those cells using light. Since then, his
laboratory has really led a steady drumbeat of
developments in optogenetics over the years, in
which he's produced literally dozens of different
kinds of optogenetic actuators that we can use to manipulate
the activity of cells. His lab has also produced
light-activated G protein-coupled receptors,
step function options, and many, many other tools. Separately, his laboratory
also invented the light clearing-- the brain clearing
approach CLARITY that's now being used
ubiquitously to look at the structure of the
brain in intact organs that don't need to be sliced. This has become a very
powerful technology. And as you'll see in
a minute, in addition to inventing technologies,
Karl's lab has constantly used these to make
fundamental discoveries about the organization
of the mouse brain, and I think, led by his own
experience as a psychiatrist, has really begun to reveal
how the animal not only makes decisions in a normal
state, but also how this goes wrong in
some pathological states. Karl, thank you. [APPLAUSE] All right, thank you, Renardo. Very grateful for
this tremendous honor. And heartfelt congratulations
to my fellow prize winners. This is a wonderful moment. So thank you for all
you've done over the years. I want to-- since
going last, you have heard a lot about things. So I'm going to move, as quickly
as I can, to the present day without spending too
much time on the past. I do want to talk a little bit
more, in even greater detail, about the inner workings of
the channelrhodopsin protein itself. The progress of this
field has been very rapid. As recently as 2011,
we did not know much about the inner structure
of channelrhodopsin. But things have
progressed very quickly over the ensuing six years. We now know a great deal. Here's that retinol binding
pocket that Peter mentioned. This is the ion pore, lined
with charged and polar residues, including these five
glutamates E1, E2, E3, E4, E5. And attaining this level of
understanding of the protein is, of course, exciting in
its own right for people who care about
proteins, and molecules, and amazingly, elegant
natural machines like this, but of course, also has led
us to be able to change them, change their properties
fundamentally in ways that really matter and are useful. For example, we were
able to make them faster, as Peter mentioned-- so
here, going up to 200 Hertz, spiking with a very fast
mutant, as we described in 2010, getting
red-light-driven spiking, as we did in 2011, together
with Peter Hegemann and Ofer Yizhar, who's here today,
getting this bistable operation with the step function
tools, flipping cells into and out of
excitable states, and then making the
channelrhodopsins inhibitory, and then making that, in turn,
be bistable inhibition as well. And all of these stemmed
from molecular modeling, structural determinations, and
a great deal of work, most of it in collaboration with Peter Hegemann and many other very talented colleagues. I want to touch on two aspects
of this scientific journey that were particularly
useful and relevant to modern neurobiology. Of course, a big part of it
was getting these three crystal structures. When we got the 2012
one in collaboration with [INAUDIBLE] Kato, Feng Zhang, Ofer Yizhar, Charu Ramakrishnan, and Osamu Nureki, we saw, right away, it
was a dimer of two 7-transmembrane proteins. Each one had its own
retinal, its own pore. But we also-- in seeing the
pore, we, for the first time, had the opportunity
to change it. It had been prominently
hypothesized that the pore might lie-- rather than within
each monomer, might lie at the interface of a
dimer, or even a trimer. This turned out to be wrong. But of course, not even
knowing where the pore was, it would be very hard
to re-engineer it. And we were able to do that. In looking at the inner lining
of the pore from our structure, we could see that
it was largely lined with polar residues, but also residues
that would be predicted to give rise to negative
surface electrostatic potential in the internal
lining of the pore and in the inner and
outer vestibules. And this led to an idea to
change the ions' selectivity. This wild-type channelrhodopsin
was a non-selective cation channel, as you've heard,
fluxing sodium, potassium, calcium, and protons. And Andre Berndt and Soo Yeun Lee, in my lab, worked hard to change that inner
lining to be more positive. And they succeeded. Against all odds, this came out,
along with a beautiful paper from Peter Hegemann-- a similar
result and different mutations, both ending up in creating
this chloride conducting channelrhodopsin
capability, which allows one to deliver
blue-light-based inhibition of spiking. And together with Peter,
we optimized these further and created the step
function inhibition forms of these inhibitory
chloride conducting channelrhodopsins. So these have been,
now, widely used. For example, together with
Will Allen, in my lab-- Will is now here as
a Harvard fellow-- we were able to
use this fast IC++, the next-generation version
of the chloride channel, to identify the causal roles
of neurons that are involved in the fundamental
survival drive of thirst. Now this is just one example. Getting to the
inhibitory chloride conducting channelrhodopsins
in 2014 was one step. But then a very
interesting thing happened. The following year,
John Spudich's lab identified naturally
occurring chloride conducting channelrhodopsins
from Guillardia theta. And just last year, we were able
to get the crystal structures of both the naturally
occurring chloride channel and the one that we had
produced together with Peter. And this gave us a very
interesting insight into both the natural
and the designed chloride conducting channelrhodopsins,
in particular that both the engineered one and the one
that nature had developed, in fact, used this principle of
surface electrostatic potential within the pore, and
also at the surface vestibules of the channel
pore, to exclude, in this case, likely, anions and [INAUDIBLE]
cations, in this case, to create a anion
conducting pore. So this revealed-- and we were
able to both convert anion conducting channelrhodopsins to
give them cation selectivity, take cation conducting
channelrhodopsins, give them anion
selectivity, all based on this structure-based
analysis of the pore. So at this point, we
understand the pore, at least to some extent. And as you'll see
later, that's even helped us screen for, and
identify, and understand new kinds of opsins that have
new kinds of functionality. But I want to talk
about color selectivity as a very important step first. And this red-light-driven
spiking, in part, depended on discovery
of a red-light-driven channelrhodopsin,
which was work, again, in collaboration with
Peter and with Ofar Izar, but work led by Feng
Zhang in my lab. In 2008, he found
this red-light-driven channelrhodopsin from a
multicellular green algae called Volvox carteri. And this enabled,
ultimately-- although we didn't realize that it would
have this impact at the time, this ultimately
enabled us to even move beyond what Crick's initial
concept of the utility of light control might be. And this has been shown
a couple times already. But I want to focus on a
different aspect of it, of Crick's initial statement. He very clearly focused on
types, types of neurons-- engineer a cell type. And indeed, this is very useful. And indeed, this is how the
vast majority of optogenetics has been done around
the world, allowing one to turn on or off genetically
targeted cell types. But he didn't,
even in this piece, describe a control of multiple
individual neurons, which is what, ultimately,
the red-light-driven channelrhodopsins have done
a great deal to enable. Of course, in the very
first experiments, we controlled single cells. But this was in-- with a readout of a patch
clamp, a pipette in culture. And here, showing some of
those very early experiments, here is the small
group back then. Here's myself,
and Ed, and Feng-- the good old days. With Mike Greenberg
in the audience, it's nice to point out
that the initial readout of membrane depolarization was
CREB Ser133 phosphorylation. I was initially a patch
clamper, but I also did a lot of work that
ultimately followed up on Mike's identification and
creation of reagents that allowed us to study this very
interesting phosphorylation event. So let the word go forth,
from the Warren Alpert Symposium, that Mike Greenberg
helped launched optogenetics. So thank you, Mike. [LAUGHTER] But then, of course, Ed's
gorgeous spiking recordings, Feng's elegant
design of viruses, and his design of fiber-optic
interfaces to allow us to control in behaving animals-- and that led to this initial
control of mammalian behavior in 2007, where this illumination
of supplementary motor cortex M2 on one side of the animal causes the animal to rotate in the
opposite direction. As soon as the little
blue light turns off, the animal stops rotating. Now even this was with
control of types of cells. In this case, the
photosensitive cell population was layer 5 cortical neurons. And we went on from there to
target deep hypocretin neurons in the lateral hypothalamus-- again, work led by Feng Zhang and Antoine Adamantidis-- but all of this was still at the level of cell types. And what ultimately turned
out to be particularly useful in opening the
door to single-cell was a derivative of the
initial Volvox channelrhodopsin that Ofar, and
Peter, and myself, and several in our group
described in 2011 called C1V1. It's a chimera of
channelrhodopsin-1 from Chlamydomonas and Volvox channelrhodopsin-1. Rohit Prakash, in my lab, was
able to express this in 2012, here using a patch clamp
electrode and loose patch configuration in a
awake and living mouse, and doing raster-scanning two-photon illumination just above the cell, not
getting spiking, within the cell,
getting spiking, and just below the cell,
not getting spiking-- so single-cell resolution
control in vivo, in mammals. And this was work
in collaboration with Adam Packer and Rafa Yuste. And it was back to back with
a paper, also collaborative, between our two groups showing
the first spatial light modulator, a
liquid-crystal-based holographic control
of single cells. But that was in culture. It took a number of
years between 2012 and the present to
actually translate this into the control of
mammalian behavior by control of multiples--
individually specified cells. The path to this led through
all optical experiments-- so using the
red-light-driven aspect of the Volvox-driven
channelrhodopsins and the blue-light-actuated
calcium sensors, like the GCaMP series that
you've heard a fair bit about already today. And we had combined--
in an experiment, together with David Tank, we'd
combined a Volvox-derived opsin together with
readouts of calcium signals, blue-light-driven
calcium signals, the GCaMPs, in 2014. But it was not with
behavior as a readout. It was simply showing
that you could do all optical interrogation
of neural circuits, but not affecting behavior. It was in behaving animals,
but not affecting behavior. It took some time, even from
that point, to get to the point where we could exert control
over mammalian behavior at the level of
multiple single cells. This was work from
earlier this year led by Josh Jennings, and Tina
Kim, and Jim Marshall in my lab. And what we did was target
orbital frontal cortex. This is a part of the
mammalian brain that, in human beings,
if it's lesioned, gives rise to a syndrome called
orbital frontal syndrome, where you can have serious
dysregulations in feeding behavior and in social behavior. This is a structure that, of
course, does many other things as well. It's involved in
value-based decision-making. But we were interested in
understanding and leveraging our potential for single-cell
resolution control. Our goal was to see if we
could study the interaction and competition of two primary
drives, primary survival drives, feeding and
social interaction, within this structure. And these cells, as is common
in systems neuroscience, are very often observed to
be active during behaviors, but not necessarily known
to be causally involved in those behaviors. And so that was a big
and open question. So what we did was use
GRIN-lens-based optics to give us access
to this somewhat deep cortical
structure and exert control over the single cells
while also getting readouts via calcium imaging
of the activity of those individual cells. And under this system,
we can quite readily identify feeding cells. As this trace progresses
at each of the gray bars, that's when a little droplet
of a high-caloric reward is delivered. And we can see cells that quite
reliably respond at the feeding droplet delivery. And so these are, as we
call them, feeding cells. These are cells that always-- or almost always-- respond
when this high-caloric reward is given. So this is their naturally
occurring activity, and this is very useful. They're sprinkled in
among other cells that don't have these properties. And then we can come and
give optogenetic control, leveraging our ability to exert
this single-cell resolution control. And we can do this
as shown here. The non-targeted
cells called NT here, these are cells that are
next to stimulated cells. And this gives you a flavor
of the spatial resolution. You can see the non-targeted
cells, the NT cells, are active in their own
right, doing their own thing. But they're not activated
when we activate, optically, the
feeding cells that are right next to them,
absolutely cheek by jowl with them in this structure. So we have single-cell
resolution control over these cells. And then the question
is, do we have-- are these cells causally
involved in the behavior? And so here's the result
of that experiment. If you-- in this case, driving-- we found if we just drove 20
to 25 of these feeding cells, we could enhance and extend the
feeding response to the droplet where each little tick is a
lick delivered by the animal. Did a number of controls. For example, if you leave
the Volvox-derived opsin out, you don't get that response. So this is-- that's good
news for the experiment. It shows that it's not
an artifact of the light, for example. But of course, you might
ask, does it matter that you're targeting
the feeding cells? What if you had targeted
other cells too? And what if they were
important cells to the animal, cells involved in appetitive drive? And so here, we
wanted to identify the socially responsive cells. And we did that
experiment as shown here. Now this is all done in the
head-fixed configuration. So you might ask, what, really,
is head-fixed social behavior? And it's maybe
not quite the same as natural social behavior. But in this case,
there definitely is a conspecific social
interaction, an interaction with another member
of the species, a juvenile, same-sex
mouse that can move freely around this chamber and
occasionally come in here, where there's an extended
period of whisking and sniffing. And indeed, there
are social cells that are active when this happens. And those are not the
same as the feeding cells. You might also ask, maybe
these are just surprise cells, or novelty cells. And that was an important thing. When I'm very
surprised or shocked, I might not eat as much. And so that's an important
distinction to know. Are these truly social cells,
if that's what we're interested, or something else? And so here we
3D-printed a mouse and had it pop up in quite a
shocking, almost a horror movie type fashion. I'll play this movie here. [VIDEO PLAYBACK] So the mouse is here, waiting. Oh. [LAUGHTER] And what's amazing is the
social cells absolutely do not respond at all, even
to that shocking stimulus. There are other novel
object cells that respond. It's going to come
up again here. And other controls
give us confidence that these are related, in some
way, to social interaction. And then the question became-- [END PLAYBACK] --if you drive those
cells, what happens? Does it increase, or
decrease, or do nothing to the feeding response? And what we found, at least
for the first couple minutes, was there was, in fact, a
suppression of the feeding response by driving
the social cell. So here, in orbital
frontal cortex, there is an interaction between
cells observing, or related to, primary survival drives. If you drive other
cells that are non-feeding or
non-social-- in SNF cells, you see no effect at all. So it does appear to be at
least somewhat a specific aspect of the feeding cells-- of the social cells in
their effect on feeding. So this was exciting,
because we were able to take these
Volvox-derived opsins and finally do an
experiment that we'd wanted to do from the
very beginning, which was to exert control at the
multiple individually specified cell level over
mammalian behavior. But of course,
we've been wanting to push this to yet
further and further levels. One limitation that,
at first, seems almost like a physical
limitation is one has to be very careful with
the amount of light power that one delivers
into living tissue. These are powerful
lasers that we're using to generate these
spots and drive these cells. And if we start wanting to
control more and more cells, we may, if we're not
careful, enter into regimes where we're delivering too
much heat or damaging cells in other ways. So we've been
working hard on this, working on ways to
deliver spots of light over broader fields of view,
which is good in its own right as well, and also
finding, and identifying, and using opsins that need
vastly less light in order to give rise to still
fast, triggered responses. And this has come along
quite quickly as well. Along with device development,
we've been producing very large spatial light modulators that
can give rise to holograms projected over very large
swaths of the mammalian brain, up to a 1-by-1-millimeter
area, which, for example, can cover most of
the visual cortex. These look like this. And these now have enabled
us to do the following sort of experiment. What's shown here are six-- 1, 2, 3, 4, 5, 6-- squares depicting
1-by-1-millimeter areas of primary visual cortex in
an awake, behaving mouse. And these six areas
are listed this way because we're going from
superficial to deep. So this is in three dimensions. And all the red,
circled cells are cells that are, first of
all, coexpressing both GCaMP and a new opsin-- which I'll tell you about in a moment-- that allows us to deliver much less light while still controlling many cells. But also, these are cells
that have been picked out by us by virtue of their
naturally occurring activity. So we're able to individually
specify these visual cortex cells by first identifying
the ones we want to control by presenting
visual stimuli to the animal and picking out the cells that
respond in the way we want. In this case, the circled cells are all responding, are all cells that we've picked out because they respond to one orientation of a drifting grating,
but not another.
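To make that selection step concrete, here is a minimal sketch, under assumed array names and thresholds (not the actual analysis pipeline), of picking out cells that respond to one grating orientation but not the other:

```python
import numpy as np

# Hypothetical inputs: trial-averaged response amplitudes (e.g., mean dF/F)
# of each imaged cell to vertical vs. horizontal drifting gratings.
rng = np.random.default_rng(0)
n_cells = 1000
resp_vert = rng.gamma(2.0, 0.05, n_cells)
resp_horiz = rng.gamma(2.0, 0.05, n_cells)

# Simple selectivity index in [-1, 1]: +1 only vertical, -1 only horizontal.
osi = (resp_vert - resp_horiz) / (resp_vert + resp_horiz + 1e-9)

vertical_cells = np.where(osi > 0.5)[0]        # candidate "vertical" ensemble
horizontal_cells = np.where(osi < -0.5)[0]     # candidate "horizontal" ensemble
random_cells = np.where(np.abs(osi) < 0.1)[0]  # nonselective control population
print(len(vertical_cells), len(horizontal_cells), len(random_cells))
```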
So with this configuration, we can do the following sorts of experiments. For example, here are
two cells that are more than a millimeter apart. By setting up our
holograms, we can truly stimulate, simultaneously,
two cells that are set this far apart, or
dozens, tens, or even hundreds of individually specified cells
at once in three dimensions.
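The holograms that place those spots are typically computed by iterative phase retrieval on the spatial light modulator. The sketch below is a minimal two-dimensional Gerchberg-Saxton loop for a multi-spot target; the actual systems work in three dimensions, with weighting and optical calibration that are omitted here:

```python
import numpy as np

def gerchberg_saxton(target_amplitude, n_iter=50):
    """Compute a phase-only SLM pattern whose far-field intensity
    approximates target_amplitude**2 (2D sketch only)."""
    phase = 2 * np.pi * np.random.rand(*target_amplitude.shape)
    for _ in range(n_iter):
        # Propagate the unit-amplitude SLM field to the image plane.
        field_img = np.fft.fft2(np.exp(1j * phase))
        # Keep the computed phase, impose the desired spot amplitudes.
        field_img = target_amplitude * np.exp(1j * np.angle(field_img))
        # Propagate back and keep only the phase (the SLM modulates phase).
        phase = np.angle(np.fft.ifft2(field_img))
    return phase

# Target: a handful of "cells" (bright spots) scattered over the field of view.
N = 256
target = np.zeros((N, N))
for y, x in [(40, 60), (200, 180), (120, 30), (90, 220)]:
    target[y, x] = 1.0

slm_phase = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * slm_phase))) ** 2
print("fraction of light on the targets:", recon[target > 0].sum() / recon.sum())
```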
And the opsin that we use is a very interesting and strange one. This is the first one that
has this unusual property. This is a cation conducting
channelrhodopsin. But its primary
sequence phylogeny puts it closer to
the pumps, actually, than to the other
channelrhodopsins, which, by itself, was interesting. We found this in a collaboration
with Susumu Yoshizawa and Hideaki Kato. It's from a marine organism. And we called it
ChRmine, because we used a structure-guided
genome mining approach, and it's a channelrhodopsin. And carmine, I learned
from my lab manager, is a deep red color, which,
I didn't know that before. You learn a lot
in this business. And I learned a
lot about colors. But it was a name that
actually turned out to be-- although it
looks somewhat sinister, it's actually
quite a good opsin. Don't say "crime." It's ChRmine. So, what did this allow us to do? Its key property is that it's red-light-activated. But it has extraordinarily
high photocurrents, more than 5 nanoamps per cell. Very often, of course,
that's actually too much. And so we can back off
on light power, which is what we ultimately
want, and give rise to currents that drive spiking
at extremely low light power densities.
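A rough, textbook-style calculation (assumed values, not measurements from the talk) shows why a multi-nanoamp photocurrent leaves so much headroom:

```python
# Ohm's-law estimate of the current needed to bring a neuron to spike threshold.
input_resistance_Mohm = 150.0    # assumed neuronal input resistance
depolarization_mV = 20.0         # assumed distance from rest to threshold

threshold_current_nA = depolarization_mV / input_resistance_Mohm  # mV / Mohm = nA
print(f"~{threshold_current_nA:.2f} nA to reach threshold (steady state)")

# With several nanoamps available at saturating light, light power can be
# reduced well below saturation and still drive reliable spiking per cell.
saturating_photocurrent_nA = 5.0
print(f"headroom: ~{saturating_photocurrent_nA / threshold_current_nA:.0f}x")
```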
And that lets us in turn control many cells, tens to hundreds. And I'll show you
how we're able to use this to control individually
specified cells, look at population dynamics
elicited in cortex, and look at behavior. ChRmine has-- although we don't
have a structure for ChRmine yet, it's got these very large
photocurrents in the red. But also, by
homology modeling, we think there are some
interesting features to its likely internal
structural design-- not yet proven. What's interesting is that
most of its predicted surface electrostatic potential
is localized, we think, toward the inner
and outer vestibules and not so much in the
channel lining itself, which may reduce the effective
electrostatic stickiness inside the channel
and allow higher ion fluxes while still
giving it selectivity by affecting the access
of ions to the vestibule. We are working hard on
getting the structure. We don't have that yet. But very large photocurrents,
requiring only very low light intensities-- and this allows us to do this
following sort of experiment where we can have an
awake, alert animal. We present visual stimuli
to it, for example, vertical or horizontal
drifting gratings. We can find all the cells that
respond to vertical stimuli, all the cells that respond
to horizontal stimuli, show that they're selective,
and pick those out in 3D across visual cortex. We can also identify
nonselective cells-- we call these the
random population-- to see that it
matters that we're picking out cells that are of a
particular orientation or not. And then we can
come in and control the cells of one orientation
or another, or random cells, both while imaging thousands
of cells across visual cortex, looking at the
elicited population dynamics, the internal
representations, if you will, of
the visual stimuli, and also, later, as I'll show
you, looking at behavior. What we first found
was something we didn't know would be the case. By stimulating a few cells, we didn't know what would happen
to the rest of cortex. Would there be no broad,
generalized response? If one were to stimulate 10,
or 20, or 100 individual cells, would those be, by and
large, mostly the only cells activated? Or would we recruit large
numbers of other cells? And if we recruited many other
cells, what cells would those be? What would be the patterns
that they gave rise to? Would it look like
something natural, naturalistic, as
if the animal were seeing the vertical
or horizontal stripes? Or would it be some-- would it be some other
aberrant pattern of activity? And so to do this, we look
at the population dynamics. It's very high-dimensional
in principle, but you can reduce this into
a lower-dimensional space with a principal
component analysis.
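A minimal sketch of that reduction, using hypothetical array names rather than the actual preprocessing pipeline, might look like this:

```python
import numpy as np
from sklearn.decomposition import PCA

# pop_activity: population activity over time, shape (n_timepoints, n_cells);
# simulated here only so the sketch runs.
rng = np.random.default_rng(1)
pop_activity = rng.normal(size=(200, 3000))

pca = PCA(n_components=3)
trajectory = pca.fit_transform(pop_activity)  # (n_timepoints, 3)

# 'trajectory' traces the population state through a low-dimensional space;
# trajectories for visual vs. targeted optogenetic stimulation can then be
# overlaid and compared for the two grating orientations.
print(trajectory.shape, pca.explained_variance_ratio_)
```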
And here are the trajectories that these thousands of cells in visual
cortex take when the animal is looking at
vertical or horizontal stimuli. Here's one mouse
and another mouse. And what we can see is this
is the natural response to visual stimuli. And this is the response
to optogenetic stimulation of just 20 or so
individually specified cells of the same orientation. And what you can see is that the
trajectories in this principal component space resemble
those that are seen during natural visual stimuli. And those are not seen with
random cell stimulation or no stimulation. And it's not just us
looking at this and saying, huh, those look sort of similar. You can train a classifier and
see that the classifier can automatically identify
which orientation the cells that you were stimulating belonged to, based on the population
dynamics of the response.
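In the same hedged spirit, a sketch of that decoding step: train a classifier on population responses to the two visual orientations, then ask what label it assigns to the optogenetically elicited responses (array names, sizes, and the classifier choice are illustrative, not the published analysis):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_cells = 80, 500

# Simulated stand-ins: per-trial population response vectors to vertical (0)
# and horizontal (1) stimuli, plus responses to stimulating the "vertical"
# ensemble optogenetically.
X_visual = rng.normal(size=(n_trials, n_cells))
y_visual = rng.integers(0, 2, n_trials)
X_opto_vertical = rng.normal(size=(20, n_cells))

clf = LogisticRegression(max_iter=1000)
print("cross-validated visual decoding:",
      cross_val_score(clf, X_visual, y_visual, cv=5).mean())

clf.fit(X_visual, y_visual)
# If the elicited dynamics resemble the natural ones, most of these labels
# should come out as the stimulated orientation (class 0 in this toy setup).
print("labels assigned to opto trials:", clf.predict(X_opto_vertical)[:10])
```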
So this was reassuring in many ways. It was nice to see that
population dynamics at this regional level
elicited by properly targeted optogenetics resembled
those that are given rise to by natural stimuli. It also was quite
interesting that stimulating just 20 or so cells-- and these are cells in untrained animals that haven't been worked hard with light patterns for long periods of time, which might have induced plasticity. These are in animals
that are not behaving. We're just looking at
the population responses. And stimulating
just 20 or so cells can give rise to this broad
recruitment of hundreds of neurons among the
thousands that we're imaging from in this
3-dimensional space of visual cortex, which,
by itself, was interesting. And in this paper, together
with Surya Ganguli, an outstanding computational
neuroscientist at Stanford, we've begun to explore
what this means that cortical circuits seem
to exist in this critically excitable regime. But of course, we also
did want to see if we could affect behavior as well. And so for these sorts of
experiments, we take animals, and we train them
to respond to one orientation of the visual
stimulus or another. And we can make
the job challenging by reducing the contrast,
for example, of the grating. And we can generate
psychometric curves. We train the animals
on high contrast, where they learn
the task very well and perform at high levels. You've seen this
d-prime measure before.
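For readers who have not seen it, d-prime is just the separation between hit and false-alarm rates expressed in z-units; a quick sketch with illustrative rates:

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate, eps=1e-3):
    """Signal-detection sensitivity: z(hit) - z(false alarm); rates are
    clipped away from 0 and 1 to keep the z-transform finite."""
    hit_rate = min(max(hit_rate, eps), 1 - eps)
    false_alarm_rate = min(max(false_alarm_rate, eps), 1 - eps)
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Illustrative values only: a well-trained animal at high contrast versus
# the same animal near chance at very low contrast.
print(d_prime(0.95, 0.05))   # ~3.3, high sensitivity
print(d_prime(0.55, 0.50))   # ~0.1, near chance
```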
But even after training, they can't do well at low contrast, at 2% contrast. And 10% is fairly
intermediate, and they can perform all right on that. We found a couple of things. First, by optogenetic
stimulation of a population of cells
that was concordant with a weak-contrast
visual stimulus, we could improve the animal's
behavioral performance. We could help it to detect,
significantly more reliably, what the correct orientation
was if we gave stimuli concordant with the orientation
of the visual stimulus. But then we even took away the
visual stimulus completely. And so in darkness,
we asked, just by stimulating a
few cells, can we get the animal to
respond as if it is seeing the visual stimulus? And the answer is it can. And what's more, we could
titrate down the cell number to remarkably low levels. And this is both in terms
of behavioral measures and in terms of looking
at the population dynamics, the classifier
automatically detecting and classifying the
nature of the stimulus. And you can see the number of
neurons stimulated here. We can actually drop
that even below 20, to just a handful of cells where
we can still detectably see both the correct behavior
and the population dynamics response. It looks like layer 5 cells are a little more potent than layer 2/3 cells
and that you can get down to fewer of them to
get a comparable level of behavioral response. So this, by itself, is
also quite interesting and leads to a lot of
interesting questions about how cortex is set up to
allow this sort of ignition, or critical dynamics,
to be present and to not cause any problems. Now this, of course,
was also interesting. Because we were seeing
naturalistic dynamics elicited by a few cells, this was
broad, and 3-dimensional, and covered most
of visual cortex. But of course, we would like
to know, even brain-wide, does properly
targeted optogenetics elicit naturalistic
brain-wide responses as well? And although we can't
see, in real time, the activity all across
the mammalian brain with optical tools, we
can get such measures with electrical tools. And this is work
led by Will Allen in the lab, who, as you
can tell, has many talents. And he led this
experiment where we used Neuropixels probes, which are very long-shank, high-density electrical recording devices, placed along different trajectories in different animals. In this experiment, there
was one Neuropixels probe per animal. But by using a
temporally precise task, we're able to
combine the results across many different animals. With known trajectories,
we clear the brains. We see where the trajectory
of the Neuropixels probe was, align that to the Allen Brain Atlas. And we can build up, in this
way, a brain-wide understanding of the populations of cells that
are active during behaviors.
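As a hedged sketch of how such recordings can be pooled: each unit sits at a known depth along a reconstructed probe trajectory, so it can be assigned an atlas region, and peri-event firing-rate histograms can then be grouped by region across animals. The region boundaries, names, and function names below are hypothetical placeholders, not the actual pipeline:

```python
import numpy as np

def assign_regions(unit_depths_um, boundaries_um, region_names):
    """Map unit depths along a reconstructed trajectory to atlas regions,
    given hypothetical region boundaries along that trajectory."""
    idx = np.searchsorted(boundaries_um, unit_depths_um)
    return [region_names[i] for i in idx]

boundaries = np.array([800.0, 2000.0, 3200.0])             # um along the probe
names = ["cortex", "hippocampus", "thalamus", "midbrain"]
depths = np.array([300.0, 1500.0, 2500.0, 3600.0])
print(assign_regions(depths, boundaries, names))

def peri_event_rate(spike_times_s, event_times_s, window_s=(-1.0, 2.0), bin_s=0.05):
    """Average firing rate around task events; the temporally precise task is
    what makes these histograms comparable across animals."""
    edges = np.arange(window_s[0], window_s[1] + bin_s, bin_s)
    counts = np.zeros(len(edges) - 1)
    for t in event_times_s:
        counts += np.histogram(spike_times_s - t, bins=edges)[0]
    return counts / (len(event_times_s) * bin_s)
```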
And as a first step, we picked probably the simplest possible behavior we could imagine, which is just an animal
licking for water when thirsty. And this is, by design,
the simplest possible task. Because we wanted to see,
across the whole brain, what would be the
representation of this task. And starting from the
simplest possible task was a good place to start. And in this go/no-go
task, the animal has learned that one
odor means that there will be water coming. Another odor means there
will not be water coming. There's an onset and an offset of the odor, and then an onset of reward, shown as three vertical lines. And here's just the behavior. The animals learn to
lick for the go odor and not for the no-go odor. And this is over many trials,
the animal eventually becoming sated and no longer licking,
even for the go odor. First question is, what
happens to my computer? Let's see. There we go. First question is what
happens across the brain without optogenetics? What is the brain-wide
representation? And an interesting finding-- these are all different
brain regions, color coded. Here, those three dotted lines indicate the different phases of the task. Red means more active, and
blue means less active. You can see there's recruitment
of virtually the whole brain by this very simple task. In fact, more than
half of all the neurons that we recorded from were
statistically modulated by the operation of this task.
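The talk does not specify how modulation was tested, but a common, simple approach is to compare each neuron's spike counts between baseline and task epochs across trials and correct for multiple comparisons; a hypothetical sketch:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
n_neurons, n_trials = 500, 100

# Simulated per-trial spike counts in a baseline window and a task window;
# half the simulated neurons are given a higher task-epoch rate.
baseline = rng.poisson(5.0, size=(n_neurons, n_trials))
task = rng.poisson(5.0, size=(n_neurons, n_trials))
task[: n_neurons // 2] = rng.poisson(8.0, size=(n_neurons // 2, n_trials))

p_values = np.array([mannwhitneyu(baseline[i], task[i]).pvalue
                     for i in range(n_neurons)])
modulated = p_values < 0.05 / n_neurons   # crude Bonferroni correction
print(f"{modulated.mean():.0%} of simulated neurons flagged as task-modulated")
```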
That was an interesting finding in itself, which has its own
implications, which we maybe can talk about later. But then the question was, what
happens if we optogenetically drive, in a properly
targeted way, a deep population
of thirst neurons? And here we targeted the same
pathway that we had identified using iC++, the engineered chloride-conducting channelrhodopsin, to implicate a particular population of thirst-eliciting neurons. And so targeting the subfornical
organ input to the MnPO thirst neurons, you can get the sort
of behavioral result: the animal licking to the go cue, then becoming sated. And then when you drive
the thirst neurons, this tiny population
deep in the brain, you can restore this triggered
survival drive behavior, licking for water. And so then the
question is, what's happening across the
brain in this setting? And the answer, remarkably,
is that it's very similar to the natural state. All across the brain, tens
of thousands of neurons-- here's the natural
licking for water state. Here's the sated state. And here's this
optogenetically induced state resembling the natural state
to quite a remarkable degree. So this is good news, of
course, for everybody studying optogenetics, that if
you target things right, you elicit naturalistic
dynamics both locally and across the entire brain. And also, it raises a lot
of interesting questions analogous to the ones I
mentioned in visual cortex. What does it mean for the
controllability of the brain? How is it structured to allow
small populations of cells to elicit such broad responses? This clearly is a
very elegant design, one that can probably go wrong, and one that is beautiful when it goes right. And I'll wrap up there. But I will show this
last slide, which, by chance, was the
first slide that Peter showed at the very beginning. I think it's a useful
slide to reflect on. Because all the
exciting advances that we've made in
understanding the brain, and mammalian behavior, and
behavior of many species and circuits across
biology, in many ways, are deeply rooted in botany
and in basic science of studying plants. And so it's a nice
story, I think, for us to keep in mind,
thinking about the value, and the importance, and the
need to support basic science. And I'll take a
moment at the end to thank all my amazingly
talented students and postdoctoral fellows. I mentioned many of
them along the way. The crystal structure work I mentioned-- the work that came out very recently-- was led by Yun Kim, a graduate student in my lab. I think I forgot to mention him earlier on-- extremely talented. He might be coming here as well. We'll see. But many other very talented
students and postdocs along the way-- the work that he did
was also important for identification of
ChRmine and for-- the paper on eliciting the visual
responses in mice was led by Jim Marshel, along with Tim Machado and Sean Quirin in the lab. And all my many collaborators
around the world over the years-- first and foremost,
Peter Hegemann, but many others as well-- and again, the amazing, talented
people who've worked with me back at Stanford, Ed, and
Feng, and many others, it's been a wonderful time, a
lot of astonishing progress. And it's been a pleasure
to share it with you. So thank you. [APPLAUSE] Wow. OK, thank you very, very much-- an incredible afternoon
of great science. I want to thank Bernardo
for moderating this. I want to recognize, again, the
members of the Warren Alpert Foundation. I had forgotten to mention
earlier that our former dean, Joe Martin, is here, also a member of the foundation board. And finally, to
congratulate the winners on this spectacular
display of science-- thank you very much. And I look forward to seeing
you all here next year again. [APPLAUSE] [BACKGROUND CONVERSATION]