[MUSIC PLAYING] PAUL DAVIES: I'm
probably the only one in this room old
enough to remember that when you wanted
to use a computer, it meant a 15-minute
walk across town. In my case, from University
College London, where I was a student, to
somewhere near Senate House, where this entire building housed, I think, an IBM 360 or something like that. And you'd go with a pocket
full of punch cards, and hand them across a desk,
and sign all sorts of forms, and go back the next day
to see what had happened. Now, the astonishing
increase in processing power that has occurred
during my lifetime, encapsulated in the
famous Moore's law, is due almost entirely to
the properties of quantum mechanics, of quantum systems, whether it's electrons or photons. And the architect of quantum mechanics was one Erwin Schrodinger,
who put together what we would now understand
as the fundamentals of this subject
in the late 1920s. And at a stroke,
quantum mechanics explained the nature of matter
all the way from atomic nuclei right up to the
structure of stars. It's the most successful
scientific theory we have. So it's not just in
the computing industry that it's had an impact. It's really right
across technology. And more than that,
quantum mechanics provides us with
our deepest account of the nature of reality. It's not just the
nature of matter, but the nature of
reality as well. And it's a subject
that continues to intrigue and baffle
mainstream scientists. Schrodinger was a giant
of theoretical physics, and this was a very
difficult subject, and he is rightly given the
credit along with Heisenberg and Dirac for founding it. Now, fast forward about 10
years, and the scientific scene took a turn for the worse with
the rise of Nazism in Europe, and many people fled, some to
work on the Allied war effort. Schrodinger, who was
Austrian, also fled. But instead of joining
the Allied war effort, he settled in Dublin. Ireland was neutral
during World War II, and he made a home there with
his wife and his mistress, living under the same roof. Apparently, that didn't prevent him from having many other affairs. And he turned his
attention, I suppose, having sort of got out of the
mainstream, to whatever it was that interested him. And he decided that, having
cracked the problem of matter, maybe he could crack
the problem of life. And he delivered a series
of lectures in 1943, which became a very famous book
published in 1944 by Cambridge University Press
called "What is Life?", asking that basic question. And you might say, well, what
is so special about life? I think all of us
have that impression that living organisms
are in a class apart, that they are unlike any
other complex systems. They have rather
extraordinary properties, and indeed rather
baffling properties. And what Schrodinger
was interested to know was whether life,
living organisms, could be explained by physics. And then the question
was, could it be explained by known physics,
or did it need any new physics? Might there be some
sort of laws of life? And Schrodinger was
open minded about this. He said that one
must be prepared to find a new type of
physical law prevailing in it. So notice that it's not
just a new physical law, but a new kind of law. For many, many years,
I was skeptical that we would need new
physics to explain life. I just thought our difficulty
in understanding living systems was due to the fundamental
complexity of living organisms. But over the last
few years, I've become convinced that we
do need some new physics. And so we might think of biology
as the next great frontier of physics. I think there's new physics
lurking in biology which could have sweeping implications
across the whole of science. Now, a contemporary of
Schrodinger, Max Delbruck, I think, expressed very
nicely the puzzlement that we all feel when thinking
about living organisms. Let me just quote
what he said here. "The curiosity remains
to grasp more clearly how the same matter, which
in physics and chemistry displays orderly and
reproducible and relatively simple properties,
arranges itself in the most astounding
fashions as soon as it is drawn into the orbit
of living organisms. The closer one looks at
these performances of matter in living organisms, the more
impressive the show becomes. The meanest living cell
becomes a magic puzzle box, full of elaborate and
changing molecules." And to condense this message,
what Delbruck is saying is that when we're dealing
at the level of atoms, it's just known physics. But at the level of cells,
it seems like magic. And what is the
source of that magic? What I want to tell
you today, briefly, is that in that wall of secrecy or mystery surrounding the nature of living systems there is, I think, a chink, and we can begin
the answer lies to explain the astonishing
properties of living things. Now, let me just enumerate
some of those properties and their implications. Organisms seem to have a
sort of inner motive power. They seem to be
self-propelled, something that goes right back to
Aristotle wondering about that. They operate as if they
have a project, or a goal, or a purpose. Obviously, human beings do,
but even a humble bacterium pokes around,
seemingly knowingly, looking for food or something. They seem to know
what they're doing. Unlike most systems, they
create order out of chaos. There is a natural tendency-- I'll talk about it in a
moment, the second law of thermodynamics-- for systems
to degenerate and become more and more disorderly. It's easier to break
it than make it. Anyone here with
teenage children will know exactly
what I mean when you inspect their bedrooms. Living systems are
self-organizing, and they evolve in
an open-ended way. You cannot really
predict in advance. If you got in a time machine,
went back three billion years, and there were just these
sort of boring bacteria, if you tried to predict the
future of the biosphere, there would be trees,
and bees, and cacti, and so on, you couldn't do it. It's an open-ended and seemingly
boundless possibility space. Living organisms, I
suppose, their most characteristic feature,
at least the one that intrigues me, they achieve
seemingly impossible states. That is, they achieve
states of matter that could not arise
in any other way from non-living systems. And I give some
examples later on. And this raises
a whole question, when I decided I would write a
book about what is life, about the nature of life, a
lot of my biology friends were a bit baffled by that. You know, "What's the problem? We study life." And of course, biologists
study what life does, not what life is. And the big issue
for a physicist is that all this wonderful
life, which once it gets going, keeps going, it all
works very well. It perpetuates itself. So once the system is up and
running, it's fine, but how did it get started
in the first place? And that's a really
important question, because everybody, I
think, wants to know, are we alone in the universe? And the answer hinges
on just how likely it is that life starts
up from non-life. What is it? What is the process that, if
you like, animates matter? What is the process that makes
normal matter living matter? We don't know, but
if this is something which is incredibly unlikely,
if the chemical pathway leading from a mishmash of chemicals
to the first living thing was very long and
convoluted, it may have happened only once in the
entire observable universe. In which case, we are it. There is no life anywhere else. Alternatively, if this process, this pathway, is built into the fundamental nature of the universe, then we could expect that life will arise wherever there are opportunities, wherever there are Earth-like planets. The universe will
be teeming with it. So this is a really
important question. Now, we're hampered in this
entire discussion about life, because we don't
have a life meter. This is what it might
look like, if you had one. What we would ideally like to
be able to do, particularly in searching for life elsewhere,
is to send a life meter off to Mars, or Titan, or wherever
you happen to think life may be found, and it will
come back, hopefully with a reading showing
yes, there is life there. But we also feel
that the transition from non-life to life isn't
going to be an abrupt thing. It's not just some amazing,
gigantic chemical reaction that just sort of happened one day. There's going to
be this pathway. We don't know whether it's a lot
of jumps or just a slow rise. We don't know what it
was, because we have no idea what that pathway was. But wouldn't it be nice if
we could go to somewhere like Titan, sample the
atmosphere, which we know is full of organics, and this
life meter would say, well, nice try, Titan. You've been there four and a half billion years trying to cook up life, didn't quite get there, but you did get 83.7% of the way there. It would come back with "that was almost life." Would we know almost
life if we saw it? We know life when we
see it, but can we tell the difference between
something that is living, something that's almost
living, and something that was living and no longer is? Now, this seems like a very
sort of academic exercise, and it's the type
of question we love to discuss in the Beyond Center
at Arizona State University. And we sit around, and we argue
about this, and try to define this. And we think we've got
our own approach, which I'm going to talk about
in a moment, based on the informational
properties of these systems. But there's a very
practical sense in which it would be nice
to know the answer: how do you build a life meter? And we had a conference
a couple of years ago, and present at the meeting was
Chris McKay from NASA Ames. That's an astrobiology lab. And we were trying
to convince him that a true definition of life
was a software definition, not a hardware definition. A definition based on the
patterns and organization of information should be
substrate independent. What we would like is a
universal definition of life that would apply whatever
the chemical basis. And we've got some ideas about that, though they're a little bit abstract, and so on. Chris's response was,
well, this is all very exciting and
well and good, but I have to tell NASA headquarters
by April 24th what we're going to put on this spacecraft. And the spacecraft is going
to go to a moon of Saturn called Enceladus. Some people say "enchilada." And you see that from
the surface of Enceladus this plume of
material coming out. Now, Enceladus has an icy
crust but a liquid interior. And the Cassini spacecraft
measured organic molecules coming out in that gas. So as Chris says, it's
almost like Enceladus has put up a sign saying
"life, free samples." And so what they'd like to
do is fly a spacecraft-- there we go-- through that plume, and
put a life meter on board, and tell us, is there
life there or not? But as soon as you start thinking about exactly what you would put on that spacecraft, you hit what I regard as the
fundamental problem of trying to understand what is life. If you ask a physicist
to describe life, then you're going to have-- or
a chemist, you're going to have a description in terms of
forces, and molecular shapes, and affinities, and binding
energies, and entropy, and energy, and all
that fun stuff that you learn in physics degrees. If you ask a biologist, you
get a very different narrative. So biologists will describe life
in terms of coded information, instructions, editing,
transcription, translation, signals, all of those
sorts of things, and to summarize that
in terms of information. So physicists talk in
terms of hardware stuff, biologists talk in terms
of software or information. And very much in the news at
the moment is gene editing. So cells will
edit their own genes, but now we can do
it artificially. We can intercede. And so we have
two very different conceptual descriptions
of the same phenomenon. And you might think,
well, that's a problem. But one of my scientific
mentors, John Wheeler, was fond of saying that
revolutions in science owe more to the clash of ideas
than the steady accumulation of facts. And so when you see something
like this, twin narratives but completely different
conceptual basis, that's where progress in
science is likely to come. Now, if there are any
doubters in the room that think information is not
terribly relevant to biology, let me just run through
some obvious examples. The DNA in your bodies is packed
full of coded information. It's not just information. It's encoded. So the Book of Life is
written in this four-letter alphabet of A, G, C, T. Genes don't act in isolation. They tend to form networks,
sometimes of great complexity. They switch each
other on and off, and information swirls
around these networks. We've been studying that,
the patterns of information, the way it's stored, and how
the network architecture, its topology, the wiring
diagram, if you like, affects the behavior
of that information. And what is very
clear from people who study biological networks
is that if you apply things like scaling laws,
they're very different from random networks. In other words, evolution
has honed the topology and the architecture
of these networks to have certain properties. And when we think
of life, there is a sort of reductionist
tendency to attribute biological functions
to specific genes or maybe collections of
genes, but we should really think in these network terms. And if we want to either
reconfigure life or cure some disease or
something, we really need to think at that network
level, the informational level. It doesn't stop there. Individual cells
signal each other. They transfer information. Even bacteria can form biofilms. They appear to be
very clever when they're acting collectively. But of course, once you get up
to the level of social insects, then collective decision
making via information transfer becomes a really
important subject. We have a group, a
big group at ASU, working on how
ants get together. I mean, that looks like
they're having a conference, but they do have a very
complex repertoire of behavior. But individually, each
ant is pretty stupid, but collectively,
they're pretty smart. So birds in flocks,
that's another example. And anyone who's
read today's papers will probably know that bees,
which are also very clever, can not only play football,
but apparently they can do arithmetic. So all of this has to do
with information exchange and collective decision making. Probably the most
dramatic example of information deployed
in the service of life is the development of the embryo
from a single-celled zygote. The exquisite choreography
that is involved in getting all the
right bits in the right places at the right time. And it's still not
fully understood, but Alan Turing, as
a matter of fact, had a great interest
in this, and developed a mathematical theory
about how morphogens, these would be at that
time unknown chemicals, might diffuse through a system
and sort of set up a three dimensional grid against
which, as we would now say in modern language, the
genes would express themselves in the appropriate way. What this is really saying is that there is a dynamic chemical network and a dynamic information network, that these networks are coupled together through complex feedback loops, and that the whole thing operates to give this astonishing developmental pathway.
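Turing's morphogen idea survives today as reaction-diffusion models. As a rough sketch of the principle (a generic Gray-Scott model with textbook pattern-forming parameters, not anything measured in a real embryo), two diffusing, reacting chemicals can organize themselves into a stable spatial pattern that genes could, in principle, read as positional information:

```python
import numpy as np

# Minimal 1-D Gray-Scott reaction-diffusion model, a descendant of
# Turing's morphogen proposal. Two chemicals U and V diffuse at
# different rates and react; starting from a small seed, a stable
# spatial pattern emerges. Parameters are generic textbook values.

n, steps = 200, 10000
Du, Dv, f, k = 0.16, 0.08, 0.035, 0.060

u = np.ones(n)
v = np.zeros(n)
u[n // 2 - 5 : n // 2 + 5] = 0.5  # perturb the middle of the domain
v[n // 2 - 5 : n // 2 + 5] = 0.5

def laplacian(a):
    # Discrete diffusion with periodic boundary conditions.
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + f * (1 - u)
    v += Dv * laplacian(v) + uvv - (f + k) * v

# Crude ASCII plot: bands of high V mark the self-organized pattern.
print("".join("#" if x > 0.2 else "." for x in v))
```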
And then, last but probably not least, is what's between our ears. This is perhaps the best
example of the power of information in biology. Once it leads to
behavior, an understanding at the level of
human beings, then it becomes truly spectacular. One last point about this,
just last week, my colleagues published a study in which they
looked at over 28,000 genomes. And this is a
planet-wide network of organized information
that they've mapped out here. I won't go into the
details, except to say that the biosphere was the
original worldwide web. And it's a web of
information exchange. Now, this idea
that life is really about information, and logic,
and information processing is certainly not my idea. It didn't start with me. There was a very
visionary paper in Nature by Paul Nurse, the former
president of the Royal Society, called "Life, Logic,
and Information." And again, I'll just
read what he has to say. "We need to describe the
molecular interactions and biochemical
transformations that take place in living
organisms, and then translate these descriptions
into the logic circuits that reveal how information
is managed. This analysis should
not be confined to the flow of information
from gene to protein, but should also be applied
to all functions operating in cells and organisms." And so what he's saying here
is that we should perhaps think of life as a little
bit like electronics, where you have these modules,
each of which has a function. These may just be
logical functions. There are many examples of
logic gates in bacteria, for example, in gene regulatory
circuits. They can be wired together, and they operate just like
logic gates in computers. And so this enables
even simple organisms to carry out really very
sophisticated computations.
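As a rough illustration of what such a gate looks like (a toy model with invented names and thresholds, not data from any real organism), a gene that needs one transcription factor present and another absent behaves like an AND gate:

```python
# Toy AND-like gate in a gene regulatory circuit: the gene is "on"
# only when an activator is present AND a repressor is absent.
# Names and thresholds are illustrative inventions.

def hill(x, k=1.0, n=2.0):
    """Hill function: the smooth, switch-like response typical of
    transcription-factor binding."""
    return x ** n / (k ** n + x ** n)

def gene_output(activator, repressor):
    # Activator bound (hill -> 1) AND repressor unbound (1 - hill -> 1);
    # multiplying the two responses gives AND-like behavior.
    return hill(activator) * (1.0 - hill(repressor))

for a in (0.1, 5.0):
    for r in (0.1, 5.0):
        state = "ON" if gene_output(a, r) > 0.5 else "OFF"
        print(f"activator={a:<4} repressor={r:<4} gene={state}")
```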
Now, a lot of this was presaged by the co-inventor of the computer, Von Neumann. So Turing came up
originally with the idea of a universal computer. So that is a machine that
could compute anything or could output anything
that was in principle computable, given long enough. And Von Neumann
drew a close analogy with what he called a
universal constructor. A universal constructor
would be a machine that could take parts, if it's
provided with enough parts, and it could assemble
them into anything that it was programmed
to do, including itself. It could make a copy of itself. So he was interested
in the concept of a self-reproducing machine. Is it possible,
he said, to build a machine that could construct
any physical system, including itself? And in carrying that
analysis through, he presaged the
logical architecture that we now know
life on Earth uses, that life is really an
information replication and management system. And in particular, he
foresaw the dual role of DNA. When I say dual role,
there are two things that can happen to DNA. So it sits there, this
famous double helix, and it's got instructions
in the form of genes. These can be read out. So it can be, as it were, a
database for the life project. It could be read out. And in that sense,
it's in active mode. It's sending that
information out. But when the cell comes
to divide, that stops. Actually, it doesn't stop. The two things can go on
together, but never mind. It flips to the other function. Then it's no longer information. It's not a database. It is then just a physical
object, a physical structure that gets replicated. So two quite separate
things happen to DNA. One is, it gets read out. The other is, it
gets replicated. Nobody quite knows exactly how
cells toggle between these two different functions of DNA. But in his notion of a
self-reproducing machine, Von Neumann, he spoke of something
called a supervisory unit, which would basically say,
"OK, make this, make this, make this. Then stop making anything. We'll just copy the
instruction set, and take that instruction set,
and put it in the progeny." And to have what life does,
true self-replication, DNA has to have that dual
hardware/software function. Now I'm software. Now I'm hardware. It's a really important insight
into the fundamental logic that runs life as we know it. And Von Neumann was
absolutely right that this is the way that
life does organize itself.
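The logic is easy to caricature in code. In this sketch (an invented mini-language, nothing like real biochemistry), the same tape is first interpreted as instructions and then blindly copied as an object, which is exactly the dual role in question:

```python
# Toy version of Von Neumann's constructor logic and the dual role
# of the instruction tape (or DNA). The "parts" are made up.

TAPE = "arm;arm;sensor"  # the machine's instruction set, its "genome"

def construct(tape):
    """Software mode: the tape is read out and interpreted."""
    return ["assembled-" + part for part in tape.split(";")]

def copy_tape(tape):
    """Hardware mode: the tape is treated as a physical object and
    copied blindly, without interpreting what it says."""
    return str(tape)

def reproduce(tape):
    # The supervisory unit: build the offspring's body first, then
    # stop constructing and copy the instructions into the offspring.
    body = construct(tape)
    inherited = copy_tape(tape)
    return body, inherited

body, child_tape = reproduce(TAPE)
print(body)                # ['assembled-arm', 'assembled-arm', 'assembled-sensor']
print(child_tape == TAPE)  # True: the offspring can reproduce in turn
```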
Well, now let me move on, because I don't have too much longer. All of this raises the question
of what is information. I've been using that in a
very sort of colloquial sense, but I'd like to tell you
a little bit about how we need to think of information
not so much as something to talk about, but as a
physical variable that can enter into the laws of physics. Because, as I provocatively said at the outset, I agree with
Schrodinger, that we need a new type of physical law. And I think it's a
physical law that will embed information
in a fundamental way in the laws of physics. These will be new
laws of physics, but it will be
physics, not magic. To do that, you have to
quantify information. I'm sure most of
the people here know that it was Claude Shannon
who developed information theory in the late 1940s and
defined the binary digit, or bit. If you don't know this
story, one bit of information is what you get
if you toss a coin and then see whether it
came down heads or tails. But Shannon defined information
in terms of reduction in uncertainty. You've got a 50-50
chance, heads or tails. When you look and see,
that uncertainty goes away. And that's just one
bit of uncertainty.
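In symbols, Shannon's measure of the uncertainty removed is H = -Σ p log₂ p. A quick sketch of the arithmetic:

```python
import math

# Shannon entropy: the information gained from an observation equals
# the uncertainty it removes, H = -sum(p * log2(p)).

def entropy_bits(probs):
    """Entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))  # fair coin toss: exactly 1.0 bit
print(entropy_bits([0.9, 0.1]))  # biased coin: about 0.47 bits
print(entropy_bits([0.25] * 4))  # one of four equal outcomes: 2.0 bits
```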
So that is all standard stuff these days. But what I'm going to ask
is, can information, which seems to be a sort
of abstract concept, can it actually
have physical clout? There's an analogy
here with energy. So as you're all
aware, information is something that can be
instantiated in a wide variety of physical systems. If you want to copy a file from
your computer to a flash drive, you can do that. You keep the information. You can then send it down
optical fiber as photons. There's any number
of different ways you could store
that information. The information is conserved. It's independent
of the substrate. Just like energy, you can
convert electrical energy into mechanical energy,
or gravitational energy, or chemical energy. The energy stays the same,
but its manifestation or instantiation can be passed
from one thing to another. So just as energy has a sort
of independent existence but is always tied to
matter, so it is with information. We talk about it
as if it's a thing with an independent
existence, but it is always tied to matter. So to fully understand
how information operates in living systems,
we have to understand how information
couples to matter, not just as a way of speaking,
but in a law-like manner. And it turns out that the
answer, the beginnings of an answer, were already
there in the 19th century with the work of James Clerk
Maxwell, and Maxwell's demon. Now, let me just
explain this concept. So Maxwell, who unified
electricity and magnetism, also made seminal contributions
to the theory of heat. And in a letter to
a friend, I mean, this was what physicists
call a thought experiment, he envisaged a tiny being,
soon to be called a demon, who could perceive individual
molecules in their paths, and then sort them into
fast and slow categories. So what you see there
is a shutter mechanism, and the wily demon
will open the shutter to let fast moving particles
go from right to left and slow moving particles go in
the other direction. And Maxwell argued
that that could be done without any
expenditure of energy. In principle, it
was just a matter of letting the
molecules move through. And if you do that,
because molecular speed is a measure of
temperature, you end up with the left side hot and the right side cold, and then an engineer
could build you a heat engine that could
do useful work to run off that temperature gradient.
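The trick is easy to see stripped to its logic. Here is a cartoon simulation (molecules are just numbers standing for speeds and the demon is an if-statement; this is not real gas dynamics):

```python
import random
import statistics

# A toy Maxwell's demon. Each "molecule" is a speed drawn from one
# common distribution. The demon measures a molecule arriving at the
# trapdoor and opens it only for fast ones going right-to-left and
# slow ones going left-to-right. No work is done on the molecules;
# the demon acts purely on information.

random.seed(1)
CUTOFF = 1.0  # the demon's speed threshold (arbitrary units)

left = [random.expovariate(1.0) for _ in range(5000)]
right = [random.expovariate(1.0) for _ in range(5000)]

for _ in range(200000):
    if random.random() < 0.5 and right:   # a molecule arrives from the right
        i = random.randrange(len(right))
        if right[i] > CUTOFF:             # fast: admit it to the left
            left.append(right.pop(i))
    elif left:                            # a molecule arrives from the left
        i = random.randrange(len(left))
        if left[i] <= CUTOFF:             # slow: admit it to the right
            right.append(left.pop(i))

# Mean speed stands in for temperature: the left ends up "hot" and the
# right "cold", with no energy expended, only decisions made.
print(f"left  mean speed: {statistics.mean(left):.2f}")
print(f"right mean speed: {statistics.mean(right):.2f}")
```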
So Maxwell's demon was a device for converting information about the molecules into work, heat into work
without any further ado. And that's in flagrant
violation of the Second Law of Thermodynamics, the
most fundamental law of the universe. And that's the law that
we think of when we say heat goes from hot to cold. For example, if you
put a snow person, I guess we should call
it, next to a fire, the snow person melts
and the fire loses heat. It doesn't go the other way. You don't find that there's
more and more snow on the right and the flames get
hotter and hotter. So that's very familiar. Nobody really would contest
that, except some of you might be thinking, well, hold
on, what about my refrigerator? Doesn't that take heat
from the cold interior and put it into
the warm kitchen? And the answer is
yes, and you pay an electricity bill for that. In other words, given
energy, yes, you can make heat flow
uphill, or backwards, or reverse the arrow of time. However you want to think of it. But Maxwell's demon, it's not
plugged into the national grid. Maxwell's demon
does it for nothing. But not quite for nothing,
because Maxwell's demon is using information to gain
a thermodynamic advantage. And that says that
information is a type of fuel. And sure enough, it
is a type of fuel. And in the last
three or four years, there has been a flood
of papers and experiments making what are called
information engines. These are engines that run
on information power alone. My favorite one is in Finland. This is an information-powered
refrigerator. Yeah, you can actually do this. It's got an efficiency
of 75% at converting bits into work, or into energy. And it's not really going to
spark a kitchen revolution in the near future, because
this is nanotechnology. This is on a nanoscale. But nevertheless, it establishes
the important principle that Maxwell was right. So this is a Maxwell demon. There's one just a couple
of months ago reported in Korea that has more than
98% conversion efficiency of information into work. But it was a couple
of giants of the field of the theory of
computation working at IBM, Rolf Landauer,
and then Charles Bennett, who really put all this
together about 20 years ago. They were interested in
the fundamental limits of computation. What I often say to people
is that this machine here is a laptop, but you know, I
very rarely put it on my lap. And I think these days,
people tend to not do that. But if you do put it on
your lap, it gets hot. And that heat is wasted. In fact, I read somewhere
that the entire heat output from the world's
computing industry is more than the power
requirements of Denmark. And increasingly, a lot of
that is going to Bitcoin. Bitcoin is costing
staggering amounts of energy. Now, you know, why do you
expend energy like that? And so what Landauer
was interested in is, how do the laws of computing
and the laws of thermodynamics interconnect? And they do, because every time
a bit is flipped or erased, more to the point, you
generate a little bit of heat.
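The number itself is worth seeing. Landauer's bound, the standard result behind this remark, says that erasing one bit at temperature $T$ must release at least

$$Q_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times 0.693 \approx 2.9 \times 10^{-21}\,\mathrm{J}$$

per bit at room temperature, vastly less than what today's transistors actually dissipate per operation.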
And that could be reduced enormously. And I just ask you to think that
your brain is like a megawatt supercomputer, but it runs on
the equivalent of a light bulb, a small light bulb. So it's possible to greatly
reduce the amount of heat which is expended. Landauer, and later
Charles Bennett, were interested in whether it could be reduced to zero. Could you have genuinely reversible computation? And all that's in the book. I won't get into it here; I don't have enough time. But what they
identified, at least what Charles Bennett
identified, is that you're not getting
something for nothing here. Maxwell's demon looks like it's
some sort of perpetual motion machine. It's really not, because
you get a little bit. You can always grab a little
bit of thermodynamic advantage. But if you wanted
something for your kitchen, you'd have to go on doing it
again, and again, and again. You'd have to have a cycle. And in order to operate a
Maxwell demon in a cycle, the poor thing gets its brain
clogged up with information, because remember, it's got
to observe, store, act, and then that's in its register. And that register will
fill up, and fill up. And so at some stage, it
has to be cleared out, brainwashed, reset,
and started again. And that act of
resetting the register generates as much entropy
as you have gained. So overall, in the universe,
the Second Law is still obeyed. But nevertheless, at the
level of nanotechnology, you can gain an advantage. And here, as a matter
of interest for the more mathematically inclined, is the
Second Law of Thermodynamics that explicitly
includes information. So if you put an information term, measured in bits, into
that equation, the books balance.
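The slide itself isn't reproduced in this transcript, but the information-augmented Second Law being described is usually written, following Sagawa and Ueda, as

$$\langle W_{\mathrm{ext}} \rangle \;\le\; -\Delta F \;+\; k_B T \ln 2 \cdot I,$$

where $F$ is the free energy and $I$ is the information acquired, in bits: a demon can extract at most $k_B T \ln 2$ of extra work per bit it holds, and resetting its register later costs at least that much, which is how the books balance.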
Now, you might be thinking, well, this is all well and good
for nanotechnologies. What's it got to do with life? And the answer is, well, life
is all about nanotechnology; we're full of
molecular machines. Many of them are Maxwell
demons, chattering away, doing the business of life. There are many examples. You can read about
them in the book, but I'm just going
to give you one to give you the flavor of this,
because I need to wind this up in a moment. What's going on in
your heads, as I hope you're paying attention,
is that your neurons are signaling each other. The axons in your brain are a little bit like electrical wires. But it's not like a flow of
electrons through the brain. What happens is that
there's like a wave of polarization,
electric polarization across the membrane of the axon. And there are these holes in
the membrane that are gated just like Maxwell's demon. They can be opened and
closed, and the brain does open and close them. It gets information from
the neighboring electrical pattern, makes a decision to open or close them, and
lets through sodium ions in one direction and potassium
ions in the other direction with great fidelity and
almost no energy expenditure.
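For scale (standard textbook numbers, not figures from the talk): the resting voltage such a gated membrane maintains follows the Nernst relation,

$$E = \frac{k_B T}{q} \ln \frac{[\mathrm{K}^+]_{\mathrm{out}}}{[\mathrm{K}^+]_{\mathrm{in}}} \approx 26\,\mathrm{mV} \times \ln\frac{5}{140} \approx -87\,\mathrm{mV},$$

about a tenth of a volt held across a membrane only a few nanometers thick, sustained by sorting ions rather than by burning much energy.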
Which is why all this can happen with just a light bulb. And so that's one
great example of life using a Maxwell
demon type of device, in this case in our heads. I want to wrap this up by talking
about how did all this stuff happen in the first place? Well, Darwin famously gave
us a wonderful account of how life had evolved over
billions of years on Earth, but he refused to be drawn on the origin of life, the problem of life's origin. Mere rubbish, he said; thinking of this, one might as well think of the origin of matter. When I was a
student, the feeling was very much
summed up by Crick, "life seems almost
a miracle," he said. So many other conditions
for it to get going. This didn't stop some people
like Stanley Miller attempting to cook up life in the
lab by, in this case, putting some common gases
and sparking electricity through them. This whole prebiotic
synthesis enterprise continues to this day. They've got a bit further,
but not very much. And I think there's an
absolutely fundamental reason why we're not going to
solve the secret of life by mixing stuff up
in a chemistry lab. And the answer I've
stated all along is that life's
distinguishing feature is the way it organizes
and manages information. And no greater example can
I give than the genetic code itself. The information on your
DNA is coded information. It's got to be decoded
before it means anything, before it has biological
functionality. And of course, what we'd like
to know is how did this come to exist in the first place? From some sort of molecular
milieu and random forces, how did a genetic code arise? How can molecules write code? It's very deeply baffling. I think we're a long way
from understanding that. But to truly finish,
I want to tell you that it goes much beyond that. And so I'm going to finish on
a sort of lighthearted note, but it's actually very profound. Which is that information in
biology goes beyond the gene. There's a whole field
called epigenetics which recognizes that. But the particular epigeneticist
I like best is Mike Levin at Tufts University. He's got the Allen
Discovery Center there, where he likes to work with
these worms called planaria. Teachers love these, because
they have a head and a tail, even have a little
brain apparently. You see the eyes at one
end, tail at the other. The great thing is, if
you chop them in half, then the tail makes a head
and the head makes a tail. And so children
love that, you see. There we go. You can get two of them. And you can go on doing this. You get more, and
more, and more of them. And I asked Mike, do
they ever have sex? And he said, well, they do, but
they prefer to be chopped up. Now, what Mike can do is-- what is he's discovered is
that when you chop them, chop through them, the
electric polarization across the membrane,
that's what I was talking about with the axons-- all cells have that. They all have a potential
difference across the membrane. They pump protons
out to keep that-- that gets altered
around the wound. So there's an electrical
pattern there. Now, he can muck around
with that pattern with drugs that sort of alter
the ionic concentration and so on. And when he fiddles
around like that, he can end up making two-headed
and two-tailed worms. And again, I asked him, are the
two-tailed ones really stupid? And he said it doesn't seem
to make a lot of difference whether they have a head or not. But, and this is the
mind-blowing thing, if he chops up a two-headed worm into halves, you might think that would restore the original phenotype. But no, it makes two two-headed worms, and the same thing with the two-tailed worms. The point is, they've
got identical DNA. So a visitor from
Mars might think these are different species,
but they've got identical DNA. So this is epigenetic
inheritance, it's often called. So this is a way
that the information about the phenotype,
about the physical form, is propagated from one to
the next, not in the genes. And we don't know
how it's propagated. We're working with him
to try to figure out where is that information being
stored and processed and passed on. It raises this
entire question about electrically encoded software
and bioelectric memory, and where does it come from. A postscript to this is
that he took a lot of-- and his colleagues, he has
a large research group, took a lot of these worms,
chopped the middles out and sent all these middles
off to the space station. There it is. And when the middles came
back, about 15% of them had two heads, you see. So they basically lost
the head down on Earth, but came back with two. And this was much publicized
in the press a couple of years ago. Here in Britain,
it was described as two-headed flatworms stun
scientists, boffins baffled. Mike Levin has no
idea what a boffin is. I had to explain all this. But whenever boffins
get mentioned, they're always baffled. So this takes us beyond the
gene to epigenetic control. Mechanical, electrical,
and gravitational effects can all determine what genes
are switched on and off, and therefore
informational patterns. And that leads us
to questions like, is there an electrical
code to go alongside it? Can we use this
electrical leverage to cure things like
cancer, or birth defects, or repair tissues? We came into this-- or
I came into this lecture with quantum mechanics. And I would like to leave
it with quantum mechanics, because I think that the
true answer to finding how information couples to
matter in biological systems will probably come
at the same interface where we're into large molecules
that have quantum effects. But some people suspect
that these quantum effects will become modified when
they're sufficiently large. And there's an entire field
of quantum biology struggling to get out, where people are
conjecturing that if life is so clever at
manipulating bits, can it manipulate
qubits as well? And of course,
quantum computing, Google is invested in that. This is a billion dollar
research, multibillion dollar research project
around the world, to build a quantum computer. And somehow I feel that
if quantum computing is achievable, which
surprised me a great deal, if that was the first
human technology that had never been found in nature. Everything else we've invented,
nature got there first. Is quantum computing
to be the exception? I somehow don't think
so, and I somehow think that quantum biology
and the convergence of nanotechnology, chemistry,
physics, information theory, in that realm, which
still lacks a name, that's going to be the next
great frontier of physics. And that's my last
word on the subject. Thank you. [APPLAUSE] AUDIENCE: You were talking
about information as fuel. Does that imply that
in Maxwell's demon information is being destroyed? PAUL DAVIES: Well, I did
explain that step but perhaps rather quickly, that when
the register is cleared to repeat the
cycle, that's where the information is destroyed
and the entropy of the system rises. So it's limited to basically
the storage capacity of the demon's register. And so this entire field is
still not absolutely completely worked out to
everybody's satisfaction. People say, well,
supposing you had an infinite supply of
demons, or supposing you stored that information
not in any energy landscape but some
other physical variable? Might there be some way
of getting around this? It's all actively
under investigation, but the key point is
that the information is used to gain a thermodynamic
advantage to convert heat into work, but in
a small amount, and that if you go on
doing that cumulatively, sooner or later,
you have to erase the information in
the register, and then you get back the entropy, you
get an entropy disadvantage that compensates for the
gain that you've had. So it's not a free lunch,
but if the name of the game is to play the margins of
thermodynamics on a nano scale, then that's certainly
what life is doing. But I didn't have time. Life is doing much
more than just playing a thermodynamic game. It's also doing this
complex organization, encoded information
management, and so on. And all of that has to come
in any true understanding of the nature of life. And so the game of
chess I skipped over would have been, if I'd had time, an attempt to explain how I think we can
capture that concept of functional or meaningful
information in a context, in a new type of physical law,
which was what Schrodinger was suggesting. It would be a sort of top
down law, in which the-- just to give you the words, in
which the dynamical update rules (the evolution of the system, if you like) will be a function of
the state of the system, quite unlike any
law of physics we've had so far, where
the laws are fixed, the states change with time. We're advocating that the laws
are a function of the state, and that gives whole new
pathways to complexity. And we've done some computer
models to bolster that claim. So that's why I think the new
type of physical law comes in, but this is still
a work in progress. AUDIENCE: Do you think physics
can explain consciousness? PAUL DAVIES: Ah, well,
there's a whole chapter of the book on consciousness. Thank you for mentioning that. I, for most of my life,
have been a cosmologist. And so I've been interested
in the origin of the universe. And then, as is
obvious, more recently, I've become interested
in the origin of life. And I like to say there are
really three great origin problems-- the universe, the
life, and then consciousness. I think we've cracked the
problem, more or less, of the origin of the universe. That's the easy one. The origin of life
is very tough, but I think we're on the case. The origin of
consciousness, we don't even know how to frame that. However, that hasn't stopped
me writing a chapter about it. This is a chapter in
which I, more or less, outline what all the problems
are rather than come up with the solutions. But there is one
thing that I warm to, and it's part of this
power of information. There's a whole-- I should say not just
biological networks, network of researchers,
who like these things. And one of these
is Giulio Tononi. He's a neuroscientist, and
he's defined something called "integrated information." And so it's often
said that the whole is greater than the sum of
its parts, and never more so than for consciousness. And this is a way of attempting
to mathematize that feeling that the collective
system has greater power than the components. And he can demonstrate that
for certain types of networks with certain types
of feedback loops. So in principle, if you
believe his definition, which very few people do,
but it's a heroic try, you'll be able to say, well,
is a thermostat conscious? That's a famous conundrum. You know, is a radio conscious? Is an ant conscious? And he could give an answer. He actually has a way of
defining a consciousness meter. Not just a life meter,
a consciousness meter. And it all depends
on the architecture, the informational
architecture in the network. So that's something
that I do describe. And I should probably
say at this stage, having mentioned that there
is a network of collaborators, almost everything that I
write in my book originates with my young
collaborator, Sarah Walker, and our magnificent
group of students and postdocs. And Sarah is so
gracious, she said, but everything I tell you I got
from you in the first place. But she is very smart, and
so a lot of these ideas are actually hers. AUDIENCE: In the
last few years, I've been hearing about the free
energy principle, which tries to define life as being
something that can mirror its environment, and coming
up with a definition of life as a system that's capable
of mirroring its environment. And I was wondering
if you'd heard of this and what your
thoughts are on it. PAUL DAVIES: So is this
the work of Jeremy England you're referring to? AUDIENCE: Yes, I am. PAUL DAVIES: Yes, and his stuff
does get a bit of a mention here. Not as big a mention
as in Dan Brown's book, but he certainly gets a mention. And it's connected. I mean, it's this
whole point about-- these pathways are often
very complex in biology, of going from here to
here, and how much-- So I should just get
technical for a moment. The word "free energy" has
a very specific meaning to a physicist. It doesn't mean it's
energy for free. I mean, "it's available
energy" is probably a better way of looking at it. And the F that appeared
in that equation I showed that had
the [? I ?] term, that is that the free energy. And information has
to come into that. And so it is very much
connected with the same thing. There's something called
Jarzynski's inequality that goes into this. And in the book, you will see Jarzynski's contraption that can turn
information into work, and it looks like
a child's mobile. It's a thought experiment. But as I say, these things--
not that particular one, but these sorts of things
have been built now. This is part of technology. AUDIENCE: So I just have
a clarificatory question regarding the Maxwell demon. So I think I don't
understand why the demon's information
content, as it does its job, is going up. You know, I would have imagined
that the demon just decides on a certain speed threshold-- PAUL DAVIES: Yes. AUDIENCE: --In the
direction, then lets every molecule approaching
it at a faster speed than that through. So it just has one
bit of information that it is maintaining. PAUL DAVIES: Ah,
no, but the demon has to be able to
observe and assess the speeds of those molecules. And then is this one
coming fast enough? And then yes, open the shutter. You see, so the person who
analyzed this in great detail in the 1920s, Leo Szilard,
he simplified everything and had a single molecule in
a box with a movable screen. It's called Szilard's Engine. It's much discussed in my book. And the demon would then ask
the question "is the molecule on the right or on the left?" And then insert the screen
and extract the work. But then in order to
repeat that cycle, it would have to eliminate the
information from its register or accumulate it. And then if it accumulates,
if it's a finite register, then eventually all this
would grind to a halt. But I mean, you're right
to say, well, you know, what about this,
what about that? Because this is still a
subject which is picked over. So Charles Bennett,
who in my view settled it to my satisfaction,
has his detractors that say, but what about if you did
this, and supposing that? But I think overall, the key
message is that it's not the-- Szilard thought it was
the act of acquisition of the information, that
in order to know what the molecules were doing, the demon would have to expend energy. And that energy would
raise the entropy, and that's where
the books balance. It wasn't at that stage. It's at the clearing
of the register stage. So it really is-- this is where thermodynamics
and computing really do mesh together. Because it's not the
physics of sensing. It's the physics of computation
that saves the second law. So there's very,
very deep connection between the world of computing
and life, which I think is sort of increasingly
obvious to everybody. Here we see it boiled
down to its essentials in Szilard's Engine and
Bennett's resolution of the demon paradox. AUDIENCE: Maybe I will
have another crack at it from another direction on the
topic of consciousness. If we can kind of
unify everything into physics about
biology, the information, what does it say about free
will of the individual? PAUL DAVIES: Right, you've
raised a vast subject, of course. And I could stand here for
a very long time talking about free will. I can't do that. I have no choice. I have to go off and
do another event. But just very briefly, and
of course I do allude to it in the book, let me
just cut to the chase and tell you my own
position on free will. Free will, I think, is
a feeling that we have. More to the point, and
this comes out in the book, that free will is closely
connected to our sense that time is flowing. Now, a physicist will
tell you time can't flow. That's an illusion, but it's
a necessary illusion for us to live our lives. We live our lives
as if time flows. It doesn't flow. There are simply intervals
between events and states of your brain. So I think that the
illusion of the flow of time is because we convince
ourselves that we have-- I have to use the word, we
have selves, conserved selves, that I'm the same self as
I had when I was a child. In other words, that the
self stays unchanged, and the world changes around us. That gives us an impression
of the flow of time. I think that's back to front. I think the world
doesn't change. There are simply
successive states. Time isn't flowing, but
at each successive moment, you're a slightly different
self from what you were before. So a lot of mutual information
between today's self and yesterday's self,
but it is different. You're a different person today
from what you were yesterday. So I think our sense of
freedom has to do with getting that back to front. Anyway, read the book. AUDIENCE: On the last slide on
quantum computation and quantum biology and the link, are any
proposals for a physical system that, inside biology, could
do useful quantum computation in the sense that
how will a qubit or coherence of a qubit
or a bunch of qubits be maintained in a-- temperature, moisture, etc. PAUL DAVIES: Right. No,
no, that's all right. So first of all, that's the
biggest subject in quantum biology generally. Can biology deploy
any quantum effects involving things
like entanglement and tunneling and so on? The answer to that is,
almost certainly yes. And I give four examples in that
particular chapter in the book, while having to do
with in photosynthesis the transfer of energy
from the light harvesting center to the chemical
reaction center. Now that has to do
with bird navigation, extraordinary thing,
olfactory response of flies, and some other things in there. The one that really strikes me,
because my colleagues at ASU are doing experiments,
is the way that electron tunneling
through organic molecules can be dramatically different
according to the sequence, say, if it's a protein, the amino acid sequence. A colleague of mine said,
change one amino acid, and it's like an on/off switch. So it looks like
biology has, by natural selection, fine-tuned these functional organic molecules to have
special quantum tunneling properties. So I think there's
very little doubt that quantum mechanics is
playing a role in biology. What's not clear is
whether this is just a few quirky little
things that life has stumbled across
and taken advantage of, or whether it's the tip
of a quantum iceberg. But your question
specifically was, can there be qubit processing? There was a claim by
Apoorva Patel from India that that indeed
was taking place, that it was actually Grover's
Algorithm being implemented in the genetic code. He's backed away from that
claim in recent years. So it was a sort of nice try. It gained a lot of attention. I agree with you. In that warm, wet
environment, it's very hard to see how decoherence
wouldn't kill this stone dead. But that would also be true
of some of the other quantum effects. And again, there's
a lot of work being done to see how the
quantum processes could be screened, could screen
out the environmental noise, and so on. And so it could be
that biology has got a trick or two
to teach people working in quantum computing. There's certainly
some people who do work in quantum computing
who are aware of this. But this is all very
much the cutting edge. I think we'll find out
in the coming years. SPEAKER 1: Thank you very
much for coming, Paul. That was thoroughly,
thoroughly enjoyable. Yeah, thank you very
much, Paul Davies. PAUL DAVIES: OK, my pleasure. [APPLAUSE]