- Every star in the sky
probably has planets and life is probably
emerging on these planets. But I think the combinatorial
space associated with these planets is so different. Our causal cones are never
gonna overlap, or not easily. And this is the thing that
makes me sad about alien life, it's why we have to create alien life in the lab as quickly as possible, because I don't know
if we are gonna be able to build architectures that will intersect with alien intelligence and architectures. - And intersect, you don't
mean in time or space time? - [Lee] Time and the
ability to communicate. - So the ability to communicate? - Yeah. My biggest fear in a way is that life is everywhere but we've become infinitely more lonely because of our scaffolding
in that combinatorial space. - The following is a
conversation with Lee Cronin, his third time on this podcast. He is a chemist from the
University of Glasgow who is one of the most
fascinating, brilliant, and fun-to-talk-to scientists I've ever had the pleasure
of getting to know. This is the Lex Fridman Podcast. To support it, please check out our
sponsors in the description. And now, dear friends, here's Lee Cronin. So your big assembly theory
paper was published in "Nature". - Yeah.
- Congratulations. - [Lee] Thanks.
- It created, I think it's fair to say, a lot of controversy but also a lot of interesting discussion. So maybe I can try to
summarize assembly theory and you tell me if I'm wrong. - [Lee] Go for it. - So assembly theory says that if we look at any object in the universe, any object, that we can quantify how complex it is by trying to find the number
of steps it took to create it. And also, we can determine if it was built by a process akin to evolution by looking at how many copies
of the object there are. - [Lee] Yeah, that's spot on. - Spot on?
- Spot on. - I was not expecting that. Okay, so let's go through definitions. So there's a central equation
I'd love to talk about, but definition-wise, what is an object? - (chuckles) Yeah, an object. So if I'm gonna try to be
as meticulous as possible, objects need to be finite and they need to be
decomposable into subunits. All human-made artifacts are objects. Is a planet an object? Probably yes, if you scale out. So an object is finite and
countable and decomposable, I suppose mathematically. But, yeah, I still wake up some days and think to myself, what is an object? Because it's a nontrivial question. - "Persists over time." I'm quoting from the paper here. "An object is finite, is distinguishable." So that's a weird adjective. "Distinguishable." (chuckles) - We've had so many people helpfully offering to rewrite the paper after it came out. Yeah, you wouldn't believe,
it's so funny. (laughing) - "Persists over time
and is breakable such that the set of
constraints to construct it from elementary building
blocks is quantifiable." - The history is in the objects. It's kind of cool, right? - Okay, so what defines the object is its history or memory, whichever is the sexier word. - I'm happy with both,
depending on the day. (laughing) - Okay. So, "the set of steps it
took to create the object." So there's a sense in which every object in the universe has a history, and that is part of the thing that is used to describe its complexity. How complicated it is.
- Yeah. Okay, what is an assembly index? - So the assembly index, if you were to take the object apart and be super lazy about it or minimal, 'cause, you know, it's like you've got a really short-term memory. So what you do is you lay all the parts on the path and you
find the minimum number of steps you take on the path
to add the parts together to reproduce the object, and that minimum number
is the assembly index. There's a minimum bound. And it was always my
intuition that the minimum bound in assembly theory was really important, and I only worked out why a few weeks ago, which is kind of funny, because I was just like,
"No, this is sacrosanct. I dunno why. It'll come to me one day." And then when I was pushed
by a bunch of mathematicians, we came up with the correct
physical explanation, which I can get to, but it's the minimum, and it's really important, is the minimum. And the reason I knew
the minimum was right is 'cause we could measure it. So almost before this paper came out, we published papers
explaining how you can measure the assembly index of molecules. - Okay, so that's not so
trivial to figure out. So when you look at an object, we could say a molecule, we could say object more generally. To figure out the minimum number of steps it takes to create that object, that doesn't seem like
a trivial thing to do. - So with molecules, it's not trivial, but it is possible because of what you can do. And because I'm a chemist, I kind of see the world through the lens of chemistry. I break the molecule apart, break bonds. And if you take a molecule
and you break it all apart, you have a bunch of atoms, and then you say, okay, I'm
going to then take the atoms and form bonds and go up the chain of events to make the molecule. And that's what made me realize, take a toy example,
literally a toy example, take a Lego object which is
made up of Lego blocks. So you could do exactly the same thing. In this case, the Lego blocks
are naturally the smallest, they're the atoms in the actual
composite Lego architecture. But then, if you maybe take, you know, a couple of blocks and put
them together in a certain way, maybe they're offset in some way. That offset is in the memory, you can use that offset again
with only a penalty of one, and you can then make a square,
triangle, and keep going. And you remember those
motifs on the chain. So you can then leap from the start, with all the Lego blocks or atoms just laid out in front of you and say right, I'll take you, you, you, connect, and do the least amount of work. So it's really like the
smallest steps you can take on the graph to make the object. And so for molecules, it
came relatively intuitively, and then we started to
apply it to language. We've even started to apply
it to mathematical theorems. But I'm so well outta my depth. But it looks like you can
take a minimum set of axioms and then start to build up kind of mathematical architectures
in the same way, and then the shortest path to get there is something interesting
that I don't yet understand.
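To make the reuse-of-motifs idea concrete, here is a toy sketch for strings (my own illustration, not code from the paper): single characters are free building blocks, every join of two existing pieces costs one step, and anything already built can be reused for free. A greedy scan like this only gives an upper bound on the true assembly index, but it shows how reusing a motif like "abra" keeps the count low.

```python
# Toy upper bound on the assembly index of a string (greedy, not the exact minimum).
def assembly_upper_bound(target: str) -> int:
    built = set()          # objects constructed so far, reusable for free
    current = ""           # the partial string assembled so far
    steps = 0
    while current != target:
        remaining = target[len(current):]
        # longest reusable block: a prior construct, else a single character
        candidates = [b for b in built if remaining.startswith(b)]
        block = max(candidates, key=len) if candidates else remaining[0]
        if current:        # joining two existing objects costs one step
            steps += 1
        current += block
        built.add(current) # the new partial string is now reusable
        built.add(block)
    return steps

print(assembly_upper_bound("abab"))         # 2: make "ab", then join "ab" + "ab"
print(assembly_upper_bound("abracadabra"))  # 7: the repeated "abra" is reused
```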
- So what's the computational complexity of figuring out the shortest path with molecules, with language, with mathematical theorems? It seems like once you have the fully constructed Lego castle, or whatever your favorite Lego world is, figuring out how to get there from the basic building blocks, is that a hard problem? - It's a hard problem, but actually, if you look at it... So the best way to look at it, for this, take a molecule. So if the molecule has 13 bonds, first of all take 13
copies of the molecule and just cut all the
bonds, so cut 12 bonds, and then you just put them in order. And then, that's how it works. And you keep looking
for symmetry or copies, so you can then shorten it as you go down, and that becomes
combinatorially quite hard. For some natural product molecules, it becomes very hard. It's not impossible, but we're looking at the
bounds on that at the moment. But as the object gets bigger, it becomes really hard. And that's the bad news. But the good news is there are shortcuts. And we might even be able
to physically measure the complexity without
computationally calculating it, which is kind of insane. - [Lex] Wait, wait, how would you do that? - Well, in the case of a molecule, so if you shine light on a molecule, let's take infrared, each of the bonds absorbs the infrared differently in what we call the fingerprint region. And because it's quantized as well, you have all these discrete
kind of absorbances. And my intuition, after we realized we could
cut molecules up in mass spec, that was the first go at this. We did it using infrared. And the infrared gave us an even better correlation with the assembly index. And we used another technique as well in addition to infrared, called NMR, nuclear magnetic resonance, which tells you about the number of different magnetic
environments in a molecule, and that also worked out. So we have three techniques which each of them independently gives us the same, or tending towards the same, assembly index from a molecule that we can calculate mathematically. - [Lex] Okay, so these are all methods of mass spectrometry, mass spec. You scan a molecule, it gives you data in the form of a mass spectrum. And you're saying that the data correlates to the assembly index? - [Lee] Yeah. - How generalizable is that shortcut, first of all to chemistry, and second of all beyond that? 'Cause that seems like a nice hack, and you're extremely knowledgeable about various aspects of chemistry, so you can say, okay, it kinda correlates. But, you know, the whole idea
behind the assembly theory paper, and perhaps why it's so controversial, is that it reaches bigger, it reaches for the bigger general theory of objects in the universe. - Yeah, I'd say so. I'd agree. So, I've started assembly theory of emoticons with my
lab, believe it or not. So we take emojis, pixelate them, and work out the assembly
index of the emoji, and then work out how
many emojis you can make on the path of the emoji. So there's the uber emoji from which all other emojis emerge, so you can then take a photograph, and by looking at the shortest path, by reproducing the pixels
to make the image you want, you can measure that. So then you start to be
able to take spatial data. Now there's some problems there. What is then the definition of the object? How many pixels? How do you break it down? And so we're just learning
all this right now. - How do you begin to
compute the assembly index of a graphic, like a set
of pixels on a 2D plane that form a thing? - So you would first of all
determine the resolution. What is your X-Y grid, what is the number of pixels
on the X and Y plane, and then look at the surface area, and then you take all your emojis and make sure they're all
at the same resolution, and then we would basically do exactly the same thing we
would do for cutting the bonds. You'd cut bits out of the emoji. You'd have a bag of pixels, and you would then add
those pixels together to make the overall emoji. - Wait, wait a minute. But, like, first of all, I mean, this is at the core
sort of machine learning and computer vision. Not every pixel's that important. And there's like macro features, there's micro features and
all that kind of stuff. - [Lee] Exactly. - Like, you know, the eyes
appear in a lot of them, the smile appears in a lot of them. - So in the same way in chemistry we assume the bond is fundamental. What we're doing here is
we assume the resolution at the scale at which
we do it is fundamental, and we are just working that out. And you are right, that
will change, right? Because as you take your lens out a bit, it will change dramatically. But it's just a new way of looking at it. It's not just compression, which is what we do right now in
computer science and data. One big kind of misunderstanding is that assembly theory is telling you about how compressed the object is. That's not right. It's about how much information is required along a chain of events. 'Cause the nice thing is, when you do compression
in computer science, we're wandering a bit here, but it's kind of worth wandering I think. And you assume you have
instantaneous access to all the information in the memory. In assembly theory, you say, no, you don't get access to that memory until you've done the work. And then when you've got
access to that memory, you can have access to it but
not to the next one. And this is how in assembly theory we talk about the four universes, the assembly universe,
the assembly possible, and the assembly contingent, and then the assembly observed. And they're all scales in
this combinatorial universe. - Yeah. Can you explain each one of them? - Yep, so the assembly
universe is like anything goes. It's just a combinatorial kind
of explosion in everything. - [Lex] So that's the biggest one? - [Lee] That's the
biggest one. It's massive. - Assembly universe, assembly possible, assembly contingent, assembly observed, and the y-axis is assembly steps in time. And, you know, the x-axis is, as the thing expands through time, more and more unique objects appear. - So assembly universe, everything goes. Assembly possible, laws
of physics come in. In this case, in chemistry bonds. So that means- - [Lex] Those are extra
constraints, I guess. - Yes, and they're the only constraints. They're the constraints at the base. So the way to look at it is
you've got all your atoms, they're quantized, and you
can just bang them together. So in computer science speak, I suppose the assembly universe is just like no laws of physics, things can fly through mountains, go beyond the speed of light. In the assembly possible, you have to apply the laws of physics, but you can get access to all
the motifs instantaneously with no effort. So that means you could make anything. Then the assembly contingent says, no, you can't have access to
the highly assembled object in the future until you've
done the work in the past on the causal chain, and that's really the
really interesting shift where you go from assembly
possible to assembly contingent. That is really the key
thing in assembly theory that says you cannot just
have instantaneous access to all those memories. You have to have done the work somehow, the universe has to have somehow built a system that allows
you to select that path rather than other paths. And then the final thing, the assembly observed
is basically us saying, oh, these are the things we actually see. We can go backwards now and understand that they have been created
by this causal process. - But wait a minute. So when you say the
universe has to construct the system that does the work, is that like the environment that allows for, like, selection? - Yeah.
- So that's the thing that does the selection? - You could think about it in terms of a von Neumann constructor
versus selection, a ribosome, a Tesla plant
assembling Teslas, you know? The difference between the
assembly universe in Tesla land and the Tesla factory is that everyone says, "No, Teslas are just easy,
they just spring out. You know how to make them all." In the Tesla factory, you have
to put things in sequence and out comes a Tesla. - So you're talking about the factory? - Yes. This is really nice. The super important point is that when I talk about the universe having a memory, or there's some magic, it's not that. It's that it tells you that there must be a process encoded somewhere in physical reality, be it a cell, a Tesla factory, or something else that is making that object. I'm not saying there's
some kind of woo-woo memory in the universe, you know,
morphic resonance or something, I'm saying that there is
an actual causal process that is being directed,
constrained in some way. So it's not kind of
just making everything. - Yeah, but, Lee, what's the factory that made the factory? So first of all, you
assume the laws of physics have just sprung into
existence at the beginning. Those are constraints. But what makes the factory, the environment that does the selection? - Well, it's the first
interesting question that I want to answer out of four. I think the factory
emerges in the interplay between the environment and the objects that are being built. I'll have a go at explaining
to you the shortest path. So why is the shortest path important? I'm gonna have to go with
chemistry for a moment and then abstract it. So imagine you've got
a given environment, and you have a budget of atoms you're just flinging together. And the objects, those atoms that are being flung together into, say, molecule A, they decompose. So molecules decompose over time. So the molecules in this environment, in this magic environment, have to not die, but they do die. They have a half-life. So the only way the
molecules can get through that environment is out the other side. Let's pretend the environment is a box you can go in and out of without dying. And there's just an infinite
supply of atoms coming, well, a large supply. The molecule that gets built, and is able to
template itself being built, and survives in the environment, will basically reign supreme. Now let's say that that
molecule takes 10 steps and it's using a finite
set of atoms, right? Now let's say another molecule, a smart-ass molecule
we'll call it, comes in, and can survive in that
environment and can copy itself, but it only needs five steps. The molecule that only needs five steps wins, 'cause both molecules are being destroyed, but they're creating themselves faster than they can be destroyed, and you can see that the
shortest path reigns supreme. So the shortest path tells us
something super interesting about the minimal amount
of information required to propagate that motif in time and space, and it seems to be like some
kind of conservation law.
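A toy simulation of the point being made here (illustrative numbers only, not from the paper): two self-copying molecules share the same decay rate, but the one with the shorter assembly path finishes copies of itself faster, so its share of the population grows over time.

```python
# Two replicators with different assembly path lengths, same destruction rate.
decay = 0.05                      # fraction destroyed per time tick
species = {"A (10 steps)": {"steps": 10, "count": 1.0},
           "B (5 steps)":  {"steps": 5,  "count": 1.0}}

for tick in range(1, 201):
    for s in species.values():
        growth = s["count"] / s["steps"]          # copies finished this tick
        s["count"] += growth - decay * s["count"] # replication minus destruction
    if tick % 50 == 0:
        total = sum(s["count"] for s in species.values())
        shares = {name: round(s["count"] / total, 3) for name, s in species.items()}
        print(tick, shares)
# The printed shares drift toward the 5-step molecule: the shorter
# assembly path propagates its motif faster than it is destroyed.
```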
- So one of the intuitions you have is that the propagation of motifs in time will be done by the things that can construct themselves via the shortest path. So, like, you can assume
that most of the objects in the universe are built
in the most efficient way? Big leap I just took there. - Yes and no, because
there are other things. So in the limit, yes, because you want to tell the difference between things that
have required a factory to build them and just random processes. But you can find instances where the shortest path isn't taken for an individual object,
an individual function. And people go, "Ah, that means the shortest path isn't right." And then I say, "Well, I don't know, I think it's right still because there are other driving forces." So it's not just one molecule. Now that you start to
consider two objects, you have a joint assembly space, and now it's a compromise, because it's not just about making A
in the shortest path, you wanna be able to make A
and B in the shortest path, which might mean that
A is slightly longer. You have a compromise. So when you see slightly more nesting in the construction when
you take a given object, that can look longer, well, that's because of the overall function; the object is still
trying to be efficient. And this is still very hand-wavy. I maybe have no legs to stand on, but we think we're getting
somewhere with that. - And there's probably some
parallelization, right? - Yeah.
- So this is not sequential, the building is. Yeah, I guess-
- No, you're right. - When you're talking
about complex objects, you don't have to work sequentially, you can work in parallel. You can get your friends
together and they can- - Yeah. And the thing we're working on right now is how to understand
these parallel processes. Now there's a new thing we've introduced called assembly depth. And assembly depth can be lower than the assembly index for a molecule when they're cooperating together, 'cause exactly, parallel
processing is going on. And my team have been working
this out in the last few weeks because we're looking at
what compromises does nature need to make when it's
making molecules in a cell. And I wonder if, you know, I'm maybe like, well, I'm always leaping
out of my competence, but in economics, I'm just wondering if you could apply this in an economic process. It seems like capitalism is very good at finding the shortest
path, you know, every time. And there are ludicrous things that happen because actually the cost
function's been minimized. And so I keep seeing parallels everywhere where there are complex nested systems, where if you give it enough time and you introduce a bit of heterogeneity, the system readjusts and
finds a new shortest path. But the shortest path isn't
fixed on just one molecule now, it's in the actual existence
of the object over time. And that object could be a city, it could be a cell, it could be a factory. But I think we're going
way beyond molecules and my competence, so probably we should
go back to molecules. But, hey. - All right, before we get too far, let's talk about the assembly equation. Okay, how should we do this? Lemme just even read
that part of the paper. "We define assembly as the total amount of selection necessary
to produce an ensemble of observed objects,
quantified using equation 1." The equation basically has A on one side, which is the assembly of the ensemble, and then a sum from 1 to N, where N is the total
number of unique objects. And then there is a
few variables in there, that include the assembly index, the copy number, which we'll talk about. I don't remember you talking about that. That's an interesting addition, and I think a powerful one. It has to do with the fact that you can create pretty
complex objects randomly. And in order to know
that they're not random, that there's a factory involved, you need to see a bunch of them. That's the intuition there. It's an interesting intuition. And then some normalization. What else is in there? The N? - N minus 1. Just to make sure that one object could be a one-off and random. And then you have more
than one identical object. That's interesting. - When there's two of a thing. That's interesting.
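For reference, the equation being described here, equation 1 in the paper, can be written as

A = \sum_{i=1}^{N} e^{a_i} \left( \frac{n_i - 1}{N_T} \right)

where a_i is the assembly index of the i-th unique object, n_i is its copy number, N is the number of unique objects, and N_T is the total number of objects in the ensemble. The (n_i - 1) factor is the "N minus 1" mentioned here, so a one-off object contributes nothing, and dividing by N_T is the normalization.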
- Two of a thing is super important, especially if the assembly index is high. - So we could say several questions here. One, let's talk about selection. What is this term, selection? What is this term, evolution,
that we're referring to? Which aspect of Darwinian evolution are we referring to
that's interesting here? - Yeah so, you know the paper, we should talk about
the paper for a second. The paper, what it did
is it kind of annoyed- We didn't annoy. I mean, it got the attention, and obviously the angry
people were annoyed. - There's angry people in
the world. That's good. - So what happened is the
evolutionary biologists got angry. We were not expecting that. 'Cause we thought evolutionary
biologists would be cool. I knew that some, not many, computational complexity
people would get angry, 'cause I've kinda been poking them, and maybe I deserved it. But I was trying to poke
them in a productive way. And then the physicists kind of got grumpy because the initial
conditions tell everything. The prebiotic chemist got slightly grumpy because there's not
enough chemistry in there. And then finally, when
the creationist said it wasn't creationist enough, I was like, I've done my job. - Because you're basically saying that physics is not
enough to tell the story of how biology emerges. - I think so.
- And then they said that physics is the beginning
and the end of the story. - Yeah. So what happened is, the
reason why people put the phone down on the call of the paper, I mean, if you're reading the paper like a phone call, is they
got to the abstract. And in the abstract- - [Lex] First sentence is pretty strong. - The first two sentences
caused everybody- - "Scientists have grappled with reconciling biological evolution with the immutable laws of the
universe defined by physics." - True, right? There's nothing wrong with that statement. Totally true. - Yeah.
- Next one. "These laws underpin life's origin, evolution and the development of human culture and technology, yet they do not predict the
emergence of these phenomena." Wow. First of all, we should
say the title of the paper. This paper was accepted
and published in "Nature". The title is "Assembly theory explains and quantifies selection and evolution". A very humble title. And the entirety of the paper I think presents interesting
ideas but reaches high. - I would do it all again. This paper was actually
on the preprint server for over a year. - You regret nothing. - I think, yeah, I don't regret anything. - You and Frank Sinatra did it your way. - What I love about being a scientist is kind of sometimes, 'cause I'm a bit dim and I don't understand
what people are telling me. I wanna get to the point. This paper says, hey, the laws
of physics are really cool, the universe is great, but it's not intuitive that you just run the standard model and get life out. I think most physicists might go, yeah, you know, we can't just go back and say that's what happened 'cause physics can't explain
the origin of life yet. That doesn't mean it won't or can't, okay? Just to be clear, sorry, intelligent designers,
we are gonna get there. Second point, we say that evolution works but we don't know how evolution got going, so biological evolution
and biological selection. So for me, this seems
like a simple continuum. So when I mentioned
selection and evolution in the title, I think,
and in the abstract, we should have maybe prefaced that and said nonbiological selection and nonbiological evolution, and then that might have made
it even more crystal clear. But I didn't think that
evolutionary biology should be so bold to claim ownership of selection and evolution. And secondly, a lot of
evolutionary biologists seem to dismiss the
origin of life question, just say it's obvious. And that causes a real
problem scientifically. When the physicists are like, we own the universe, the universe is good, we explain all of it, look at us. And the biologists say
we can explain biology. And the poor chemist in the middle going, but, hang on. (laughing) And this paper kinda says, hey, there is an interesting disconnect between physics and biology. And that's at the point at which memories get made in chemistry through bonds. And hey, let's look at this closely and see if we can quantify it. So yeah, I mean I never expected the paper to kind of get that much interest. And still, I mean it's only been published just over a month ago now. - So just to linger on selection. What is the broader sense
of what selection means? - Yeah, that's a really good question. For selection, so this is where for me the concept of an object is
something that can persist in time and not die, but
basically can be broken up. So if I was gonna kind of bolster the definition of an objects. So if something can form and persist for a long period of time
under an existing environment that could destroy other, and I'm gonna use anthropomorphic terms, I apologize, but weaker
objects or less robust, then the environment
could have selected that. So a good chemistry example is if you took some carbon and you made a chain of carbon atoms. Whereas, if you took some, I don't know, some carbon, nitrogen, and oxygen and made chains from those, you'd start to get different
reactions and rearrangements. So a chain of carbon atoms
might be more resistant to falling apart under
acidic or basic conditions versus another set of molecules. So it survives in that environment. So the acid pond the resistant
molecule can get through. And then that molecule goes
into another environment. So that environment now
maybe being acid pond is a basic pond, or maybe
it's an oxidizing pond. And so, if you've got carbon and it goes in an oxidizing pond, maybe the carbon starts to
oxidize and break apart. So you go through all these
kind of obstacle courses, if you like, given by reality. So selection happens
when a object survives in an environment for some time. But, and this is the
thing that's super subtle, the object has to be
continually being destroyed and made by a process. So it's not just about the object now, it's about the process
and time that makes it. 'Cause a rock could just
stand on the mountainside for four billion years and
nothing happened to it. And that's not necessarily
really advanced selection. So for selection to
get really interesting, you need to have a turnover in time. You need to be continually
creating objects, producing them, what
we call discovery time. So there's a discovery time for an object. When that object is discovered, if it's say a molecule, that can then act on itself or the chain of events that caused itself to bolster its formation, then you go from discovery
time to production time, and suddenly you have more
of it in the universe, so it could be a
self-replicating molecule. And the interaction of the
molecule in the environment, in the warm little pond
or in the sea or wherever, in the bubble, could then start to build a
proto factory, the environment. So really, to answer your question, what the factory is. The factory is the environment, but it's not very autonomous, it's not very redundant, there's lots of things
that could go wrong. So once you get high
enough up the hierarchy of networks of interactions, something needs to happen, that needs to be compressed
into a smaller volume and made resistant or robust. Because in biology, selection and evolution is robust. That you have error correction built in. You have really, you know, there's good ways of basically making sure propagation goes on. So really the difference between inorganic abiotic
selection and evolution, and evolution and stuff
in biology is robustness, the ability to survive in lots
of different environments. Whereas, our poor little
inorganic soul molecule, whatever, just dies in lots of
different environments. So there's something
super special that happens from the inorganic molecule
in the environment, it kills it, to where you've got evolution and cells can survive everywhere. - Well, how special is that? How do you know those
kinds of evolution factors aren't everywhere in the universe? - I don't. And I'm excited, 'cause I think selection isn't special at all. I think what is special is the history of the environments on earth that gave rise to the first cell that now, you know, has taken all those environments and is now more autonomous. And I would like to think that, you know, this paper could be very wrong, (chuckles) but I don't think it's very wrong. It mean, it's certainly wrong, but it's less wrong than some
other ideas I hope, right? And if this inspires us to go and look for selection in the universe, 'cause we now have an equation where we can say we can
look for selection going on and say, oh, that's interesting, we seem to have a process that's giving us high copy number objects that also are highly complex. But that doesn't look
like life as we know it. And we use that and say, "oh, there's a hydrothermal vent. Or there's a process going on, there's molecular networks." Because the assembly
equation is not only meant to identify at the higher
end, advanced selection. What you get, I would call it in biology, your super advanced selection. And even, I mean, you could
use the assembly equation to look for technology and God forbid, we could talk about
consciousness and abstraction, but let's keep it primitive,
molecules and biology. So I think the real power
of the assembly equation is to say how much selection
is going on in this space. And there's a really simple
thought experiment I could do. Is, you know, have a little petri dish, and on that petri dish,
you put some simple food. So the assembly index of all the sugars and everything is quite low. And you put a single E. coli cell. And then you say, I'm
gonna measure the amount of assembly in the box. So it's quite low, but the rate of change of assembly, the dA/dt, will go voom, sigmoidal, as it eats all the food. And the number of E.
coli cells will replicate because they take all the food, they can copy themselves, the assembly index of all the molecules goes up, up, up, and up until the food is exhausted in the box. So now the E. coli's stop. I mean, die is probably a strong word. They stop respiring 'cause
all the food has gone. But suddenly the amount
of assembly in the box has gone up gigantically because
that one E. coli factory has just eaten through it, milled lots of other E. coli factories, run out of food and stopped. So in the initial box, although the amount of
assembly was really small, it was able to replicate and
use all the food and go up. And that's what we're trying
to do in the lab actually, is kinda make those kind of experiments and see if we can spot the emergence of molecular networks that
are producing complexity as we feed in raw materials and we feed a challenge, an environment, you know, we try and kill the molecules. And really that's the main kind of idea for the entire paper.
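A toy sketch of the petri-dish picture described above (arbitrary numbers, just to show the shape of the curve): cells eat low-assembly food and turn part of it into new cells and complex molecules, so the assembly in the box rises slowly, then quickly, then flattens when the food runs out, which is the sigmoidal dA/dt mentioned earlier.

```python
# Toy E. coli-in-a-box model: assembly in the box grows sigmoidally, then stops.
food = 200.0           # low-assembly-index sugars available at the start
cells = 1.0            # one E. coli "factory" to begin with
assembly_in_box = 0.0  # crude stand-in for the summed high-assembly material

for hour in range(30):
    eaten = min(food, 1.0 * cells)   # each cell converts some food per hour
    food -= eaten
    cells += 0.3 * eaten             # part of the eaten food becomes new cells
    assembly_in_box += eaten         # the rest ends up as complex molecules
    if hour % 3 == 0:
        print(f"hour {hour:2d}  food {food:6.1f}  cells {cells:6.1f}  assembly {assembly_in_box:6.1f}")
# dA/dt starts small, accelerates as the cells multiply, then drops to zero
# once the food is exhausted.
```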
- Yeah, and see if you can measure the changes in the assembly index
throughout the whole system. Okay, what about if I
show up to a new planet, we go to Mars or some other planet from a different solar system, how do we use assembly index
there to discover alien life? - Very simply actually. Let's say we'll go to Mars
with a mass spectrometer with a sufficiently high resolution. So what you'll have to be able to do- So a good thing about mass spec is that you can select the
molecule from the mass, and then, if it's high enough resolution, you can be more and more sure that you're just seeing identical copies. You can count them. And then you fragment them and you count the number of fragments and look at the molecular weight. And the higher the molecular weight and the higher the number of fragments, the higher the assembly index. So if you go to Mars
and you take a mass spec with a high enough resolution
and you can find molecules, and I'll give a guide on earth, if you could find molecules say greater than 350 molecular weight
with more than 15 fragments, you have found artifacts
that can only be produced, at least on earth, by life. And now you would say, oh, well, maybe the geological process. I would argue very vehemently
that that is not the case. But we can say, look, if you don't like the cutoff on earth, go up higher, 30, 100, right? Because there's gonna be a point where you'll find a molecule with
so many different parts the chances of you getting a molecule that has 100 different parts and finding a million identical copies, you know, that's just impossible, that could never happen in
an infinite set of universes.
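The detection rule being described can be sketched as a simple check (a toy illustration only; the thresholds are the rough earth-calibrated numbers mentioned in this conversation, around 15 for the assembly index and on the order of 10,000 identical copies for a mass-spec peak, not constants from the paper):

```python
# Toy biosignature check: complexity alone isn't enough, you also need copies.
def looks_like_selection(assembly_index: int, copy_number: int,
                         index_threshold: int = 15,
                         copy_threshold: int = 10_000) -> bool:
    return assembly_index >= index_threshold and copy_number >= copy_threshold

print(looks_like_selection(assembly_index=22, copy_number=1_000_000))  # True
print(looks_like_selection(assembly_index=22, copy_number=1))          # False: one-off, could be random
print(looks_like_selection(assembly_index=6,  copy_number=1_000_000))  # False: abundant but simple
```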
- Can you just linger on this copy number thing? A million different copies. What do you mean by copies, and why is the number of copies important? - Yeah, that was so interesting. I always understood the copy
number was really important but I never explained
it properly for ages. It goes back to this. If I give you a, I dunno, a really complicated molecule and I say it's complicated, you could say, hey,
that's really complicated, but is it just really random? And so I realized that ultimate randomness and ultimate complexity
are indistinguishable until you can see a
structure in the randomness. So you can see copies. - So copies implies structure. - [Lee] Yeah, the factory. - I mean, there's a deep
profound thing in there. 'Cause, like, if you just
have a random process, you're going to get a lotta complex, beautiful sophisticated things. What makes them complex
in the way we think life is complex or something like a factory that's operating under
a selection processes, there should be copies. Is there like some looseness about copies? Like, what does it mean for
two objects to be equal? - It's all to do with the telescope or the microscope you're using. And so, at the maximum resolution. So then the nice thing about chemists is they have this concept of the molecule and they're all familiar
with the molecule, and molecules you can hold, you know, on your hand, lots of
them, identical copies. A molecule's actually a super
important thing in chemistry. To say, look, you can
have a mole of a molecule, so an Avogadro's number of
molecules and they're identical. What does that mean? That means that the molecular composition, the bonding and so on, the configuration is indistinguishable. You can hold them together, you can overlay them. So the way I do it is if I say, here's a bag of 10 identical molecules, let's prove they're identical. You pick one out of the bag and you basically observe
it using some technique, and then you take it away, and then you take another one out. If you observe it using the same technique and you see no differences,
they're identical. It's really interesting to get right, because if you take say two molecules. Molecules can be in different vibrational and rotational states, they're moving all the time. So with this respect, identical molecules
have identical bonding. In this case, we don't
even talk about chirality 'cause we don't have a chirality detector. So for two identical molecules, in one conception, assembly theory basically considers both
hands as being the same. But of course, they're
not, they're different. As soon as you have a chiral distinguisher to detect the left and the right hand, they become different. And so it's to do with
the detection system that you have and the resolution. - So I wonder if there's an art in science to which detection system is used when you show up to a new planet? So like, you're talking
about chemistry a lot today. We have kind of standardized
detection systems, right? Of how to compare molecules. So, you know, when you
start to talk about emojis, and language, and mathematical theorems, and, I don't know, more
sophisticated things at different scale, at a smaller scale than molecules, at a larger
scale of than molecules. Like if we look at the
difference between you and me, Lex and Lee, are we the same? Are we different? - Sure, I mean of course
we're different close up, but if you zoom out a little bit, we will morphologically look the same. You know, height, characteristics, hair length, stuff like that. - Well, also like the species. - [Lee] Yeah, yeah, yeah. - And also, there's a sense
in which we're both from earth. - Yeah, I agree. I mean, this is the power of
assembly theory in that regard. So the way to look at it, if you have a box of objects, if they're all indistinguishable, we're using your technique, what you then do is you then
look at the assembly index. Now, if the assembly index
of them is really low, right? And they're all indistinguishable, then they're telling you that you have to go to another resolution. You know, it's kind of a sliding scale. It's kind of nice.
- Yeah, got it. So those two kind of are
at tension with each other, the number of copies
in the assembly index. - [Lee] Yeah. - That's really, really interesting. So okay, so you show up to a new planet, you'll be doing what? - [Lee] I would do mass spec. - On a sample of what? First of all, like how big
of a scoop do you take? Do you just take a scoop? So we're looking for primitive life. - Yeah, so if you're just going to Mars or Titan or Enceladus or somewhere. So a number of ways of doing it. So you could take a large scoop or you go through the
atmosphere and detect stuff. And you could make a life meter, right? One of Sara's colleagues,
at ASU, Paul Davies, keeps calling it a life meter, which is quite a nice idea, because if you think about it, if you've got a living system that's producing these
highly complex molecules and they drift away
and they're in a highly kind of demanding environment, they could be burnt, right? So they could just be falling apart. So you want to sniff a
little bit of complexity and say, warmer, warmer, warmer. Oh, we've found life. We've found the alien. We found the alien Elon
Musk smoking a joint in the bottom of the cave on Mars, or Elon himself, whatever, right? And say, okay, found it. So what you can do is
the mass spectrometer, you could just look for
things in the gas phase, or you go on the surface, drill down, because you want to
find molecules that are- You've either gotta find
the source living system, because the problem with
just looking for complexity is it gets burnt away. So in a harsh environment, on, say, the surface of Mars, there's a very low probability that you're gonna find
really complex molecules because of all the radiation and so on. If you drill down a little bit, you could drill down a bit into soil that's billions of years old, then I would put in some
solvent, water, alcohol, or something, or take a scoop, make it volatile, put it
into the mass spectrometer, and just try and detect a high complexity, high abundant molecules. And if you get them, hey presto, you can have evidence of life. Wouldn't that then be
great if you could say, okay, we found evidence of life, now we want to keep the
life meter keep searching for more and more complexity until you actually find living cells. You can get those new living cells, and then you could
bring them back to earth or you could try and sequence them. You could see that they have
different DNA and proteins. - Go along on the gradient
of the life meter. How would you build a life meter? Let's say we're together starting a new- - Just a mass spectrometer.
- A new company launching a life meter.
- A mass spectrometer would be the first way of doing it. Just take-
- No, no, no. But that's one of the
major components of it. But I'm talking about like, what if it's a device, and branding, logo, we gotta talk through that. That's later. But what's the input? Like how do you get to a metered output? - So, I would take my life
meter, our life meter. There you go.
- Thank you. - Yeah, you're welcome. Would have both infrared and mass spec. So it would have two ports, so we could shine the light. And so what it would do is you
would have a vacuum chamber, and you would have an
electrostatic analyzer, and you'd have a monochromator
to producing infrared. So you'd take a scoop of the sample, put it in the life meter. It would then add a solvent
or heat up the sample so some volatiles come off. The volatiles would then be put into the mass spectrometer into the electrostatic trap, and you'd weigh the
molecules and fragment them. Alternatively, you'd shine
infrared light on them and count the number of bands. But you'd have to, in that case, do some separation, 'cause you want to separate in- And so, in mass spec, it's really nice and convenient 'cause you can separate electrostatically, but you need to have that. - Can you do it in real time? - Yeah, pretty much. Yeah, so let's go all the way back. Okay, we really gonna get
the Lex and Lee's life meter. - Oh yeah, Lex and Lee. It's good ring to it. - All right, so you have a vacuum chamber, you have a little nose. The nose would have a packing material. So you would take your sample, add it onto the nose,
add a solvent or a gas. It would then be sucked up the nose and that would separated using
what we call chromatography. And then, as each band comes off the nose, we'll then do mass spec and infrared. And in the case of the infrared, count the number of bands. In the case of mass spec, count the number of
fragments and weigh it. And then, the further up
in molecular weight range for the mass spec and the number of bands, you go up and up and
up from the, you know, dead, interesting, interesting,
over the threshold, oh my gosh, earth life. And then right up to the batshit crazy, this is definitely, you know, alien intelligence that's
made this life, right? You could almost go all the way there. Same with the infrared. And it's pretty simple. The thing that is really problematical is that for many years, decades, what people have done,
and I can't blame them, is they've rather been
obsessing about small biomarkers that we find on earth, amino acids, like single amino acids or evidence of small molecules and these things, and looking for those rather
than looking for complexity. The beautiful thing about
this is you can look for complexity without
earth chemistry bias or earth biology bias. So assembly theory is
just a way of saying, hey, complexity in abundance
is evidence of selection, that's how our universal
life meter will work. - Complexity in abundance
is evidence of selection. Okay, so let's apply
our life meter to earth. You know, if we were just to apply assembly index
measurements to earth, what kinda stuff are we going to get? What's impressive about some
of the complexity on earth? - So we did this a few years ago when I was trying to
convince NASA and colleagues that this technique could work. And honestly, it's so funny
because everyone's like, "No, ain't gonna work." Because a chemist was saying, "Of course there are
complicated molecules out there you can detect that arise just randomly." And I was like, "Really?" That's a bit like, I don't know, someone saying of course Darwin's textbook was just written randomly by
some monkeys and a typewriter. Just for me it was like, really? And I pushed a lot on the chemists now, and I think most of them are on board, but not totally. I really had some big arguments. But the copy number caught there, 'cause I think I confused
the chemist by saying, "one-off," and then when I made it clear about the copy number, I think that made it a little bit easier. - Just to clarify, a chemist might say that, of course, out there, outside of earth, there's complex molecules? - Yes.
- Okay. And then you're saying, "Wait a minute, that's like saying of course there are aliens out there?" - [Lee] Yeah, exactly that. - Okay, but you clarify
that that's actually a very interesting question and we should be looking
for complex molecules of which the copy number
is two or greater. - Yeah, exactly. So on earth, coming back to earth, what we did is we took
a whole bunch of samples and we were running prebiotic
chemistry experiments in the lab. We took various inorganic minerals and extracted them, looked at the volatiles, because there's a special
way of treating minerals and polymers in assembly theory. In our life machine,
we're looking at molecules. We don't care about polymers
because they're not volatile, you can't hold them. If you can't discern
that they're identical, then it's very difficult
for you to work out if they've undergone selection or they're just a random mess. Same with some minerals, but we can come back to that. So basically, what you do, we got a whole load of samples. For organic ones, we've got scotch whiskey, and also took an Ardbeg, which is one of my favorite whiskeys, which is very peaty, and another- - [Lex] What does peaty mean? - In Scotland, on Islay, which is an island, the whiskey is left to mature in barrels. And it's said that the peat, the complex molecules in the peat, might find their way
through into the whiskey. And that's what gives it
this intense brown color and really complex flavor. It's literally molecular
complexity that does that. And so, you know, vodka
is the complete opposite. It's just pure, right? - [Lex] So the better the whiskey, the higher the assembly index, or the higher assembly index,
the better the whiskey. - That's what I mean. I really love the deep,
peaty Scottish whiskeys. Near my house, there is one of the lowland distilleries
called Glengoyne. It's still beautiful
whiskey but not as complex. So for fun, I took some
Glengoyne whiskey in our bag and put them into the mass spec and measured the assembly index. I also got E. coli. So, the way we do it, take the E. Coli, break the cell apart, take it all apart. And also got some beer. And people were ridiculing us, saying that, "Oh, beer is
evidence of complexity." One of the computational complexity people who was just throwing- Kind of, he's very vigorous in his disagreement with assembly theory. He was just saying, you know, "You don't know what you're doing. Even beer is more complicated than a human." What he didn't realize is
that it's not beer per se, it's taking the yeast
extract, taking the extract, breaking the cells,
extracting the molecules, and just looking at the
profile of the molecules, see if there's anything
over the threshold. And we also put in a really
complex molecule, Taxol. So we took all of these, but also NASA gave us, I think, five samples, and they
wouldn't tell us what they are. They said, "No, we don't believe you can get this to work." And they really, you know, they gave us some super complex samples. And they gave us two fossils, one that was a million years old and one that was 10,000 years old, something from Antarctica's seabed. They gave us some Murchison
meteorite and a few others. Put them through the system. So we took all the samples, treated them all identically, put them into mass spec, fragmented them. And in this case, implicit in the measurement was- In mass spec, you only detect peaks when you've got more than, let's say, 10,000 identical molecules. So the copy number's already baked in but wasn't quantified, which is super important there. That was in the first paper, 'cause I was like, "it's
abundant, of course." And when he then took it all out, we found that the biological samples gave you molecules that had an assembly index greater than 15. And all the abiotic
samples were less than 15. And then we took the NASA samples, and we looked at the
ones with more than 15 and less than 15, and we gave them back to NASA. And like, "Oh gosh. Yep, dead, living, dead, living. You got it." And that's what we found on earth. - [Lex] So that's a success? - Yeah, a resounding success. - Well, can you just go back
to the beer and the E. coli? So what's the assembly index on those? - So what we were able to do is, like, we found high assembly
index molecules originating from the beer sample
and the E. coli sample. - [Lex] So the yeast and the beer. - I mean though, I didn't
know which one was higher. We didn't really do any detail there because now we are doing that. Because one of the things we've done, it's a secret, but I
can tell you. (laughing) - Nobody's listening. - Well, is that we've just mapped the tree of life using assembly theory, 'cause everyone said that you
can't do anything in biology. And what we're able to do is, I think there's two ways
of doing tree of life- Well, three ways, actually. - Yeah, what's the tree of life? - So the tree of life is
basically tracing back the history of life on earth for all the different species going back, who evolved from what, and it all goes all the way back to the first kind of life
forms, and they branch off. And like, you have plant kingdom, the animal kingdom, the fungi kingdom, you know, and different
branches all the way up. And the way this was classically done. And I'm no evolutionary biologist. The evolutionary biologists
tell me every day, at least 10 times. I want to be one though. I kinda like biology, it's kinda cool. - [Lex] Yeah, it's very cool. It's evolutionary.
- But basically what Darwin, and Mendel, and all of these people do, it's just they draw pictures, right? And they text though. They were able to draw pictures and say, "Oh, these look like common classes." - Yeah. They're artists, really,
they're just, you know? - But they were able to
find out a lot, right, in looking at vertebrates and invertebrates, the Cambrian explosion and all this stuff. And then came the genomic revolution, and suddenly everyone
used gene sequencing. And Craig Venter is a good example. I think he's gone around
the world in his yacht just taking up samples
looking for new species, where he's just found new species of life just from sequencing. It's amazing. So you have taxonomy,
and you have sequencing, and then you can also do a little bit of kind of molecular
kind of archeology like, you know, measure the samples and kind of form some inference. What we did is we were
able to fingerprint. We took a load of random
samples from all of biology, and we used mass spectrometry. And what we did now is not just look for individual molecules, but we looked for coexisting molecules, where we had to look at
their joint assembly space, where we were able to cut them apart and undergo recursion in the mass spec and infer some relationships. And we were able to recapitulate the tree of life using mass spectrometry, no sequencing and no drawing. - All right, can you try to say that again with a little more detail? So, recreating. What does it take to
recreate the tree of life? What does the reverse engineering
process look like here? - So what you do is you
take an unknown sample, you bung it into the mass spec. 'Cause this comes from what you're asking, like, what do you see in E. coli? And so in E. Coli, it's not that the most sophisticated cells on earth make the most
sophisticated molecules. It is the coexistence of lots of complex molecules above a threshold. And so what we realize
is you could fingerprint different life forms. So fungi make really
complicated molecules. Why? 'Cause they can't move. They have to make everything on site. Whereas, you know, some
animals are like lazy. They can just go eat the fungi and they don't need to make very much. And so what you do is, so you take, I don't
know, the fingerprint, maybe the top number of high
molecular weight molecules you find in the sample, you fragment them to get
their assembly indices, and then what you can do is you can infer common origins of molecules. When the reverse engineering
of the assembly space, you can infer common roots and look at what's called
the joint assembly space. But let's translate that
into the experiment. Take a sample, bung it in the mass spec, take the top say 10
molecules, fragment them, and that gives you one fingerprint. Then you do it for another sample, you get another fingerprint. Now the question is you say, hey, are these samples
the same or different? And that's what we've been able to do. And by basically looking
at the assembly space that these molecules create. Without any knowledge of assembly theory, you are unable to do it. With the knowledge of assembly theory, you can reconstruct the tree. - How does knowing if
they're the same or different give you the tree? - Let's go to two leaves on different branches on the tree, right? What you can do, by counting
the number of differences, you can estimate how far
away their origin was. - Got it.
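A toy sketch of that idea (my own illustration, not the lab's actual pipeline): treat each sample as the set of high-assembly molecules detected in it, and use the number of differences between two fingerprints as a rough distance, so samples that share more complex molecules sit closer together on the tree.

```python
# Toy fingerprint comparison: count the molecules found in one sample but not the other.
def fingerprint_distance(sample_a, sample_b):
    return len(sample_a ^ sample_b)   # symmetric difference of the two sets

fungus  = {"m1", "m2", "m3", "m7", "m9"}
animal  = {"m1", "m2", "m4"}
microbe = {"m1", "m5", "m6", "m8"}

print(fingerprint_distance(fungus, animal))   # 4 differences: closer on the tree
print(fingerprint_distance(fungus, microbe))  # 7 differences: further apart
```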
- And that's all we do. And it just works. But when we realized you
could even use assembly theory to recapitulate the tree of
life with no gene sequencing, we were like, "huh." - So this is looking at samples that exist today in the world. What about like things
that no longer exist? I mean, the tree contains
information about the past. Some of it is gone. - Yeah, absolutely. I would love to get old fossil samples and apply assembly theory mass spec and see if we can find new forms of life. They are no longer
amenable to gene sequencing 'cause the DNA is all gone. 'Cause DNA and RNA is quite unstable, but some of them are complex molecules. It might be there, it
might give you a hint of something new. Or, wouldn't it be great
if you find a sample that's worth really persevering and doing, you know, doing the proper extraction to, you know, PCR and so on, and then sequence it, and then put it together. - So when a thing dies, you can still get some
information about its complexity. - Yeah. And it appears that
you can do some dating. Now, there are really good techniques. There's radiocarbon dating. There is longer dating, going and looking at
radioactive minerals and so on. And you can also, in bone, you can look at the- What happens after something dies, you get what's called racemization, where the chirality in the
polymers basically changes and you get decomposition. And the rate of the deviation
from the pure enantiomer to the mixture, it gives you
a timescale on it, half-life, so you can date when it died. I wanna use assembly theory
to see if I can use it and date death and things, and trace the tree of life, and also decomposition of molecules. - [Lex] Do you think it's possible? - Oh yeah. Without a doubt. It may not be better. 'Cause like, I was just at a conference where some brilliant people were looking at isotope enrichment and looking at how life enriches isotopes, and there are really sophisticated
stuff that they're doing. But I think there's some
fun to be had there, because it gives you another dimension of dating how old is this
molecule in terms of- Or, more importantly, how long ago was this
molecule produced by life? The more complex the molecule, the more prospect for decomposition, oxidation, reorganization,
loss of chirality and all that jazz. But what life also does is it enriches, as you get older, the
amount of carbon 13 in you goes up because
of the way the bonding is in carbon 13. So it has a slightly different
bond strength than you, it's called the kinetic isotope effect. So you can literally date
how old you are, you know, or when you stop metabolizing. So you could date someone
how old they are, I think. I'm making this up, this might be right, but I think it's roughly right. The amount of carbon 13 you have in you, you can kind of estimate how old you are. - How old living humans
are or living organisms? - Yeah, like you could say, oh, this person is 10 years old, and this person 30 years old because they've been
metabolizing more carbon, and they've accumulated it. That's the basic idea. It's probably a completely
wrong timescale but- - Signatures of chemistry are fascinating. So you've been saying a
lot of chemistry examples for assembly theory. What if we zoom out and look
at a bigger scale of an object, you know, like really complex objects, like humans or living organisms that are made up of, you know, millions or billions of other organisms? How do you try to apply
assembly theory to that? - At the moment, we should be able to do
this to morphology in cells. So we're looking at cell surfaces. And really, I'm trying to extend further. It's just that, you know, we work so hard to get this paper out and people to start discussing the ideas. But it's kinda funny, because I think the
penny is falling on this. - What's it mean for
a penny to be falling? - I mean, the penny's dropped, right? 'Cause a lotta people were like, "it's rubbish, it's rubbish. You've insulted me. It's wrong." And, you know, I mean
the paper got published on the 4th of October. It had 2.3 million
engagements on Twitter, right? And it's been downloaded over
a few hundred thousand times. And someone actually said to me, wrote to me and said, "This is an example of
really bad writing," and what not to do. And I was like, if all of my papers got read this much, 'cause that's the objective. If I publish a paper, I want people to read it. I wanna write that badly again. - Yeah. I don't know what's the deep insight here about the negativity in the space. I think it's probably the immune system of the scientific community making sure that there's no bullshit that gets published. But it can overfire, it can do a lot of damage, it can shut down conversations in a way that's not productive. - I mean, I'll answer your question about the hierarchy in assembly, but let's go back to the perception, people saying the paper was badly written. I mean, of course we could improve it. We could always improve the clarity. - Let's go there before
we go to the hierarchy. You know, it has been criticized
quite a bit, the paper. What has been some criticism
that you found most powerful, like, that you can understand
and can you explain it? - Yes. The most exciting criticism came from the evolutionary biologists, telling me that they thought that the origin of life was a solved problem. And I was like, whoa,
we're really onto something because it's clearly not. And then when you poked them on that, they just said, "No, you
don't understand evolution." And I said, "No, no, I don't think you
understand that evolution had to occur before
biology and there's a gap." For me, that misunderstanding, and that did cause an immune response which was really interesting. The second thing was the
fact that physicists- Well, the physicists were
actually really polite, right? And really nice about it. But they just said, "Huh, we're not really sure about the initial conditions thing." But this is a really big debate that we should certainly get into because you know the emergence of life was not encoded in the initial
conditions of the universe. And I think assembly theory
shows why it can't be. - Okay, sure. If you could say that again. - The emergence of life was not and cannot in principle be encoded in the initial
conditions of the universe. - Just to clarify what you
mean by life is like what? High assembly index objects? - Yeah. And this goes back to
your favorite subject. - What's that?
- Time. (both chuckling) - Right, so why? What does time have to do with it? - I mean, probably we
can come back to it later if we have time. I think I now understand
how to explain how- You know, lots of people got angry with the assembly paper, but also the ramifications of this is how time is fundamental in the universe and this notion of combinatorial spaces. And there are so many layers on this, but I think you have to become
an intuitionist mathematician and you have to abandon
platonic mathematics. And also, platonic mathematics
has led physics astray. But there's a lot to unpack there. So we can go to the- - Platonic mathematics. Okay. The evolutionary biologists criticize it because the origin of life is understood and it doesn't require an explanation that involves physics. - Yeah.
- That's their statement. - Well, I mean, they said
lots of confusing statements. Basically, I realized the
evolutionary biology community that were vocal, and some
of them were really rude, really spiteful and needlessly so, right? Because look, you know, people misunderstand publication as well. Some of the people have said, "How dare this be published in Nature? This is, you know, what a terrible journal." And I said to people, "Look, this is a brand new idea that's not only
potentially going to change the way we look at biology, it's gonna change the way
we look at the universe." And everyone's, like, saying, "How dare you? How dare you be so grandiose?" I'm like, "No, no, no, this is not hype." We're not like saying we've invented some, I don't know, we've discovered a alien in a closet somewhere just for hype. We genuinely mean this to
genuinely have the impact or ask the question. And the way people jumped on that was a really bad
precedent for young people who wanna actually do something new because this makes a bold claim. And the chances are that it's not correct. But what I wanted to do
is a couple of things. Is I wanted to make a bold claim that was precise and
testable and correctable. Not another wooly information
in biology argument, information Turing machine,
blah, blah, blah, blah, blah. A concrete series of statements that can be falsified and explored, and either the theory could
be destroyed or built upon. - Well, what about the criticism of you're just putting
a bunch of sexy names on something that's already obvious? - Yeah, that's really good. So the assembly index of
a molecule is not obvious. No one had measured it before. And no one has thought to
quantify selection complexity and copy number before in such
a primitive quantifiable way. I think the nice thing about this paper, this paper is a tribute to all the people that understand that biology does something very interesting. Some people call it negentropy, some people call it, think about, you know, organizational principles. Lots of people were not
shocked by the paper because they'd done it before. A lot of the arguments we got, some people said, "Oh, it's rubbish. Oh, by the way, I had this
idea 20 years before." I was like, "Which one?" The rubbish part or the
really revolutionary part? So this kind of plucks
two strings at once. It plucked the string that there is something interesting that biology does, as we can see around us, but that we haven't quantified yet. And what this is, is the first
stab at quantifying that. So the fact that people
said this is obvious but it's also- So if it's obvious, why
have you not done it? - Sure, but there's a
few things to say there. One is, you know, this is in part a philosophical framework
because, you know, it's not like you can apply this generally to any object in the universe. It's very chemistry focused. - Yeah, well, I think you will be able to, we just haven't got there robustly. So if we can say how can we- Let's go up a level. So if we go up a level, let's go up from molecules to cells, 'cause you would jump to people and I jump to emoticons, and both were good and they will be assemblies. - [Lex] Yeah, let's stick with cells. Yeah, good point. - So if we go from
molecules to assemblies, and let's take a cellular assembly. A nice thing about a cell is you can tell the
difference between a eukaryote and a prokaryote, right? The organelles are specialized differently. We then look at the cell surface, and the cell surface has
different glycosylation patterns and these cells will stick together. Now let's go up a level. In multicellular creatures, you have cellular differentiation. Now if you think about
how embryos develop, you go all the way back, those cells undergo differentiation in a causal way that's a biomechanical feedback between the genetics and the biomechanics. I think we can use assembly theory to apply to tissue types. We can even apply it to
different cell disease types. So that's what we're doing next, but we are trying to walk. You know, the thing is, I wanna leap ahead to go, well, we'll apply it to culture. But clearly, you can apply
it to memes and culture. And we've also applied assembly theory to CAs, and not as you think. - [Lex] Cellular automata, by the way. - Yeah, yeah, to cellular automaton, not just as you think. Different CA rules were
invented by different people at different times. And one of my coworkers, a very talented chap, basically was like, "Oh, I can realize that different people had different ideas with different rules, and they copied each other and made slightly different
cellular automaton rules, and they looked at them online." And so he was able to infer an assembly index and copy number for rule whatever, doing this thing. But I digress. But it does show you can
apply it at a higher scale. So what do we need to do to
apply assembly theory to things? We need to agree there's a
common set of building blocks. Well, in a multicellular creature, you need to look back in time. So there is the initial cell, which a creature is fertilized
and then starts to grow, and then there is cell differentiation. And you have to then make that
causal chain both on those. So that requires development
of the organism in time. Or if you look at the cell
surfaces and the cell types, they've got different features on the cell walls and inside the cell. So we're building up. But obviously, I wanna leap
to things like emoticons, language, mathematical theorems. - Yeah, but that's a very
large number of steps to get from a molecule to the human brain. - Yeah, and I think they are related but in hierarchies of emergence, right? So you shouldn't compare them. I mean, the assembly
index of a human brain, what does that even mean? Well, maybe we can look at the morphology of the human brain. Say all human brains have these number of features in common. And then let's look at a brain in a whale or a dolphin or a chimpanzee
or a bird and say, okay, let's look at the
assembly indices and number of features in these. And now the copy number is just a number of how many birds are there, how many chimpanzees are there, how many humans are there? - But then you have to discover the features that
you would be looking for. - Yeah, and that means
you need to have some idea of the anatomy. - But there's an automated
way to discover features? - I guess so. I mean, and I think this is a good way to apply machine learning
and image recognition just to basically characterize things. - To apply compression
to it to see what emerges and then use the thing. The features used as
part of the compression as the thing that is searched for when you're measuring assembly
index and copy number. - And the compression has to remember the assembly universe, which is that you have to go from assembly possible to assembly contingent. 'Cause assembly possible is all possible brains or possible features, all the time. But we know that on the tree of life, and also on the lineage of
life, going back to LUCA, the human brain just didn't
spring into existence yesterday, it is a long lineage of
brains going all the way back. And so, if we could do assembly theory to understand the development, not just in evolutionary history but in biological development as you grow, we are gonna learn something more. - What would be amazing is if
you can use assembly theory, this framework, to show the increase in the assembly index associated with, I don't know, cultures or pieces of text like language or images and so on, and illustrate without knowing
the data ahead of time, just kinda like you did with NASA, that you were able to demonstrate that it applies in those other contexts. I mean, and that, you know,
probably wouldn't at first and you have to evolve the theory somehow. You have to change it, you
have to expand it, you know? - [Lee] I think so. - But like that. I guess this is, as a paper, a first step in saying, okay, can we create a general framework for measuring complexity of
objects, for measuring life, the complexity of living organisms. - [Lee] Yeah. - That's what this is reaching for. - That is the first step. And also to say, look, we have a way of quantifying selection and evolution in a fairly, not mundane, but a fairly mechanical way. Because before now, you know, the ground truth for
it was very subjective. Whereas here, we're talking
about clean observables. And there's gonna be layers on that. I mean, with collaborators right now, we already think we can do
assembly theory on language. And not only that. Wouldn't it be great if we can figure out how under pressure
language is gonna evolve and be more efficient, 'cause you're gonna wanna transmit things. And again, it's not
just about compression, it is about understanding how you can make the most of the architecture
you've already built. And I think this is something beautiful that evolution does. We're reusing those architectures. We can't just abandon
our evolutionary history. And if you don't wanna abandon
your evolutionary history, and you know that evolution
has been happening, then assembly theory works. And I think that's the
key comment I wanna make, is that assembly theory
is great for understanding when evolution has been used. The next jump is when we go to technology. 'Cause of course, if you
take the M3 processor. I haven't bought one yet, I can't justify it, but
I want to at some point. The M3 processor arguably, there's quite a lot of features and quite large number. The M2 came before it, then the M1, all the way back. You can apply assembly theory to microprocessor architecture. It doesn't take a huge leap to see that. - I'm a Linux guy, by the way, so your examples go away over my head. - [Lee] Yeah, well, whatever. - Is that a fruit company of some sort? I don't even know. Yeah, there's a lotta interesting stuff to ask about language. Like you could look at- How would that work? You could look at GPT-1,
GPT-2, GPT-3, 3.5, 4 and try to analyze the kind
of language it produces. I mean, that's almost trying
to look at assembly index of intelligent systems. - Yeah, I mean, I think the thing about large language models, and this is a whole hobby
horse I have at the moment, is that obviously they're all about- The evidence of evolution
in the large language model comes from all the people that
produced all the language. And that's really interesting. And all the corrections in
the Mechanical Turk, right? - [Lex] Sure. - And so-
- But that's the part of the history, part of
the memory of the system. - Exactly. So it would be really
interesting to basically use an assembly-based approach
to making language in a hierarchy, right? My guess is that we might
be able to build a new type of large language model
that uses assembly theory so that it has more understanding of the past and how things were created. Where basically, the thing with LLMs
everywhere all at once, splat, and make the user happy. So there's not much
intelligence in the model. The model is how the human
interacts with the model. But wouldn't it be great
if we could understand how to embed more
intelligence in the system? - What do you mean by intelligence there? Like you seem to associate intelligence with history and memory. - I think selection produces intelligence. - Wait, what? You're almost implying that
selection is intelligence. No.
- Yeah, kind of. I would go out on a limb and say that. But I think it's a little bit more. Human beings have the ability to abstract and they can break beyond selection. Darwinian selection. Because a human being
doesn't have to basically do trial and error. Like they can think about it and say, oh, that's a
bad idea, won't do that. And then technologies and so on. - So we escaped Darwinian evolution and now we're onto some other
kind of evolution I guess? - Yeah.
- Higher level evolution. - And the assembly theory will
measure that as well, right? Because it's all a lineage. - Okay, another piece of criticism, or by way of question is, how's assembly theory
or maybe assembly index different from Kolmogorov complexity? So for people who don't know, a Kolmogorov complexity of an object is the length of a
shortest computer program that produces the object as output. - Yeah, there seems to be a disconnect between the computational
approach, so yeah. So a Kolmogorov measure requires a Turing machine, requires a computer. And that's one thing. And the other thing is assembly theory is supposed to trace the process by which life and evolution emerged, right? There's a main thing there. There are lots of other layers. So Kolmogorov complexity, you can approximate
Kolmogorov complexity but it's not really telling you very much about the actual- It's really telling you about like, your compression of your dataset. And so, that doesn't really
help you identify the turtle, which in this case is the computer. And so what assembly theory does is, I'm gonna say, (chuckles) there's a trigger warning
for anyone listening who loves complexity theory. I think that we're gonna show that AIT is a very important
subset of assembly theory, because here's what happens. I think that assembly theory allows us to understand when selection was occurring: selection produces factories and things, factories in the end produce computers, then algorithmic information
theory comes out of that. The frustration I've
had with looking at life through this kind of information theory is it doesn't take into account causation. So the main difference
between assembly theory and all these complexity measures is there's no causal chain. And I think that's the main- - That's the causal chains at
the core of assembly theory. - Exactly. And if you've got all your
data in a computer memory, all the data's the same. You can access it in the same type of way, you don't care, you just compress it, and you either look at the program runtime or the shortest program. And that for me is
absolutely not capturing what selection does. - But assembly theory looks at objects, it doesn't have information
about the object history. It's going to try to infer that history by looking for the
shortest history, right? The object doesn't like
have a Wikipedia page that goes with it about its history. - I would say it does in a way, and it is fascinating to look at. So you've just got the object and you have no other
information about the object. What assembly theory allows you to do just with the object is to- And the word infer is
correct, I agree with infer. So it's, you know, like say
well that's not the history, but something really
interesting comes from this. The shortest path is
inferred from the object. That is the worst case
scenario if you have no machine to make it. So that tells you about the
depth of that object in time. And so, what assembly
theory allows you to do is, without considering any
other circumstances, to say from this object how
deep is this object in time if we just treat the object as itself without any other constraints. And that's super powerful because the shortest path then allows you to say, oh, this object
wasn't just created randomly, there was a process. And so, assembly theory
is not meant to, you know, one up AIT or to ignore the factory. It's just to say, hey,
there was a factory, and how big was that factory
and how deep in time is it? - But it's still
computationally very difficult to compute that history, right? For complex objects. - It is. It becomes harder. But one of the things that's super nice is that it constrains your
initial conditions, right? It constrains where you're gonna be. So one of the things we're doing right now is applying assembly
theory to drug discovery. Now what everyone's doing right now is taking all the proteins
and looking at the proteins, and not at the molecules. Why not instead look at the molecules that are involved in interacting with the receptors over time, and use how the molecules evolve over time as a proxy for how the proteins evolved over time, and then use that to constrain
your drug discovery process. You flip the problem 180 and focus on the molecule evolution
rather than the protein. And so you can guess in the
future what might happen. So you rather than having to consider all possible molecules, you know where to focus. And that's the same
thing if you're looking in assembly spaces for an object where you don't know the entire history but you know in the history of this object it's not gonna have some other motif there that doesn't appear in the past. - But just even for the drug
discovery point you made, don't you have to
simulate all of chemistry to figure out how to
come up with constraints through the molecules and- - No.
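A minimal, hypothetical sketch of the "constrain where you look" idea from a few lines up, not Lee's actual method: if every fragment of a candidate has to appear somewhere among molecules already known to have evolved with the target, most of a combinatorial library can be discarded without simulating any chemistry. Molecules are toy strings here, and all names and data are made up.

```python
# Hypothetical filter: keep only candidates whose fragments all occur in the
# set of molecules known to have evolved with the target. Molecules are
# modeled as plain strings and motifs as substrings, purely to show the
# shape of the constraint.

def fragments(molecule: str, size: int = 2) -> set[str]:
    """All contiguous fragments of a fixed size (a toy stand-in for motifs)."""
    return {molecule[i:i + size] for i in range(len(molecule) - size + 1)}

def allowed_motifs(evolved_molecules: list[str]) -> set[str]:
    """Union of fragments seen anywhere in the known, evolved molecules."""
    motifs: set[str] = set()
    for m in evolved_molecules:
        motifs |= fragments(m)
    return motifs

def constrain(candidates: list[str], evolved_molecules: list[str]) -> list[str]:
    """Discard candidates containing any motif never seen in the evolved set."""
    motifs = allowed_motifs(evolved_molecules)
    return [c for c in candidates if fragments(c) <= motifs]

evolved = ["ABAB", "ABCB"]              # invented "molecules that evolved with the receptor"
library = ["ABABAB", "ABXB", "BCBA"]    # invented candidate library
print(constrain(library, evolved))      # only candidates built from seen motifs survive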
- I don't know enough about producing. - This is another thing
that I think causes- 'Cause this paper goes
across so many boundaries. So chemists have looked at this and said this is not a correct reaction. It's like, no, it's a graph. (laughing) - Sure, there are assembly index and shortest path examples
here on chemistry. - Yeah. And what you do is you look
at the minimal constraints on that graph. Of course it has some
mapping to the synthesis, but actually you don't have
to know all of chemistry, you can build up the
constraints space rather nicely. But this is just at the beginning, right? There are so many
directions this could go. And I said, it could all be wrong but hopefully it's less wrong. - What about the little
criticism I saw of do you- By way of question, do you consider the
different probabilities of each reaction in the chain? So like, that there could be different- When you look at a chain of events that led up to the creation of an object, doesn't it matter that
some parts in the chain are less likely than others? - No. - It doesn't matter.
- No, no. Well, let's go back. So, no not less likely, but it reacts so- So, no. So let's go back to what
we're looking at here. So the assembly index is the minimal path that could have created that
object probabilistically. So imagine you have all
your atoms in the plasma, you got enough energy, there's collisions. What is the quickest way you
could zip out that molecule with no reaction constraints? - How do you define quickest there then? - It's just basically a
walk on a random graph. So we make an assumption that basically the timescale for forming the bonds- So, no, I don't wanna say that because then it's gonna have people obsessing about this point, and your
criticism is a really good one. What we're trying to say is like this puts a lower bound on something. Of course, some reactions are
less possible than others. But actually, I don't think
chemical reactions exist. - Oh, boy. What does that mean? Why don't chemical reactions exist? - I'm writing a paper right now that I keep being told I have to finish, and it's called "The Origin
of Chemical Reactions". And it merely says that reactivity exists, as controlled by the laws
of quantum mechanics, and reactions, chemists
put names on reactions, you could have like, I don't know, the Wittig reaction, which
is by, you know, Wittig. You could have the Suzuki
reaction which is by Suzuki. Now what are these reactions? So these reactions are
constrained by the following. They're constrained by the fact they're on planet earth, 1G,
298 kelvin, and one bar. So these are constraints. They're also constrained by the chemical composition of earth. Oxygen, availability, all this stuff. And that then allows us
to focus on our chemistry. So when a chemist does a reaction, that's a really nice compressed shorthand for constraint application. Glass flask, pure reagent, temperature pressure,
boom, boom, boom, boom, control, control,
control, control, control. So of course, we have bond energies. So the bond energies are kind
of intrinsic in a vacuum. So the bond energy. You have to have a bond. And so, for assembly theory to work, you have to have a bond, which means that bond
has to give the molecule a certain half-life. So you're probably gonna find later on that some bonds are weaker. When you look at the
assembly of some molecules, you're gonna miscount the
assembly of the molecule 'cause it falls apart too quickly, 'cause the bonds just form. But you can solve that
by looking at infrared. So when people think
about the probability, they're kinda misunderstanding. Assembly theory says
nothing about the chemistry because chemistry is chemistry and their constraints
are put in by biology. There was no chemist
at the origin of life, unless you believe in
the chemist in the sky and they were, you know,
it's like Santa Claus, they had a lot of work to do. But chemical reactions do not exist, the constraints that allow
chemical transformations to occur do exist. - Okay, okay. So there's no chemical reactions, it's all constraint application which enables the emergence of- What's a different word
for chemical reaction? - Transformation.
- Transformation. - [Lee] Yeah, like a function. It's a function.
- Yeah. But no, but I love chemical
reactions as a shorthand and so the chemists don't all go mad. I mean, of course chemical
reactions exist on earth. - It's a shorthand
- It's a shorthand for these constraints. - Right. So assuming all these
constraints that we've been using for so long that we just assume
that that's always the case, in natural language conversation. - Exactly. The grammar of chemistry of
course emerges in reactions and we can use 'em reliably. But I do not think the Wittig reaction is accessible on Venus. - Right, and this is useful
to remember, you know, to frame it as constraint
application is useful for when you zoom out
to the bigger picture of the universe and
looking at the chemistry of the universe and then starting
to apply assembly theory. - Yeah.
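To make the earlier "minimal path" description concrete, here is a hedged toy version on strings rather than molecules: start from single characters, only join pieces that have already been made (reuse is free), and the assembly index is the fewest joins needed to reach the target. The brute-force search below is only workable for tiny strings and is not the method used on real molecules.

```python
def assembly_index(target: str) -> int:
    """Fewest join operations needed to build `target` in this toy model:
    start from single characters, and each step concatenates two pieces
    that already exist (reusing earlier products is free)."""

    def search(made: frozenset, steps: int, limit: int) -> bool:
        if target in made:
            return True
        if steps == limit:
            return False
        # Only bother with joins that could still appear inside the target.
        candidates = {a + b for a in made for b in made if a + b in target}
        return any(search(made | {c}, steps + 1, limit) for c in candidates)

    start = frozenset(target)              # the basic building blocks: single characters
    limit = 0
    while not search(start, 0, limit):     # iterative deepening: smallest limit that succeeds
        limit += 1
    return limit

# "banana" needs 4 joins here: na, nana, ba, then ba + nana.
print(assembly_index("banana"))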
- That's interesting. That's really interesting. (chuckles) Well, we've also
pissed off the chemist now. - Oh, I expect they're pretty happy, well most of them. (laughing) - Everybody deep down is happy, I think. They're just sometimes feisty, that's how they have fun. - Everyone is grumpy on some days when- The problem with this paper is you- I used to do this
occasionally when I was young. Go to a meeting and just
find a way to offend everyone at the meeting simultaneously. Even the factions that
don't like each other, they're all unified in their hatred of you just offending them. This paper, it feels
like the person that went to the party and offended
everyone simultaneously, so they stop fighting with themselves and just focused on this paper. - Maybe just a little insider
interesting information. What were the editors of "Nature", their reviews and so on, how difficult was that process? 'Cause this is a pretty, like, big paper. - When we originally sent the paper, the editor said that, you know- This was quite a long process. We sent the paper, and the editor gave us some feedback and said, "You know, I don't think it's that interesting." Or, "It's a hard concept." And Sara and I took a
year to rewrite the paper. - Was the nature of the
feedback very specific on like, this part or this part, or was it, like, what
are you guys smoking? - Yeah, it was kind of the latter. What you smoking?
- Okay. (Lee laughing) - But polite and there's promise. - Yeah, well, the thing is, the editor was really critical, but in a really professional way. And I mean, for me, this was
the way science should happen. So when it came back, you know, we had too many equations in the paper. If you look at the preprint, they're just equations everywhere, like there are 23 equations. And I said to Abhishek, who was the first author, we've gotta remove all the equations, but my assembly equation's staying. And Abhishek was like,
"You know, no, we can't." I said, "Well, look, if we
want to explain this to people, there's a real challenge." And so Sara and I went through the, I think it was actually
160 versions of the paper, but basically we got to
version 40 or something. We said, "Right, zero, let's start again." So we wrote the whole paper again. We knew the entire archive.
- Amazing. - And we just went bit
by bit by bit and said, "What is it we wanna say?" And then we sent the paper in, and we expected it to be rejected and not even go to review. And then we got the notification back, it had gone to review, and we were like, "Oh my God,
it's so gonna get rejected. How's it gonna get rejected?" 'Cause the first assembly paper on the mass spec we sent to "Nature" went through six rounds of
review and was rejected, right? And this was by a chemist who just said, "I don't believe you, you
must be committing fraud." And long story, probably a boring story. But in this case, it went out to review, the comments came back, and the comments were incredibly- No, they were very deep comments from all the reviewers. But the nice thing was
the reviewers were kind of very critical but not dismissive. They were like, "Oh, really? Explain this. Explain this. Explain this. Explain this."
- That's great. - "Are you sure it's not Kolmogorov? Are you sure it's not this?" And we went through, I think, three rounds of review pretty quick. And the editor went, "Yeah, it's in." - But maybe you could just
comment on the whole process. You've published some pretty huge papers on all kinds of topics
within chemistry and beyond. Some of them have some
little spice in them, a little spice of crazy. Like Tom Waits says, "I like my town with a
little drop of poison." So, you know, it's not a mundane paper. So what's it like psychologically to go through all this process
to keep getting rejected to get reviews from people
that don't get the paper or all that kind of stuff? Just from a question of a scientist, what is that like? - I mean, this paper, for me, kind of- 'Cause this wasn't the first time we tried to publish assembly theory
at the highest level, the "Nature Communications" paper on the mass spec, on the idea, went to
"Nature" and got rejected. Look, it went through six rounds of review and got rejected. And I just was so confused
when the chemist said, "This can't be possible, I do not believe you
can measure complexity using mass spec. And also, by the way, complex molecules can randomly form." And we're like, "But look at the data. The data says." And they said, "No, no,
we don't believe you." I just wouldn't give up. (laughing) And the editor, in the end, different editors actually, right? - What's behind that never giving up? When you're sitting there
10 o'clock in the evening, there's a melancholy
feeling that comes over you, and you're like, "Okay, this
is rejection number five." Or it's not rejection but
maybe it feels like a rejection because of the, you know, the comments are that you totally don't get it. Like what gives you strength
to keep going there? - I don't know. I don't normally get
emotional about papers. It's not about giving up because we wanna get it published, 'cause we
want the glory or anything. It's just like, why don't you understand? So why I would just try to
be as rational as possible and say, "Yeah you didn't like it, tell me why?" And then- Sorry. Silly.
- And you part- - I never get emotional
about papers normally, but I think what we do is
you just compressed, like, five years of angst from this. - So it's been rough? - It's not just rough,
it's like it happened. You know, I came up with
the assembly equation, you know, remote from Sara in Arizona and the people at SFI, I felt like I was a mad person, like, you know, the guy
depicted in "A Beautiful Mind". Not the actual genius part, but just the, gibberish, gibberish,
gibberish. (laughing) Because I kept writing expansions. And I have no mathematical ability at all, and I was making these
mathematical expansions where I kept seeing the same motif again. I was like, "Oh, I think
this is a copy number." The same string is coming
again, again, again. I couldn't do the math. And then I realized the copy
number fell outta the equation and everything collapsed down. I was like, "Oh, that works, kind of." So we submitted the paper, and then when it was
almost accepted, right? The mass spec one. And there were astrobiologists who said, "Great," you know, a mass spectroscopist said, "Great," and the chemist went, "Nonsense." Like, "Biggest pile of nonsense
ever, fraud," you know? And I was like, "But why fraud?" And they just said, "Just because." And I was like, "Well." And I could not convince
the editor in this case. The editor was just so off, 'cause they see it as like a kind of a, you know, you're wasting my time. And I would not give up. I wrote. I went and dissected,
you know, all the parts. And I think although, I mean, I got upset about it, you know, it was kind of embarrassing
actually, but I guess- - [Lex] (speaks faintly) beautiful. - But it was just trying to understand why they didn't like it. So a part of me was
like really devastated. And a part of me was super excited, 'cause I'm like, "Huh, they
can't tell me why I'm wrong." And this kinda goes back to, you know- When I was at school, I was in a kinda learning
difficulties class and I kept going to the teacher and say, you know, "What do I
do today to prove I'm smart?" And they were like, "Nothing, you can't." I was like, "Gimme a job," you know, "gimme something to do. Gimme a job to do. Something just to do as we-" And I kinda felt like that a bit when I was arguing with the- And not arguing, there
was no (indistinct), and I wasn't telling the
editor they were idiots or anything like this, or the reviewers, I kept it strictly, like, factual. And all I did is I just
kept knocking it down bit by bit by bit by bit by bit. It was ultimately rejected, and it got published though, elsewhere. And then, the actual experimental data. In this paper, the
experimental justification was already published. So when we did this one and
we went through the versions and then we sent it in, and in the end it just got accepted. We were like, "Well,
that's kinda cool, right?" The first author was like, "I can't believe it got accepted." Like, nor am I, but it's great. It's like, it's good. And then when the paper was published, I was not expecting the backlash. I was expecting computational- Well, no actually, I was
just expecting one person who'd been trolling me
for a while about it just to carry on trolling. But I didn't expect the backlash. And then, I wrote to the
editor and apologized. And the editor was like,
"What are you apologizing for? It was a great paper. Of course it's gonna get backlash, you said some controversial
stuff, but it's awesome." - Well, I think it's a
beautiful story of perseverance, and the backlash is just a
negative word for discourse, which I think is beautiful. That's the science. - You know, when it got accepted and people were saying we're
kind of like hacking on it. I was like, "Papers are not gold medals." The reason I wanted to publish that paper in "Nature" is because it says, "Hey, there's something
before biological evolution." You have to have that if you're not a creationist, by the way. This is an approach. First time someone has
put a concrete mechanism, sorry, a concrete quantification. And what comes next, you're
pushing on, is a mechanism. And that's what we need to get to is an autocatalytic set,
self-replicating molecules, some other features that come in. And the fact that this
paper has been so discussed, for me, is a dream come true. Like, it doesn't get better than that. If you can't accept a
few people hating it. And the nice thing is, the thing that really makes me happy is that no one has attacked
the actual physical content. Like, you can measure the assembly index, you can measure selection now. Well, either that's helpful or unhelpful. If it's unhelpful, this
paper will sink down and no one will use it again. If it's helpful, it'll help
people build a scaffold on it, and we'll start to
converge to a new paradigm. So I think that that's the
thing that I wanted to see, you know, my colleagues,
authors, collaborators, and people were like, "You've just published this
paper, you're a chemist. Why have you done this? Like, who are you to be
doing evolutionary theory?" Like, well, "I dunno. I mean, sorry, did I need to-" - Cause anyone to do anything. Well, I'm glad you did. Before coming back to origin of life and these kinds of questions, you mentioned learning difficulties. I didn't know about this. So what was it like? - I wasn't very good at school, right? - [Lex] This is when you were very young? - Yeah, yeah, in primary school. My handwriting was really poor, and apparently I couldn't read, and my mathematics was very poor. So they just said this is a problem, they identified it. My parents kind of at
the time were confused because I was busy taking things apart, buying electronic junk from the shop, trying to build computers and things. And then, when I was I think about- The major transition in my stupidity, like, you know, everyone thought I wasn't that stupid when I- Basically, everyone thought I was faking. I like stuff and I was
faking wanting to be- So I always wanted to be a scientist. So five, six, seven years old, be a scientist, take things apart. And everyone's like, "Yeah, this guy wants to be a scientist, but he's an idiot." (laughing) So everyone was really confused, I think, at first, that I wasn't
smarter than I, you know, was claiming to be. And then I just basically didn't do well in any of the tests and I went down and
down and down and down. And I was kinda like, "Huh, this is really embarrassing." I really like maths and
everyone says I can't do it. I really like kind of, you know, physics and chemistry and science, and people say you can't read and write. And so, I found myself in a
learning difficulties class at the end of primary school and the beginning of secondary school. In the UK, secondary school
is like 11, 12 years old. And I remember being put
in the remedial class. And the remedial class
was basically full of, well, three types of people. There were people that had quite violent, right, you know?
- Yeah. - And there were people
who couldn't speak English. And there were people that
really had learning difficulties. The one thing I can
objectively remember was- I mean, I could read. I like reading. I read a lot. I'm a bit of a rebel. I refused to read what I was told to read. And I found it difficult
to read individual words in the way they were told. But anyway, so I got
caught one day teaching someone else to read. And they said, "Okay, we
don't understand this." I always knew I wanted to be a scientist, but didn't really know what that meant. And I realized you had
to go to university. And I thought I can just go to university, it's like curious people. Like, no, no, no, you need to have these, you have to be able to
enter in these exams to get this grade point average. And the fact is the exams
you've been entered into, you're just gonna get C, D, or E. You can't even get A, B, or C, right? This is the UK GCSEs. I was like, "Oh, shit." And I said, "Can you just
put me into the high exams?" They said, "No, no, you're gonna fail. There's no chance." So my father kind of intervened and said, you know, "Just let him go in the exams." And they said, "He's
definitely gonna fail, it's a waste of time, waste of money." And he said, "Well, what if we paid?" So they said, "Well, okay." So he didn't actually have to pay. He only had to pay if I failed. So I took the exams and
passed them, fortunately. I didn't get the top grades but I, you know, I got into A levels. But then that also kind of limited what I could do at A levels. I wasn't allowed to do A-Level maths. - What do you mean you weren't allowed to? - Because I had such a bad
math grade from my GCSEs, I only had a C. But they wouldn't let me
go into the A, B, C tier for maths 'cause of some kind of
coursework requirement back then. The top grade I coulda got was a C. So C, D, or E. So I got a C. And they let me do kind of AS-level maths, which is this half intermediate, but I didn't get go university. But in the end I liked chemistry, I had a good chemistry teacher. So in the end, I got to
university to do chemistry. - So through that kind
of process, I think, for kids in that situation, it's easy to start
believing that you're not, well, how do I put it? That you're stupid, and basically give up. That you're just not good at math, you're not good at school. So this is by way of advice for people, for interesting people, for interesting young kids right now, experiencing the same thing. What was the source of
you not giving up there? - I have no idea other than I really liked not understanding stuff. For me, when I not understand something- (chuckles) I feel like I
don't understand anything now. But back then, I remember
when I was like, I don't know. I tried to build a laser
when I was, like, eight. And I thought, how hard could it be? And basically I was
gonna build a CO2 laser. I was like, "Right, I think I need some partially coated mirrors, I need some carbon dioxide, and I need a high voltage. And I was so stupid, right? I was kind of so embarrassed. I had to make enough CO2. I actually set a fire and
tried to filter the flame- - Oh, nice.
- To trap enough CO2. And I was like, it completely failed and I burnt half the the garage down. So my parents were not
very happy about that. So that was one thing. I was like, I really like
first principle thinking, and so, you know? So I remember being super curious and being determined to find answers. And so, when people do
give advice about this, why I ask for advice about this, I don't really have that much advice other than don't give up. And one of the things I tried to do as a chemistry professor in my group, is I hire people that
I think that, you know- If they're persistent enough, who am I to deny them the chance? Because, you know, people gave me a chance and I was able to do stuff. - Do you believe in yourself essentially? - So I love being around smart people and I love confusing smart people. And when I'm confusing smart people, and, you know, not by
stealing their wallets and hiding it somewhere, but if I can confuse smart people, that is the one piece of hope that I might be doing
something interesting. - Well, that's quite brilliant. Like as a gradient to optimize. Hang out with smart
people and confuse them. And the more confusing it is, the more there's something there. - And as long as they're not telling you you're just a complete idiot, and they give you different reasons. 'Cause like with assembly theory, when people said, "Oh, it's wrong." And I was like, "Why?" And no one could gimme
a consistent reason. They said, "Oh, because
it's been done before," or "it's just Kolmogorov," or "it's just there, that, and the other." So I think the thing
that I like to do is in- And in academia, it's hard, right? 'Cause people are critical. But I mean, you know, the criticism- I mean, although I got kind
of upset about it earlier, which is kind of silly, but not silly, because obviously it's
hard work being on your own or with a team, spatially separated, like during lockdown, and
trying to keep everyone on board and, you know, have some faith. I always wanted to have a new idea. And so, you know, I like a new idea and I wanna nurture it
as long as possible. And if someone can give
me actionable criticism. That's what I think I
was trying to say earlier when I was kind of like stuck for words. Give me actionable criticism. You know, it's wrong. Okay, why is it wrong? Say, oh, your equation's
incorrect for this, or your method is wrong. And so what I try and do
is get enough criticism from people to then
triangulate and go back. And I've been very fortunate in my life that I've got great colleagues, great collaborators, funders, mentors and people that will take the time to say, you're wrong because. And then what I have to do
is integrate the wrongness and go, aah, cool, maybe I can fix that. And I think criticism is really good. People have a go at me
'cause I'm really critical. I'm like, But I'm not criticizing, you know, you as a person, I'm just criticizing the idea and trying to make it better and say, "Well, what about
this," and, you know? And sometimes I'm kind of, you know, my filters are kind of,
you know, truncated. And in some ways, I'm just like, "That's wrong, that's wrong, that's wrong, why'd you do this?" And people are like, "Oh
my God, you just told me. You destroyed my life's work." I'm like, "Relax, no." I'm just like, "Let's make it better." And I think that we don't do that enough 'cause we're either personally critical, which isn't helpful, or we don't give any criticism at all because we're too scared. - Yeah. I've seen you be pretty
aggressively critical, but every time I've seen it, it's the idea, not the person. - I'm sure I make mistakes and that. I mean, you know, I argue with lots- I mean, I argue lots with Sara, and she was like kinda shocked. I've argued with Joscha
in the past, Joscha Bach, and he's like, "You're
just making that up." I'm like, "No, not quite, but kind of." You know, I had a big
argument with Sara about time, and she's like, "No, time doesn't exist." I'm like, "No, no, time does exist." And as she realized that her
conception of assembly theory and my conception of assembly
theory were the same thing, necessitated us to abandon
the fact that time is eternal. To actually really fundamentally question how the universe produces
combinatorial novelty. - So time is fundamental
for assembly theory? 'Cause I'm just trying to figure out where you and Sara converged. - I think assembly theory is
fine in this time right now, but I think it helps us understand that something interesting is going on. And I've been really inspired
by a guy called Nick Gisin. I'm gonna butcher his argument, but I love his argument a lot. So I hope he forgives
me if he hears about it. But basically, if you want free will, time has to be fundamental. And if you want time to be fundamental, you have to give up on
platonic mathematics and you have to use
intuitionist mathematics. And again, I'm gonna butcher this. But basically, Hilbert
said that, you know, "Infinite numbers are allowed." And I think it was Brouwer said, "No, you can't, all numbers are finite." So let's go back a step, 'cause it was like people were gonna say, assembly theory seems to explain that large combinatorial space allows you to produce things
like life and technology. And that large combinatorial
space is so big it's not even accessible to a Sean Carroll or David Deutsch multiverse. The physicist saying that all of the universe already
exists in time is probably, provably, that's a
strong word, not correct. That we are gonna know that
the universe as it stands, the present, the way the
present builds the future so big the universe can't
ever contain the future. And this is a really interesting thing. I think Max Tegmark has
this mathematical universe where he says, "You know, the universe is kind of like a block universe." And I apologize to Max
if I'm getting it wrong. You have the initial conditions and you can run the
universe right to the end and go backwards and
forwards in that universe. That is not correct. - Yeah, let me load that in. The universe is not big
enough to contain the future. - [Lee] Yeah, that's it. - So that's another
beautiful way of saying that time is fundamental. - Yes. This is why the law of
the excluded middle, something is true or false, only works in the past. Is it gonna snow in New York
next week, or in Austin? You might, in Austin, say, probably not. In New York, you might say, yeah. If you go forward to next week and say, did it snow in New York
last week, true or false? You can answer that question. The fact that the law
of the excluded middle cannot apply to the future
explains why time is fundamental. - Well, I mean that's a good
example, intuitive example. But it is possible that we
might be able to predict, you know, whether it's gonna snow if we had the perfect information. You're saying it'd not. - Impossible. Impossible. So here's why. I'll make a really quick argument. And this argument isn't mine, it's Nick's and a few other people. - [Lex] Can you explain his
view on time being fundamental? - Yeah, so I'll give my view, which kind of resonates with his. But basically, it's very simple actually. It would say that your ability to design and do an experiment is
exercising free will. So he used that thought process. But I never really
thought about it that way. And that you actively make decisions. I used to think that free will was a kind of consequence
of just selection, but I'm kind of understanding
that human free will is something really interesting. And he very much inspired me. But a thing what Sara Walker said that inspired me as well. And these will converge. Is that I think that the universe- The universe is very big, huge. But actually, the place that is largest in the universe right now, the largest place in
the universe is earth. - Yeah. I've seen you say that, and boy, that's an interesting one to process. What do you mean by that? Earth is the biggest
place in the universe. - Because we have this
combinatorial scaffolding going all the way back from LUCA. So you've got cells
that can self-replicate. And then you go all the way
to terraforming the earth. You've got all these architectures, the amount of selection that's going on, biological selection, just to be clear, biological evolution. And then you have multicellularity. Then animals, and abstraction. And with abstraction, there was another kick because you can then build architectures, and computers, and cultures, and language. And these things are the
biggest things that exist in the universe because we
can just build architectures that couldn't naturally arise anywhere. And the further that
distance goes in time, and this kind of is gigantic and- - [Lex] From a complexity perspective. - Yeah.
- Okay, wait a minute. I mean, I know you're being poetic, but how do you know there's
not other earth-like- You're basically saying
earth is really special. It's awesome stuff. As far as we look out, there's nothing like it going on. But how do you know there's
not a nearly infinite number of places where cool stuff
like this is going on? - I agree. I'll say again that earth
is the most gigantic thing we know in the universe. Combinatorially, we know. - [Lex] We know. Okay, yeah. - Now, I guess, this
is just purely a guess, I have no data, other than hope. Well, maybe not hope. No, I have some data. That every star in the
sky probably has planets and life is probably
emerging on these planets. But the amount of contingency
that is associated with life is such that I think the
combinatorial space associated with these planets is so different our causal cones are never
gonna overlap, or not easily. And this is a thing that
makes me sad about alien life. That's why we have to create alien life in the lab as quickly as possible, because I don't know
if we are gonna be able to build architectures that will intersect with alien intelligence architectures. - And intersect, you don't
mean in time or space? - [Lee] Time and the
ability to communicate. - So the ability to communicate. - Yeah. My biggest fear, in a way,
is that life is everywhere, but we've become infinitely more lonely because of our scaffolding
in that combinatorial space, because it's so big. - So you're saying the constraints created by the environment that led to the factory of Darwinian evolution are
just a little, tiny cone in a nearly infinite
combinatorial space, so there's other cones like it.
- Exactly. - Why can't we communicate with other- Just because we can't create it doesn't mean we can't
appreciate the creation, right? Sorry, detect the creation. - I truly don't know, but it's an excuse for
me to ask for people to give me money to
make a planet simulator. - Yeah, right. With a different kind of- - I'm just like another
shameless scientist, it's like gimme money, I need to play. - This was all a long plug
for a planet simulator. Hey, I will be the
first in line to donate. - My Rick garage has run
out of room, you know? - [Lex] Yeah. - No- - And this is a planet simulator. You mean, like a different kind of planet or different sets of
environments and pressures. - Exactly. If we could basically recreate
the selection before biology, as we know it, that gives
rise to a different biology, we should be able to put the constraints on where to look in the universe. So here's the thing, here's my dream. My dream is that by
creating life in the lab, based upon constraints we understand. So let's like go for Venus-type life or earth-type life or something again. Do Earth 2.0. Screw it, let's do Earth 2.0. An Earth 2.0 has a
different genetic alphabet. Fine. That's fine. Different protein alphabet. Fine. Have cells and the
evolution, all that stuff. We will then be able to say, okay, life is a more general phenomenon. Selection is more general than what we think are the chemical constraints on life. And we can point the James Webb and other telescopes at the other planets that are in that zone we are most likely to combinatorially overlap with, right? So there are chemistry- - [Lex] You're looking for some overlap. - And then, we can then
basically shine light on them, literally, and might look
at light coming back, and apply advanced assembly theory to general theory of
language, that we'll get, and say, ha, in that signal, it looks random but there's a copy number. Oh, this random set of things that looks like a true and random number generator has structure as not Kolmogorov, AIT-type structure, but
evolutionary structure, given by assembly theory. But I would say that, 'cause I'm a shameless assembly theorist. - Yeah. It just feels like the cone, I might be misusing the word cone here, but the width of the cone
is growing really fast to where eventually all the cones overlap, even in a very, very, very
large combinatorial space. But, then again, if
you're saying the universe is also growing very quickly
in terms of possibilities. - I hope that as we build abstractions- I mean, one idea is that
as we go to intelligence, intelligence allows us to
look at the regularities around us in the universe, and that gives us some common grounding to discuss with aliens. And you might be right. That we will overlap there, even though we have completely
different chemistry, literally completely different chemistry, that we will be able to pass information to one another. But it's not a given. And, you know, I have to kind of try and divorce hope
and emotion, you know, away from what I can logically justify. - But it's just hard to
intuit a world, a universe, where there's nearly
infinite complexity objects and they somehow can't detect each other. - But the universe is expanding. But the nice thing is I would say- You see, I think Carl Sagan did the wrong, well, not the wrong thing, he turned the Voyager probe back at the pale blue dot and said, "Look at how big the universe is." I would've done it the
other way around and said, "Look at the Voyager probe that
came from the planet earth, that came from LUCA, look at how big earth is." - [Lex] They produced that. - It produced that. And I think it's like, completely amazing. And then that should allow people on earth to think about, well,
probably we should try and get causal chains off earth onto Mars, onto the moon, wherever. Whether it's human life or
Martian life that we create, it doesn't matter. But I think this
combinatorial space tells us something very important
about the universe. And that I realized in assembly theory, that the universe is too
big to contain itself. And now coming back, and I want to kind of
change your mind about time, 'cause I'm guessing that your
time is just a coordinate. - Yeah.
- So I'm gonna change- - [Lex] I'm guessing you
are one of those, yeah. - One of those. I'm gonna change your mind in real time, or at least attempt. - Oh, in real time. There you go. I already got the tattoo, so this is gonna be embarrassing
if you change my mind. - [Lee] But you can just add an arrow of time onto it, right? - Yeah, true. Just modify it. - Or erase it a bit.
- Yeah. - And the argument that I think that is really most interesting is like, people say the
initial conditions specify the future of the universe. Okay, fine, let's say that's
the case for a moment. Now let's go back to Newtonian mechanics. Now, the uncertainty principle in Newtonian mechanics is this. If I give you the coordinates of an object moving in space and the coordinates of another object, and they collide in space, and you know those initial conditions, you should know exactly
what's gonna happen. However, you cannot
specify these coordinates to infinite precision. Now, everyone said, "Oh,
this is kind of like, you know, the chaos theory argument." No, no, it's deeper than that. Here's the problem with numbers. This is where Hilbert
and Brouwer fell out. To have the coordinates of this object, a given object as it's colliding, you have to have them
to infinite precision. That's what Hilbert says. He says, "No problem,
infinite precision is fine." Let's just take that for granted. But when the object is finite and it can't store its own coordinates, what do you do? - [Lex] Mm-hmm. - So in principle, if a finite
object cannot be specified to infinite precision, in principle, the initial conditions don't apply. - Well, how do you know it can't store its- - Well, how would you store
an infinitely long number in a finite size? - Well, we're using
infinity very loosely here. - [Lee] No, no, we're using- - Infinite precision. I mean, not loosely, but- - Very precisely.
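A minimal way to formalize the storage point being made here, with assumed symbols rather than anything from the conversation: an object built from $N$ distinguishable subunits, each resolvable into $s$ states, can record at most

$$N \log_2 s \ \text{bits},$$

so it can distinguish only on the order of $s^N$ coordinate values, a finite precision. Specifying a generic real-valued coordinate exactly would require infinitely many bits, which is the mismatch the argument turns on.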
- So you think infinite precision is required? - Well, let's take the object. Let's say the object is a golf ball. A golf ball is a few
centimeters in diameter, we can work out how many
atoms are on the golf ball. And let's say we can store numbers down to atomic dislocations. So we can work out how many atoms there are in the golf ball, and we can store the coordinates in that golf ball down to that number. But beyond that, we can't. Let's make the golf ball smaller. And this is where I think
that we get randomness in quantum mechanics. And some people say you
can't get randomness in quantum mechanics, that it's deterministic. But, aha, this is where we realize that classical mechanics
and quantum mechanics suffer from the same uncertainty principle. And that is the inability to specify in the initial conditions
to a precise enough degree to give you determinism. The universe is intrinsically too big. And that's why time exists. It's nondeterministic. Looking back into the past, you can use logical arguments, because you can say, was it true or false? You wouldn't know. But the fact we are unable
to predict the future with the precision is not
evidence of lack of knowledge, it's evidence the universe
is generating new things. - Okay, first of all, quantum mechanics, you can just say statistically
what's gonna happen when two golf balls hit each other. - Statistically. Sure, I can say statistically
what's gonna happen. But then when they do happen, and you keep nesting it together. I mean, it goes almost back to- Let's think about entropy in the universe. How do we understand
entropy change of a process? We can use the ergodic hypothesis. We can also have the counterfactuals, where we have all the different states. And we can even put that
in the multiverse, right? But both those, they're nonphysical. The multiverse kind of collapses back to the same problem, about the precision. If you accept you don't have to have true and false going
forward into the future, the real numbers are real. They're observables. - I'm trying to see exactly where time being fundamental sneaks in. The golf ball can't contain its own position perfectly precisely, how that leads to time
needing to be fundamental. - Do you believe or do you
accept you have free will? - Yeah, I think at this moment in time, I believe that I have free will. - So then you have to believe
that time is fundamental. - I understand that's the
statement you've made. - Well, no, that we can logically follow. It's because if you don't have free will. So like, if you're in a
universe that has no time, the universe is deterministic. If it's deterministic,
then you have no free will. - I think the space of
how much we don't know is so vast that saying the
universe is deterministic and from that jumping
to there's no free will, it's just too difficult of a leap. - No, it logically follows. No, no, I totally disagree. I mean, it's deep and it's important, all I'm saying, and it's actually different
to what I've said before, is that if you don't require
Platonistic mathematics and accept that nondeterminism
is how the universe looks, and that gives us our creativity in the way the universe
is getting novelty, it's kind of really deeply
important in assembly theory. 'Cause assembly theory starts to actually give you
a mechanism why you go from boring time, which is basically initial
conditions specify everything, to a mismatch in creative time. And I hope we'll do experiments. And I think it's really important to- I would love to do an experiment that proves that time is fundamental and the universe is generating novelty. I don't know all the features
of that experiment yet, but by, you know, having
these conversations openly and getting people to think
about the problems in a new way, better people, more intelligent people with good mathematical
backgrounds can say, oh, hey, I've got an idea. I would love to do an experiment that shows that the universe- I mean, universe is too big for itself going forward in time. And, you know, this is why I really hate the idea of the Boltzmann brain. The Boltzmann brain makes
me super, kind of like, you know, everyone's having a free lunch. It's like, well, let's break
all the laws of physics. So a Boltzmann brain is this idea that in a long enough universe the brain will just emerge
in the universe as conscious. And that neglects the causal chain of evolution required
to produce that brain. And this is where the
computational argument really falls down, because the computationalist can say, I can calculate probability
of a Boltzmann brain, and they'll give you a probability. But I can calculate the probability
of a Boltzmann brain. Zero. - Just because the space
of possibility is so large? - Yeah, it's like when we
start fooling ourselves with numbers that we
can't actually measure and we can't ever conceive of, I think it doesn't give
us a good explanation. I want to explain why
life is in the universe. I think life is actually
the novelty miner. I mean, life basically mines novelty almost from the future and
actualizes it in the present. - Okay. Life is a novelty miner from the future that is actualized in the present. - Yep. I think so.
- Novelty miner. First of all, novelty. What's the origin of novelty? When you go from boring time to creative time, where's that? Is it as simple as randomness, like you're referring to? - I'm really struggling with randomness because I had a really good argument with Joscha Bach about randomness. And he says that randomness
doesn't give you free will. That's insane, 'cause
you'd just be random. And I think he's right at that level, but I don't think he is
right on another level. And it's not about randomness, it's about constrained. I'm gonna sound like- I'm making this up as I go along. So making this up. Constrained opportunity. So the novelty. What is novelty? You know, this is why I
think it's a funny thing. Well, if you ever wanna discuss AI. Why I think everyone's kind of gone AI mad is that they're misunderstanding novelty. But let's think about novelty. Yes, what is novelty? So I think novelty is a
genuinely new configuration that is not predicted by the past, right? And that you discover
in the present, right? And that is truly different, right? Some people say that
novelty doesn't exist, it's always with precedent. I want to do experiments that show that that is not the case. And it goes back to a question you asked me a few moments ago, which is where is the factory? - Yeah.
- Right? Because I think the same mechanism that gives us a factory gives us novelty. And I think that is why I'm
so deeply hung up on time. I mean, of course, I'm wrong, but how wrong? And I think that life opens
up that combinatorial space in a way that our current laws of physics, as contrived in a deterministic
initial condition universe, even with the get-out-of-the-multiverse
David Deutsch-style, which I love, by the way, but I don't think is correct. But it's really beautiful. - [Lex] The multiverse? - David Deutsch's conception of the multiverse is kind of taken as given. But I think that the problem
with wave-particle duality in quantum mechanics is
not about the multiverse, it's about understanding
how determined the past is. Well, I don't just think that. Actually, this is a
discussion I was having with Sara about that, right? Where she was like, "Oh, I think-" We've been debating this
for a long time now, about how do we reconcile novelty, determinism, indeterminism? - So, okay, just to clarify. You both, you and Sara think the universe is not deterministic? - I won't speak for Sara, but I roughly can. I think the universe is
deterministic looking back in the past, but undetermined
going forward in the future. So I'm kinda having my
cake and eating it here. This is because I fundamentally don't understand randomness, right? As Joscha told me or other people told me. But if I adopt a new view now. The new view is the universe
is just nondeterministic. But I'd like to refine that and say the universe appears deterministic
going back in the past, but it's undetermined going
forward in the future. So how can we have a universe that has deterministic-looking rules but is nondetermined going into the future? It's the breakdown in precision
in the initial conditions. And we have to just stop
using initial conditions and start looking at trajectories and how the combinatorial space behaves in the expanding universe
in time and space. And assembly theory helps us quantify the transition to biology, and biology appears to be novelty mining, 'cause it's making crazy stuff. You know, that we are
unique to earth, right? That there are objects
on earth that are unique to earth that will not
be found anywhere else, 'cause you can do the combinatorial math. - What was that statement
you made about life is novelty mining from the future? - [Lee] Yeah. - What's the little element of
time that you're introducing? - So what I'm kinda meaning is 'cause the future is
bigger than the present, in a deterministic universe, how do the states go from one to another? I mean, there's a mismatch, right? - [Lex] Mm-hmm. Yeah. - So that must mean that
you have a little bit of indeterminism,
whether that's randomness or something else, I don't understand. I want to do experiments
to formulate a theory to refine that as we go forward that might help us explain that. And I think that's why I am so determined to try and crack the
nonlife to life transition looking at networks and molecules and that might help us
think about the mechanism. But certainly, the future
is bigger than the past, in my conception of the universe and some conception of the universe. - By the way, that's not obvious, right? The future being bigger than the past. Well, that's one statement. And the statement that the
universe is not big enough to contain the future
is another statement. - [Lee] Yeah. - That one is a big one. That one's a really big one. - I think so. But I think it's entirely- Because look, we have the second law. And right now, I mean, we
don't need the second law if the future's bigger than the past. It follows naturally.
- Right. - So why are we retrofitting
all these sticking plasters onto our reality to hold
onto a timeless universe? - Yeah, but that's because
it's kind of difficult to imagine the universe that
can't contain the future. - [Lee] But isn't that really exciting? - It's very exciting,
but it's (laughs) hard. I mean, we're humans on earth, and we have a very kinda
four-dimensional conception of the world, of 3D plus time. It's just hard to intuit a world where- What does it even mean? A universe that can't contain the future? - Yeah, it's kinda crazy but obvious. - It's weird. I mean, I suppose it sounds
obvious, yeah, if it's true. - So the reason why
assembly theory turned me onto that was that- Let's just start in the present and look at all the complex molecules and go backwards in time and understand how evolutionary processes gave rise to them. It's not at all obvious that Taxol, which is one of the most complex natural
products produced by biology, was gonna be invented by biology. It's an accident. You know, Taxol is unique to earth. There's no Taxol
elsewhere in the universe. And Taxol was not decided
by the initial conditions. It was decided by this
interplay between the- So the past simply is
embedded in the present, it gives some features. But why the past doesn't map
to the future, one-to-one, is because the universe is
too big to contain itself. That gives space for creativity, novelty, and some things
which are unpredictable. - Okay, so given that you're disrespecting the power of the initial conditions, let me ask you about- So how do you explain
that cellular automata are able to produce such
incredible complexity given just basic rules and
basic initial conditions? - I think that this falls
into the Brouwer-Hilbert trap. So how do you get a cellular automaton to produce complexity? You have a computer, you generate a display, and you map the change of that in time. There are some CAs that repeat, like functions. It's fascinating to me that for pi, there is a formula where you can go to the millionth decimal place of pi and read out the number
without having to go there. But there are some numbers
where you can't do that. You have to just crank through. Whether it's Wolframian
computational irreducibility or some other thing. Well, it doesn't matter. But these CAs, that complexity, is that just complexity, or is it a number that you're basically mining in time? You know, is that just a display screen for that number, that function? - Well, can't you say the same thing about the complexity on earth, then? - No, because the complexity on earth has a copy number and an assembly
index associated with it. That CA is just a number running. - You don't think it has a copy number? Wait, wait a minute. - Well, it does where we're looking at humans producing different rules, but then it's nested on selection. So those CAs are produced by selection. I mean, the CA is such a fascinating pseudo-complexity generator. What I would love to do is quantify the degree of surprise in a
CA and run it long enough. But what I guess that means is we have to instantiate, we have to have a number
of these experiments where we're generating different rules and running them over time-steps. Ah, I got it. CAs are mining novelty, you know, in the future by iteration, right? And you're like, oh,
that's great, that's great. You didn't predict it. Some rules you can predict
what's gonna happen, other rules you can't. So for me, if anything, CAs are evidence that the universe is too
big to contain itself. 'Cause otherwise, you'd
know what the rules are gonna do forever more. - Right. I guess you were saying
that the physicist saying that all you need is
the initial conditions and the rules of physics is somehow missing the bigger picture. - Yeah.
- And, you know, if you look at CAs, all you need is the initial condition and the rules, and then run the thing. - You need three things. You need the initial conditions, you need the rules, and you need time
iteration to mine it out. Without the coordinate,
you can't get it out. - Sure. And that is fundamental. - And you can't predict it
from the initial conditions. If you could, then it'd be fine. - [Lex] And that time is- - A resource.
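A rough illustration of the three ingredients being described, as a sketch with assumed parameters (an elementary cellular automaton, Rule 30 chosen arbitrarily): the initial condition, the rule, and the time iteration are all explicit inputs, and in general the state at step T is only reached by cranking through the steps.

```python
# Minimal elementary CA sketch: initial condition + rule + time iteration.
# Rule 30 and the grid size are illustrative assumptions, not anything from the conversation.

def step(cells, rule=30):
    """Apply one time-step of an elementary CA to a row of 0/1 cells (periodic boundary)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # value 0..7
        out.append((rule >> neighbourhood) & 1)              # look up that bit of the rule
    return out

def run(initial, rule, steps):
    """The only general way to get the state at time `steps` is to iterate through them."""
    cells = list(initial)
    for _ in range(steps):
        cells = step(cells, rule)
    return cells

# Usage: a single live cell, Rule 30, 20 time-steps.
row = [0] * 41
row[20] = 1
print(run(row, rule=30, steps=20))
```

For some rules the outcome can be short-cut with a formula; for others, as discussed above, there is no known way around paying for the iterations.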
- Like the foundation of the history, the memory of each of the things it created. It has to have that memory of all the things that led up to it. - Yeah, you have to have the resource. - [Lex] Yeah. - 'Cause time is a fundamental resource. Yeah, I think I had a major
epiphany about randomness. But I keep doing that every two days, and then it goes away again. It's random. - You're a time fundamentalist. - You should be as well. If you believe in free will, the only conclusion is
time is fundamental. Otherwise, you cannot have free will. It logically follows. - Well, the foundation
of my belief in free will is just observation-driven. I think if you use logic, it's like logically it seems like the universe is deterministic. - Looking backward in time. And that's correct, the universe is. - And then everything
else is a kinda a leap. It requires a leap. - This is why I think machine learning is gonna provide a chunk of that, right? To help us explain this. So the way I'd say, if you take- - That's interesting. Why? - 'Cause the AI doomers
are driving me mad. And we don't have any intelligence yet. I call AI 'autonomous informatics'
just to make people grumpy. - Yeah. You're saying we're
quite far away from AGI. - I think that we have no
conception of intelligence and I think that we don't understand how the human brain does what it does. I think that neuroscience
is making great advances, but I think that we
have no idea about AGI. So I am a technological,
I guess, optimist. I believe we should do everything. The whole regulation of AI is nonsensical. I mean, why would you regulate Excel other than the fact that
Clippy should come back? And I love Excel '97 'cause we can play, you know, we can do the flight simulator. - I'm sorry. In Excel? - Yeah, have you not played
the flight simulator in- - In Excel '97?
- Yeah. - What does that look like? - It's like a wireframe. Very, very basic. But basically, I think it's
X, zero, Y, zero, Shift, and it opens up and you can
play the fight simulator. - [Lex] Oh, wow. Wait, wait. Is he using Excel? - [Lee] Excel. Excel '97. - [Lex] Okay. - I resurrected it the other
day and saw Clippy again for the first time in a long time. - Well, Clippy is definitely coming back. But you're saying we don't
have a great understanding of what is intelligence? What is the intelligence
underpinning the human mind? - I'm very frustrated by the way that we're AI dooming right now. And people were bestowing
some kind of magic. Now, let's go back a bit. So you said about AGI. Are we far away from AGI? Yes. I do not think we are gonna
get to AGI anytime soon. I've seen no evidence of it. And the AI doom scenario is
nonsensical in the extreme. - [Lex] Yeah. - And the reason why I
think it's nonsensical. And I don't think there
isn't things we should do and be very worried about, right? I mean, there are things we
need to worry about right now, what AI are doing. Whether it's fake data, fake users, right? I want authentic people or authentic data. I don't want everything to be faked, and I think it's a really big problem. And I absolutely want to go on the record to say I really worry about that. What I'm not worried about is
that some fictitious entity is going to turn us all to paperclips or detonate nuclear bombs. I don't know. Maybe. I don't know. Anything you can't think of. Why is this? And I'll take a very simple
series of logical arguments. The AI doomers, they do not
have the correct episdemiology. They do not understand what knowledge is. And until we understand what knowledge is, they're not gonna get anywhere because they're applying things falsely. So let me give you a very simple argument. People talk about the
probability, P-doom, of AI. We can work out the probability of a asteroid hitting the planet. Why? 'Cause it's happened before. We know the mechanism. We know that there's a gravity well, or that, you know, space-time is bent and stuff falls in. We don't know the probability of AGI because we have no mechanism. So let me give you another one. Which is like, I'm
really worried about AG. What's AG? AG is antigravity. One day we could wake up
and antigravity, you know, is discovered, we're all gonna die. The atmosphere's gonna float away. We are gonna float away. We're all doomed. What is the probability of AG? We don't know because
there's no mechanism for AG. Do we worry about it? No. And I don't understand the current reason for certain people in certain areas to be generating this nonsense. I think they're not doing it maliciously. I think we're observing the
emergence of new religions, how religions come. Because religions are
about kind of some control. So you've got the optimist
saying AI's gonna cure us all and AI's gonna kill us all. What's the reality? Well, we don't have AI, we have really powerful
machine learning tools and they will allow us
to do interesting things. And we need to be careful
about how we use those tools in terms of manipulating human beings and faking stuff, right?
- Right. Well, let me try to sort of (indistinct) in the AI doomers argument. And actually I don't know. Are AI doomers in the
Yudkowsky camp saying it's definitely gonna kill us? 'Cause there's a spectrum. - [Lee] 95% I think is that limit, yeah. - And 95% plus, that's the- - No, no, not plus. I dunno, I was seeing on
Twitter today various things. But I think Yudkowsky's is at 95%. - But to belong to the AI doomer club, is there a threshold? I don't know what the membership- - Maybe.
- And what are the fees? - Well, I think Scott Aaronson, like, I was quite surprised. I saw this online, so it could be wrong, so sorry if it's wrong, says 2%. But the thing is, if someone said there's a 2% chance you're
gonna die going into the lift, would you go into the lift? - In the elevator
- Yeah, elevator. - For the American
English speaking audience. Well, no, not for the elevator. - So I would say anyone higher than 2%. I mean, I think there's
a 0% chance of AGI doom. Zero. - Just to push back on the argument, on the estimate of zero on AGI doom. We can see on earth that
there's increasing levels of intelligence of organisms. We can see what humans
with extra intelligence were able to do to the other species. So that is a lot of samples of data on what a delta in intelligence gives you: when you have an increase in intelligence, how you're able to dominate
a species on earth. And so the idea there is that if you have a being that's 10x smarter than humans, we are not gonna be able to predict what that being is going to be able to do, especially if it has the
power to hurt humans. You can imagine a lot of trajectories in which the more benefit AI systems give the more control we
give to those AI systems over our power grid,
over our nuclear weapons or weapons of any sort. And then it's hard to know what an ultra-intelligent system would be able to do in that case. You don't find that convincing? - I think I would fail that argument 100%. Here's a number of reasons to fail it on. First of all, we dunno where
the intention comes from. The problem is that people think. You know, because I've been watching all the hucksters online
with the prompt engineering and all this stuff. When I talk to a typical
AI computer scientist, they keep talking about the AI as having some kind of
decision-making ability. That is a category error. The decision-making ability
comes from human beings. We have no understanding of
how humans make decision. We've just been discussing free will for the last half an hour, right? We don't even know what that is. So the intention. I totally agree with you, people who intend to do bad
things can do bad things and we should not let that risk go. That's totally here and now. I do not want that to happen, and I'm happy to be regulated to make sure that systems I generate, whether they're like computer systems or- You know, I'm working on a new
project called ChemMachina. - (chuckles) Nice. Well done.
- Yeah, yeah. Which is basically a- - (laughing) For people who
don't understand the point, the "Ex Machina" is a great film about, I guess, AGI embodied. And chem is the chemistry version of that. - And I only know one way
to embody intelligence, that's in chemistry in human brains. So, category error number one is they have agency. Category error number two is saying that assuming that anything we make is
gonna be more intelligent. Now you didn't say superintelligent, I'll put the words into our mouths here. Here's superintelligent. I think that there is no reason to expect that we are gonna make systems that are more intelligent, more capable. You know, when people play chess computers they don't expect to win now, right? The chess computer is very good at chess, that doesn't mean it's superintelligent. So I think that superintelligence- Well, I mean, I think even Nick Bostrom is pulling back on this now. So I see this a lot. When did I see it first happen? Eric Drexler, nanotechnology,
atomically precise machines. He came up with a world where we had these atom cogs everywhere, we're gonna make
self-replicating nanobots. Not possible. Why? Because there's no resources to build these self-replicating nanobots. You can't get the precision. It doesn't work. It was a major category error in taking engineering principles down to the molecular level. The only functioning
molecular technology we know- No, sorry. The only functioning
nanomolecular technology we know is produced by evolution. There. So now let's go forward to AGI. What is AGI? We dunno. It's super, it can do this, or humans can't think. I would argue the only AGIs
that exist in the universe are produced by evolution. And sure, we may be able to
make our working memory better, we might be able to do more things. The human brain is the
most compact computing unit in the universe. It uses 20 watts. It uses a really limited volume. It's not like a ChatGPT cluster which has to have thousands of watts, a model that's generated, and it has to be
corrected by human beings. You are autonomous and
embodied intelligence. So I think that there are so many levels that we're missing out. We've just kinda went,
oh, we've discovered fire, oh gosh, the planet's just
gonna burn one day, randomly. I mean, I just don't understand that leap. There are bigger problems
we need to worry about. So what is the motivation? Why are these people, let's assume they're earnest, why do they have this conviction? Well, I think it's kind
of they're making leaps, they're trapped in a virtual
reality that isn't reality. - Well, I mean I can continue
a set of arguments here. But also, it is true that ideologies that fearmonger are dangerous. Because you can then use it to control, to regulate in a way that halts progress, to control people, and to cancel people, all that kinda stuff. So you have to be careful, because the reason ultimately wins, right? But there is a lotta concerns
with superintelligent systems, very capable systems. I think when you hear the
word superintelligent, you're hearing like,
it's smarter than humans in every way that humans are smart. But the paperclip manufacturing system doesn't need to be smart in every way, it just needs to be
smart in specific ways. And the more capable
the AI systems become, the more you could see us
giving them control over, like I said, our power grid, a lot of aspects of human life. And then that means they'll be able to do more and more damage when there's unintended
consequences that come to life. - I think that that's right, the unintended consequences
we have to think about. And that I fully agree with. But let's go back a bit. Sentient. I mean, again, I'm far
away from my comfort zone in all this stuff, but,
hey, let's talk about it 'cause I'll give myself a qualification. - Yeah, we're both qualified
and sentience, I think, as much as anyone else. - I think the paperclip scenario
is just such a poor one. Because let's think about
how that would happen. And also, let's think about we are being so unrealistic about how
much of the earth's surface we have commandeered. You know, for paperclip
manufacturing to really happen, I mean, do the math. It's not gonna happen. There's not enough energy. There's not enough resource. Where is it all gonna come from? I think that what happens
in evolution is really- Why has a killer virus not
killed all life on earth? Well, what happens is, sure, super killer viruses that kill the ribosome have emerged. But you know what happens? They nuke a small space
because they can't propagate, they all die. So there's this interplay
between evolution and propagation, right? And death. And so- - In evolution? You don't think it's possible
to engineer, for example, sorry to interrupt, but
like a perfect virus- - No
- That's deadly enough. - No, it's nonsensical. - Okay.
- I think that just, again, it wouldn't work,
'cause if it was too deadly, it would just kill the
radius and not replicate it. - Yeah. I mean, but you don't think
it's possible to get a- (Lee stammering) Not kill all of life on
earth, but kill all humans? There's not many of us. There's only like, 8 billion. There's so much more ants. - I mean, I don't-
- So many more ants. And they're pretty smart. - The nice thing about where we are. I would love for the AI crowd to take a leaf out of the book of the biowarfare, chemical warfare crowd. I mean, not love, 'cause actually people have been killed with chemical weapons in the
First and Second World Wars, and bioweapons have been made, and, you know, we can argue about COVID-19 and all this stuff. Let's not go there just now. But I think there is a consensus that certain things are bad and we shouldn't do them, right? And sure, it would be
possible for a bad actor to engineer something bad, but we would see it coming and we would be able to
do something about it. Now, I guess what I'm trying to say is when people talk about doom and when you ask them for the mechanism, they just say, you know,
they just make something up. I mean, in this case, I'm with Yann LeCun. I think he put out a very good point about trying to regulate jet engines before we've even invented them. And I think that's what I'm saying. I just don't understand why these guys are going round literally making stuff up about us all dying, when basically we need to
actually really focus on. Now, let's say that
some actors are earnest. Right, let's say Yudkowsky
is being earnest, right? And he really cares. But he loves it, he goes, he-he-he, and then you are all gonna die. It's like, you know, why don't
we try and do the same thing and say, you could do this, and then you're all gonna
be happy forever after. - Yeah.
- You know? - Well, I think there's
several things to say there. One, I think there is a role in society for people that say we're all gonna die, 'cause I think it filters
through as a message, as a viral message that gives us the proper amount of concern. - Okay. All right. - It's not 95%. But when you say 95%, and it filters through society, it'll give an average of
like .03%, an average. So it's nice to have people that are like, "we're all gonna die," then we'll have a proper concern. Like for example, I do believe
we're not properly concerned about the threat of
nuclear weapons currently. It just seems like people have forgotten that that's a thing. And, you know, there's a war in Ukraine with a nuclear power involved. There's nuclear powers
throughout the world. And it just feels like we're on the brink of a potential world war to a percentage that I don't think people
are properly calibrating, like, in their head. We're all thinking it's a Twitter battle, as opposed to like actual threat. So like, it's nice to have
that kind of level of concern. But to me, like, when I hear AI doomers, what I'm imagining is, with
unintended consequences, a potential situation where, let's say, 5% of the world suffers deeply because of a mistake made
of unintended consequences. I don't wanna imagine the entirety of human civilization dying, but there could be a lot of suffering if this is done poorly. - I understand that, and I kind of, I guess, I mean, I'm involved
in the whole hype cycle. So what's happening right now is there seems to be- So let's say having some people
saying AI doom is a worry. Fine, let's give them that. But what seems to be happening is there seems to be people
who don't think AI is doom and they're trying to use
that to control regulation and to push people to regulate, which stops humans generating knowledge. And I am an advocate for generating as much knowledge as possible. When it comes to nuclear weapons, I grew up in the '70s and '80s where lot of adults really
had existential threat. Almost as bad as now with AI doom, they were really worried, right? There were some great, well, not great, there were some horrific documentaries. I think there's one called "Threads" that was generated in the UK, which was like, it was terrible. It was like so scary. And I think that the correct thing to do is obviously get rid of nuclear weapons. But let's think about
unintended consequences. We've got rid of- (Lee muttering) We got rid of all the sulfur particles in the atmosphere, right? All the soot. And what's happened in
the last couple of years is global warming has accelerated 'cause we've cleaned up
the atmosphere too much. So-
- Sure. I mean, the same thing if you
get rid of nuclear weapons, you get-
- Exactly, that's my point. So what we could do is if we actually started
to put the AI in charge, which I really like, an AI to be in charge of all world politics. And this just sounds ridiculous
for a second, hang on. But if we could all agree on- - [Lex] The AI doomers just woke up. - Yeah, yeah, yeah, yeah.
- On that statement. - But I really don't like politicians who are basically just
looking at local sampling. But if you could say globally, look, here's some game theory here. What is the minimum
number of nuclear weapons we need to distribute around the world to everybody to basically
reduce war to zero? - I mean, just the thought
experiment of the United States, and China, and Russia, and major
nuclear powers get together and say, all right, we're gonna
distribute nuclear weapons to every single nation on earth. - [Lee] Yeah. (chuckles) Oh boy. I mean, that has a probably
greater than 50% chance of eliminating major military conflict. - Yeah.
- Yeah, but it's not 100%. - But I don't think anyone will use them. And look, what you've
gotta try and do is, like, to qualify for these nuclear weapons- This is a great idea. The game theorists could do this, right? The question is this. I really buy your question, we have too many nukes. Just from a feeling point of view that we've got too many of them. So let's reduce the number,
but not get rid of them, because we'll have too
much conventional warfare. So then, what is the minimum
number of nuclear weapons we can just shoot around to remove- Humans hurting each other is
something we should stop doing. It's not outwith our
conceptual capability. But right now, what about certain nations that are being exploited
for their natural resources in the future for a short-term gain because we don't wanna generate knowledge? And so, if everybody had
an equal doomsday switch, I predict the quality of
life for an average human will go up faster. I am an optimist, and
I believe that humanity is gonna get better and better and better, that we're gonna eliminate more problems. But I think, yeah, let's- - But the probability of a bad actor, of one of the nations
setting off a nuclear weapon. I mean, you have to integrate
that into the atmosphere. - But we just give you the
nukes by population, right? What we do is we- (laughing) I can't believe this. But anyway, let's just go there. So if a small nation with
a couple of nukes uses one because they're a bit bored or annoyed, the likelihood that they
are gonna be pummeled out of existence immediately is 100%. And yet, they've only
nuked one other city. I know this is crazy, and I apologize for- - Well, no, no. Just to be clear, we're just having a thought experiment. That's interesting, but, you know, there's terrorist organizations that would take that trade. We have to ask ourselves a question of which percentage of humans would be suicide bombers, essentially? Where they would sacrifice their own life because they hate another group of people. I believe it's a very small fraction, but is it large enough if
you give out nuclear weapons? - I can predict a future where
we take all nuclear material and we burn it for energy, right? 'Cause we're getting there. And the other thing you could do is say, look, there's a gap. So if we get all the countries to sign up to the virtual nuclear
agreement where we all exist, we have a simulation, where we can nuke each other in the simulation and the economic consequences
are catastrophic. - Sure. In the simulation. I love it. It's not gonna kill all humans, it's just going to have
economic consequences. - [Lee] Yeah. I don't know, I just made it up. It seems like a cool idea.
- No, it's interesting. I mean, but it's interesting whether that would have as much
power on human psychology as actual physical nuclear exposure. - I think so.
- It's possible. But people don't take economic
consequences as seriously, I think, as actual nuclear weapons. - I think they do in Argentina,
and they do in Somalia, and they do in a lot
of these places where- No, I think this is a great idea. I'm a strong advocate now for- So what've we come up with? Burning all the nuclear
material to have energy. And before we do that, 'cause MAD is good, mutually assured destruction
is very powerful, let's take it into the metaverse, and then get people to
kind of subscribe to that. And if they actually nuke each other, even for fun in the metaverse, there are dire consequences. - Yeah. Yeah. So it's like a video game. We all have to join this
metaverse video game, and then there's dire
economic consequences. And it's all run by AI, as you mentioned, so the AI doomers are really
terrified at this point. - No, they're happy, they have a job for another 20 years, right? - Oh, okay, fearmongering. - Yeah, yeah, yeah. I'm a believer in equal employment. - You've mentioned that,
what did you call it? ChemMachina?
- Yeah. - Yeah. So you've mentioned that a chemical brain is something you're interested in creating and that's a way to get conscious AI soon. Can you explain what a chemical brain is? - I wanna understand the
mechanism of intelligence as it's gone through evolution, right? 'Cause the way that intelligence
was produced by evolution appears to be the following. Origin of life. Multicellularity. Locomotion. Senses. Once you can start to see
things coming towards you and you can remember the past and interrogate the present and imagine the future, you can do something amazing, right? And I think only in recent years did humans become Turing complete, right? - [Lex] Yeah. - And so, that Turing completeness kinda gave us another kick up. But our ability to
process that information is produced in a wet brain. And I think that we do not have the correct hardware architectures to have the domain flexibility and the ability to integrate information. I think intelligence also comes at a massive compromise of data. Right now, we're obsessing about
getting more and more data, more and more processing, more and more tricks to get dopamine hits. So when we look back on this, going, oh yeah, that was really cool, 'cause when I asked ChatGPT, it made me feel really happy. I got a hit from it, but actually it just exposed
how little intelligence I use in every moment,
because I'm easily fooled. So what I would like to do is to say, well, hey, hang on, what
is it about the brain? So the brain has this
incredible connectivity and it has the ability to, you know, as I said earlier about
my nephew, you know, I went from Bill to Billy and
he went, "oh right, Leroy." Like, how did he make that leap? That he was able to basically,
without any training. I extended his name, that he doesn't like. He wants to be called Bill. He went back and said, "You
like to be called Lee, I'm gonna call you Leroy." So human beings have a brilliant ability- Or, intelligent beings appear
to have a brilliant ability to integrate across all
domains all at once, and to synthesize something which allows us to generate knowledge. And become Turing complete on our own, although AIs are built on Turing-complete things, their thinking is not Turing complete in that they are not able to
build universal explanations. And that lack of universal explanation means that they're just inductivist. Inductivism doesn't get you anywhere. It's just basically a party trick. I think it's in the "Fabric of
Reality" from David Deutsch, where basically, you know, the farmer is feeding
the chicken every day and the chicken's getting fat and happy. And the chicken's like, "I'm really happy every time the farmer
comes in and feeds me." And then, one day, the farmer comes in and instead of feeding the chicken, just wrings its neck, you know? And had the chicken had an
alternative understanding of why the farmer was feeding it. - It's interesting though, because we don't know what's special about the human mind
that's able to come up with these kinda generalities, this universal theories of things. And to come up with novelty. I can imagine, 'cause you
gave an example, you know, about William and Leroy. I feel like an example like that we'll be able to see in future versions
of large language models. We'll be really, really,
really impressed by the humor, the insights, all of it. Because it's fundamentally trained on all the incredible humor and insights that's available out there
on the internet, right? I think we'll be impressed. - [Lee] Oh, I'm impressed. - Right, increasingly so. - But we're mining the past.
- Yes. - And what the human brain
appears to be able to do is mine the future.
- Yes. It's a novelty. It is interesting whether
these large language models will ever be able to come up with something truly novel. - I can show on the
back of a piece of paper why that's impossible. And it's like, the problem is that, and again, these are domain experts kind of bullshitting each other. The term generative.
- Yes. - Right? Average person thinks,
oh, it's generative. No, no, no. If I take the numbers
between zero and 1000 and I train a model to
pick out the prime numbers by giving it all the prime numbers between zero and 1000, it doesn't know what a prime number is. Occasionally, if I can cheat a bit, it will start to guess. It will never produce anything outwith the dataset, because you mine the past. The thing that I'm getting to is I think that actually current
machine learning technologies might actually help reveal
why time is fundamental. It's like kind of insane. Because they tell you about
what's happened in the past, but they can never help you
understand what's happening in the future without training examples. Sure, if that thing
happens again, it's like- So let's think about what large
language models are doing. We have all the internet as we
know it, you know, language. But also, they're doing something else. We're having human beings
correcting it all the time. Those models are being corrected. - Steered. - Corrected. Modified. Tweaked. - [Lex] Well yeah, but I mean- - Cheating. - (chuckles) Well, you
could say the training on human data in the first
place is cheating but- - The human is in the
loop. Sorry to interrupt. - Yeah, so human is
definitely in the loop. But it's not just human is in the loop, a very large collection
of humans is in the loop. And that could be- I mean, to me, it's not intuitive that you said prime numbers, that the system can't
generate an algorithm, right? That the algorithm that
can generate prime numbers, or the algorithm that can tell you if a number is prime and so on, and generate algorithms
that generate algorithms that generate algorithms that start to look a lot like human
reasoning, you know? - I think, again, we can show
that on a piece of paper. That, sure. So this is the failure in epistemology. Like, I'm glad I even can say that word, let alone know what it means, right? - You said it multiple times. - I know, it's like three times now. - Without failure. Quit while you're ahead. Just don't say it again,
'cause you did really well. - Yeah, thanks. So what is reasoning? So coming back to the chemical brain, if I could show that in a- 'Cause, I mean, I'm never gonna make an intelligence in ChemMachina, because if we don't have brain cells they don't have glial cells, they don't have neurons. But if I can take a gel
and engineer the gel to have it be a hybrid
hardware for reprogramming, which I think I know how to do, I'll process a lot more information and train models billions of times cheaper and use cross-domain knowledge. And there's certain
techniques I think we can do. But it's still missing the abilities of human beings that had
to become Turing complete. And so, I guess the
question to give back at you is like, how do you tell the difference between trial and error and the
generation of new knowledge? I think the way you can do it is this. Is that you come up with
a theory, an explanation, just inspiration comes from outside, yeah. You test that, and then you see if that's
going towards the truth. And human beings are
very good at doing that in the transition between philosophy, mathematics, physics,
and natural sciences. And I think that we can see that. Where I get confused is
why people misappropriate the term artificial intelligence to say, hey, there's
something else going on here. Because I think you and I both agree, machine learning's really good. It's only gonna get better, we're gonna get happier with the outcome. But why would you ever think the model is thinking, or reasoning? Reasoning requires intention. And the intention, if the
model isn't reasoning, the intentions come from the prompter, and the intentions come from the person who programmed it to do it. - But don't you think you can
prompt it to have intention? Basically start with
the initial conditions and get it going. You know, currently large language models, ChatGPT, only talks to
you when you talk to it. There's no reason why you
can't just start it talking. - But those initial conditions came from someone starting it, and that causal chain in there. So that intention comes from the outside. I think that there is something in that causal chain of
intention that's super important. I don't disagree we're gonna get to AGI, it's a matter of when and what hardware. I think we're not gonna
do it in this hardware, and I think we're
unnecessarily fetishizing really cool outputs and dopamine hits, because obviously that's
what people wanna sell us. - I mean, AGI is a loaded term, but there could be incredibly super impressive intelligence
systems on the way to AGI. So these large language models, if it appears conscious, if it appears superintelligent, who are we to say it's not? - I agree. But superintelligence, I want to be able to have a discussion with it about coming up with fundamental new ideas that generate knowledge. And if the superintelligence
we generate can mine novelty from the future, that I didn't
see in its training set in the past, I would agree that something really interesting is coming on. I'll say that again, if the intelligence system, be it a human being, a chatbot, something else, is able to
produce something truly novel that I could not predict, even having a full audit
trail from the past, then I'll be sold. - Well, so it should be clear
that it can currently produce things that are in a shallow sense novel that are not in the training set. But you're saying truly novel? - I think they are in the training set. I think everything it produces
comes from a training set. There's a difference between
novelty and interpolation. We do not understand where
these leaps come from yet. That is what intelligence
is, I would argue. Those leaps. And some people say, "No, it's actually just what will happen if you
just do cross-domain training and all that stuff." And that may be true, and I may be completely wrong. But right now, the human
mind is able to mine novelty in a way that artificial
intelligence systems cannot. And this is why we still have a job and we're still doing stuff. And, you know, I used
ChatGPT for a few weeks, "Oh, this is cool." Well, what happened is,
it took me too much time to correct it, then it got really good, and now they've done something to it. It's not actually that good. - [Lex] Yeah. Right. - I don't know what's going on. - Censorship, yeah. I mean, that's interesting, but it will push us humans to
characterize novelty better. Like, what is novel? What is truly novel? What's the difference between
novelty and interpolation? - I think that this is
the thing that makes me most excited about these technologies, is they're gonna help
me demonstrate to you that time is fundamental and the unit future is
bigger than the present, which is why human beings are quite good at generating novelty because we have to expand our data set and to cope with unexpected things in our environment. Our environment throws 'em all at us. Again, we have to survive
in that environment. Never say never. I would be very interested in how we can get cross-domain training
cheaply in chemical systems, 'cause I'm a chemist and the
only sentient thing I know of is the human brain, but maybe that's just me
being boring and predictable and not novel. - Yeah, you mentioned
GPT for electron density. So a GPT-like system
for generating molecules that can bind to a host automatically. I mean, that's interesting. That's really interesting. Applying this to the same kind of transformer mechanism. - Yeah. I try and do things that are non-obvious but non-obvious in certain areas. And one of the things I
was always asking about. In chemistry, people like to
represent molecules as graphs. And it's quite difficult. Well, if you're doing AI and chemistry, you really want to basically
have good representations so you can generate new
molecules that are interesting. And I was thinking, well, molecules aren't really graphs and they're not
continuously differentiable. Could I do something that was
continuously differentiable? Well, I was like, "Well,
molecules are actually made up of electron density." So then I got thinking, and said, "Well, okay, could there be a way where we could just
basically take a database of readily solved electron densities for millions of molecules?" So we took the electron density for millions of molecules and just trained the model to learn what electron density is. And so what we built was a system that you literally could give it a- Let's say you could take a protein that has a particular active site or, you know, a cup with a certain hole in it, and you pour noise into it, and with the GPT you turn the
noise into electron density. And then, in this case, it hallucinates, like all of them do. But the hallucinations are good because it means I don't have to train on such a huge data set, 'cause these data sets are very expensive. 'Cause how do you produce it? So go back a step. So you've got all these molecules in this data set, but what you've literally done is a quantum mechanical calculation where you produce electron
densities for each molecule. So you say, oh, this
representation of this molecule has these electron densities
associated with it, so you know what the representation is, and you train the neural
network to know what electron density is. So then you give it an unknown pocket, you pour in noise, and you say, right, produce me electron density. It produces electron density
that doesn't look ridiculous. And what we did in this case is we produce electron density that maximizes the
electrostatic potential, so the stickiness, but minimizes what we
call the steric hindrance, so the overlaps, so it's repulsive. So they, you know, make the perfect fit. And then, we then use a kind
of like a ChatGPT type thing to turn that electron density
into what's called a SMILES. A SMILES string is a way of representing a molecule in letters. And then, we can then- - [Lex] So it just generates them then? - It just generates them. And then the other thing is, then we bung that into the computer and then it just makes it. - Yeah. The computer being the
thing that, you're right, that could generate it.
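A structural sketch of the workflow just described, with stand-in components throughout: the function names, grid sizes, and stubbed model calls below are illustrative assumptions, not the group's actual code. Noise is poured into a voxelised pocket, a generative step turns it into a candidate electron density scored for electrostatic contact versus steric clash, and a decoding step turns the density into a SMILES string a synthesis robot could attempt.

```python
# Sketch of the described density-to-SMILES pipeline; every model call is a stub
# so the shape of the workflow runs end to end with numpy only.
import numpy as np

def pocket_grid(shape=(16, 16, 16)):
    """Stand-in for a voxelised protein pocket (1.0 = occupied by protein)."""
    grid = np.zeros(shape)
    grid[:2, :, :] = 1.0          # crude "walls" of the pocket
    return grid

def pour_noise(shape, seed=0):
    """Start from noise, as in the 'pour noise into the pocket' step."""
    return np.random.default_rng(seed).random(shape)

def generate_density(noise, pocket):
    """Stub for the trained generative model that turns noise into electron density."""
    density = noise * (1.0 - pocket)          # keep density out of the protein itself
    return density / density.max()

def score(density, pocket, clash_weight=10.0):
    """Reward contact near the pocket surface (stickiness), punish steric overlap."""
    surface = np.clip(np.roll(pocket, 1, axis=0) - pocket, 0, 1)  # crude surface proxy
    contact = float((density * surface).sum())
    clash = float((density * pocket).sum())
    return contact - clash_weight * clash

def decode_to_smiles(density):
    """Stub for the second model that turns a density into a SMILES string."""
    return "CCO"  # placeholder output; the real decoder would be learned

pocket = pocket_grid()
density = generate_density(pour_noise(pocket.shape), pocket)
print("score:", round(score(density, pocket), 3), "| candidate:", decode_to_smiles(density))
```

In the real system both the density generator and the SMILES decoder would be trained models, and the electrostatic and steric terms would come from the physics rather than the toy proxies used here.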
- Yeah, the robot that we've got that can basically just do chemistry. - Creating the-
- Yeah. So we've kind of got this
end-to-end drug discovery machine where you can say, oh, you want to bind to this active site? Here you go. I mean, it's a bit leaky
and things kind of break, but it's the proof of principle. - But were the hallucinations,
are those still accurate? - Well, the hallucinations
are really great in this case. 'Cause in the case of
a large language model, hallucinations just make everything up. Well, it doesn't just make everything up, but it gives you an output that you're plausibly comfortable with and it thinks you're
doing probabilistically. The problem on these
electron density models is it's very expensive to
solve a Schrodinger equation going up to many heavy
atoms and large molecules. And so, we wondered if
we trained the system on up to nine heavy atoms, whether it would go beyond nine. And it did. It started to generate
molecules of 12, no problem. They looked pretty good. And I was like, "Well, this
hallucination I will take for free, thank you very much." This is a case where
interpolation extrapolation worked relatively well and we were able to generate
the really good molecules. And then, what we were able to do here is, and this is a really good point of what I was trying to say earlier, that we were able to
generate new molecules from the known dataset that
would bind to the host, so a new guest would bind. Were these truly novel? Not really, because they
were constrained by the host. Were they new to us? Yes. So I do, well, understand. I can concede that
machine learning systems, artificial intelligence systems, can generate new entities. But how novel are they? It remains to be seen. - Yeah, and how novel the
things that humans generate is also difficult to quantify. They seem novel. - That's what a lotta people say. So the way to really
get to genuine novelty, and assembly theory shows you the way, is to have different
causal chains overlap. And this really resonates with the time is fundamental argument. And if you're bringing
together a couple of objects with different initial
conditions coming together, when they interact, the more different their histories, the more novelty they generate
in time going forward. And so, it could be that genuine novelty is basically about mix it up a little. And the human brain is
able to mix it up a little, and all that stimulus
comes from the environment. But all I think I'm saying is the universe is deterministic
going back in time, nondeterministic going forward in time, 'cause the universe is
too big in the future to contain in the present. Therefore, these
collisions of known things generate unknown things
that then become part of your data set and don't appear weird. That's how we give ourselves comfort. The past looks consistent with this initial condition hypothesis, but actually we're generating
more and more novelty. And that's how it works. Simple. - (chuckles) So it's hard to quantify novelty looking backwards. I mean, the present and the future are the novelty generators. - But I like this whole
idea of mining novelty. I think it is gonna reveal the limitations of current AI. It's a bit like the printing press, right? Everyone thought that when the printing press came, that writing books was gonna be terrible, that you had evil spirits and all this. They were just books. - And the same with AI. But I think just the scale you can achieve in terms of impact with AI
systems is pretty nerve-racking. - That's what the big
companies want you to think. - But not like in terms
of destroy all humans, but you can have major consequences in the way social media
has had major consequences, both positive and negative. And so, you have to kinda
think about and worry about it. But yeah, people that
fearmonger, you know- - My pet theory for this, you wanna know? - [Lex] Yeah. - Is I think that a lotta, and I really do respect, you know, a lot of the people out there who are trying to have discourse
about the positive future, so OpenAI guys, Meta guys, and all this. What I wonder, if they're
trying to cover up for the fact that social media's had a pretty disastrous
effect on some level, and they're just trying to say, ah yeah, we should do this. And covering up for the fact that we have got some problems with, you know, teenagers, and Instagram, and Snapchat, and, you know, all that stuff. And maybe they're just overreacting now. It's like, "Oh yeah, sorry, we made the bubonic plague
and gave it to you all, and you're all dying, and, oh yeah. But well, look at this over
here, it is even worse." - Yeah, there's a little bit of that. But there's also not enough celebration of the positive impact that all
these technologies have had. We tend to focus on the negative and tend to forget that- In part, because it's hard to measure. Like, it's very hard to measure the positive impact social
media had on the world. - Yeah, I agree. What I worry about right now is like I do care about the ethics
of what we're doing, and one of the reasons why I'm so open about the things we're
trying to do in the lab, make life, look at intelligence, all this, is so people say, "What are
the consequences of this?" And you say, "What are the
consequences of not doing it?" And I think that what
worries me right now, in the present, is the
lack of authenticated users and authenticated data and- - Human users.
- Yeah, human- - I still think that
there will be AI agents that appear to be conscious, but they would have to
be also authenticated and labeled as such. 'Cause there's too much
value in that, you know, like friendships with AI systems. There are too many meaningful
human experiences to have with AI systems that I just... - But that's like a tool, right? It's a bit like a meditation tool, right? Some people have a meditation tool, it makes them feel better. But I'm not sure you can ascribe sentience and legal rights to a chatbot that makes you feel less lonely. - Sentience, yes. I think legal rights, no. I think it's the same. You can have a really deep
meaningful relationship with a dog and a pet. - Well, the dog's sentient. - [Lex] Yes. - The chatbot right now, using the technology we use,
is not gonna be sentient. - Aah, that's gonna be a
fun continued conversation on Twitter that I look forward to. Since you've had also, from another place, some debates that were inspired by the assembly theory paper, let me ask you about God. Is there any room for notions
of God in assembly theory? Whose God? - Yeah, I don't know what God is. I mean, so God exists in our mind, created by selection. So human beings have
created the concept of God in the same way that
human beings have created the concept of superintelligence. - Sure. It still could mean
that that's a projection from the real world, where like we're just assigning words and concepts to a thing
that is fundamental to the real world. That there is something out there that is a creative force
underlying the universe. - There is a creative
force in the universe, but I don't think it's sentient. So, I do not understand the universe, so who am I to say, you
know, that God doesn't exist? I am an atheist, but I'm not an angry atheist, right? There are some people I
know that are angry atheists and say, you know-
- Yeah, cranky. - Say that religious people are stupid. I don't think that's the case. Yeah, I have faith in some things, 'cause I mean, when I was a kid, you know, I was like, "Well, I need to know what
the charge of an electron is." And I'm like, "I can't measure
the charge of an electron." You know, I just gave up and had faith. Okay, you know, resistors worked. I want to know why the universe is growing in the future and what
humanity's gonna become. And I've seen that the
acquisition of knowledge via the generation of
novelty to produce technology has uniformly made humans' lives better. I would love to continue that tradition. And- - You said that there's
that creative force. Just to think on that point, do you think there's a creative force? Is there like a thing, like a driver that's creating stuff? - So I think that- - And where? What is it? Can you describe like mathematical? - Well, I think selection. I think selection.
- Selection is the force. - Selection is the force in the universe that creates novelty. - So is selection somehow fundamental? - Yeah. I think persistence of objects that could decay into nothing through operations that
maintain that structure. I mean, think about it. It's amazing that things exist at all, that we're just not a
big combinatorial mess. - [Lex] Yes. - So the fact that- - A thing that exists persists in time. - Yeah. Let's think, maybe the universe is actually in the present. Everything that can exist
in the present does exist. - Well, that would mean
it's deterministic, right? - So the universe started super small, the past was deterministic, there wasn't much going on, and it was able to mine,
mine, mine, mine, mine. And so the process, I mean, is somehow generating- The universe is basically- I'm trying to put this into words. - Did you just say there's
no free will, though? - [Lee] No, I didn't say that. - 'Cause if everything that can exist- - Sorry. I said there is free will. I'm saying that free will
occurs at the boundary between the- - The past and the future?
- The past and the future. - Yeah, I got you. But everything that can exist does exist. - So, everything that's
possible to exist at this- So, no. I'm really- - There's a lotta loaded words there. In that there's a time element
loaded into that statement. - I think the universe
is able to do what it can in the present, right?
- Yeah. - And then, I think in the future, there are other things
that could be possible. We can imagine lots of things, but they don't all happen. - Sure. That's where you sneak in
free will, right there. - Yeah. So I guess what I'm saying is what exists is a convolution of the
past with the present and the free will going into the future. - Well, we could still
imagine stuff, right? We could imagine stuff
that will never happen. - And it's an amazing force. The most important thing
that we don't understand is that our imaginations can actually change the future in a tangible way, which is what the initial conditions and physics cannot predict. Like, your imagination
has a causal consequence in the future. - Isn't that weird to you? - Yeah. - How do you- Hmm?
- It does break the laws of physics as we know them right now. - Yeah. So you think the imagination has a causal effect on the future? - Yeah.
- But it does exist in there, in the head. - It does, but-
- And there must be a lot of power in whatever's going on. There could be a lot of power in whatever's going on in there.
the initial conditions. It's simply not possible that can happen. But if we go into a
universe where we accept that there is a finite
ability to represent numbers and you have rounding- Well, not rounding errors. What happens is, your
ability to make decisions, imagine, and do stuff is at that interface between the certain and the uncertain. It's not, as Joscha was saying to me, that randomness goes and you just, you know, randomly do random stuff. It is that you are set free
a little on your trajectory. Free will is about being able to explore on this narrow trajectory that allows you to build. You have a choice about what you build. Or that choice is you interacting with a future in the present. - What to you is most beautiful
about this whole thing? The universe? - The fact it seems to be
very undecided, very open. The fact that every time I
think I'm getting towards an answer to a question, there are so many more questions that make the chase, you know? - Do you hate that it's gonna
be over at some point for you? - Well, for me. I think if you think about it, is it over for Newton now? Newton has had causal
consequences in the future. We discuss him all the time. - His ideas, but not the person. - The person just had a lot of causal power when he was alive. But, oh my God, one of
the things I wanna do is leave as many Easter eggs in the future when I'm gone to go, "Oh, that's cool." - Would you be very upset if somebody made like a good large language model that's fine-tuned to Lee Cronin? - It would be quite boring 'cause I mean- - [Lex] No novelty generation? - If it's a faithful
representation of what I've done in my life, that's great. That's an interesting artifact. But I think the most interesting thing
can't predict everything is why we're excited to come
back, and discuss, and see. So yeah, it'll be
interesting that some things that I've done can be captured, but I'm pretty sure that
my angle on mining novelty from the future will not be captured. - Yeah. Yeah. So that's what life is, it's just some novelty generation, and then you're done. Each one of us just generates a little bit. Or has the capacity to, at least. - Selection produces life and
life affects the universe, and universes with life
in them are materially and physically fundamentally different than universes without life. And that's super interesting. And I have no beginnings of understanding. I think maybe this is like in 1000 years, there'll be a new discipline, and the humans will be like, "Yeah, of course, this is
how it all works, right?" - In retrospect, it will
all be obvious, I think. - I think assembly theory is obvious, that's why a lot of
people got angry, right? They were like, "Oh my God,
this is such nonsense." - Yeah.
- You know, and like, "Oh, you know, actually, it's not quite, but the writing's really bad." - Well, I can't wait to
see where it evolves, Lee. And I'm glad I get to exist
in this universe with you, you're a fascinating human. This is always a pleasure. I hope to talk to you many more times, and I'm a huge fan of just watching you create stuff in this world. And thank you for talking today. - It's a pleasure, as always, Lex. Thanks for having me on. - Thanks for listening to this
conversation with Lee Cronin. To support this podcast, please check out our
sponsors in the description. And now, let me leave you with
some words from Carl Sagan. "We can judge our progress by the courage of our questions and the
depth of our answers, our willingness to embrace what is true rather than what feels good." Thank you for listening, and hope to see you next time.