[MUSIC PLAYING] TICHO TENEV: Hello. Welcome to Talks at Google. My name is Ticho
Tenev, and tonight, I have the privilege to
interview Professor John Lennox about his book, "2084-- Artificial Intelligence and
the Future of Humanity." There will be an
opportunity for the audience to ask live questions
during the last 15 to 20 minutes of the interview. So please, stick around. You will be able to post your
questions in the chat window below the YouTube presentation. Dr. Lennox is emeritus
professor of mathematics at Oxford University, an
emeritus fellow in mathematics and philosophy of science
at Green Templeton College, and an associate
fellow of Said Business School, Oxford University. He's an internationally
renowned speaker and author on the intersection of science,
philosophy, and religion. He has debated Richard
Dawkins on "The God Delusion," the late Christopher
Hitchens on the New Atheism, as well as Peter Singer on the
topic of the existence of God. Dr. Lennox's recent
documentary, titled "Against the Tide," deals
with his public defense of Christianity and features
the actor Kevin Sorbo in conversations with him. It is now available
on DVD and Blu-ray and has its own website. Dr. Lennox's website
is johnlennox.org. Today's interview was organized
by the Christian chapter of the Inter Belief
Network at Google, or IBN. IBN is an employee resource group that provides a voice to googlers of religious
or belief-related communities. IBN's vision seeks to
create a thriving community where googlers are empowered to
safely practice their beliefs, setting the gold standard for
mutual respect, understanding, and allyship. I would like to thank Dr.
Lennox, the Talks at Google team, IBN, and Barbara
Philips from IBN, for making today's interview possible. And by the way,
there will be a link to the book in the [INAUDIBLE]
of the YouTube presentation. So with this, please help me
welcome Dr. Lennox to Google. Welcome, Professor Lennox. JOHN LENNOX: Thank
you very much. I really am honored to speak to
all those googlers out there. It's a great pleasure for me. TICHO TENEV: Great. The title of your book,
it's an interesting title. It reminds us of the famous
novel by George Orwell, "1984." It is a dystopia. But your book is
not quite like it. It's not entirely dystopian
or entirely futuristic. It talks not only
about future things but also about things that
are taking place right now. Some of them are very disturbing
and others very encouraging. By reflecting on AI, you raise
questions about information and intelligence, you weigh
in on science and religion, while fundamentally exploring
what it means to be human. You end on a very
positive note, and I would like to get back to that
a bit later in the interview. But right now, let
me just ask you-- my very first question is really
about some terminology. In the book, you talk about
narrow AI, general AI, or AGI, and transhumanism. These are central topics. Could you explain
what they are and how they relate to each other? JOHN LENNOX: Well, I shall
certainly try to do that. Narrow AI is typically
represented by an integrated system consisting
of a large database and a powerful computer running an algorithm. It's designed to perform a
single task that normally requires human intelligence. For instance, the database could
be a million X-ray pictures of diseased lungs. They are labeled with their
diseases by expert doctors. Then, an X-ray is
taken of my lungs, and the AI system is
used to rapidly look through the database and find
a match for whatever disease I have. The diagnosis will probably
be more accurate than what I would get at my local hospital. The important thing to
realize is the system itself is not intelligent,
although it has a great deal of intelligent human input. It simulates intelligence--
hence, artificial intelligence. And many such systems
are running today with great promise of benefit. Another example of narrow AI
is Google's recent success-- amazing-- using
artificial intelligence to cut down the time needed to
design a new chip from months to six hours. Now, artificial general
intelligence, or AGI, is very different. The goal here is to build a
system that can do everything a human can do but much
faster and better-- hence, the idea of
transhumanism, the desire to move beyond the human to
create a super intelligence. There are two
approaches to this-- firstly, enhancing
existing humans by merging them with technology,
as cyborgs do, or, secondly, starting from scratch and
creating an AGI that does not rely on biology but is
based on silicon, say, so that it has a much
better degree of permanence than offered by
an organic system. Now, of course, opinions
regarding how near we are to achieving
AGI differ greatly. Some highly intelligent
people expect it to arrive in some form
or other possibly soon. Last Saturday's
news feeds told us that Google's UK-based
laboratory, DeepMind, which is very famous, thinks
that powerful reinforcement learning using a principle
of reward maximization will be enough to achieve AGI,
whereas, on the other hand, the London Institute of
Mathematical Sciences says, far from
approaching AGI, AI has not progressed beyond high-dimensional curve fitting. They ask what
mathematical insights could lead to more intelligent
AI, such as causal reasoning, functional modules,
or a representation of the environment. And one very important
thing to bear in mind is that, in humans, intelligence
is linked to consciousness. In the drive for AGI, enhancing humans would keep that consciousness. But many people feel consciousness is not necessary. What is necessary is the
advanced intelligence. TICHO TENEV: Now,
it's interesting that, a lot of times, when
we talk about AGI, we imagine all of these
ethical problems with it. But your book makes it
clear that, even today, with narrow AI, there are already
enough ethical challenges. Could you describe some of them? JOHN LENNOX: Yes,
there are very many. For example, most
of us voluntarily agree to having AI tracking
technology in our smartphones. And that can be very
useful, for example, for ordering books and so on. However, without
our permission, it is harvesting vast
amounts of information that is actually being sold
on in a lucrative commercial operation. That's the subject of
a very important book by Shoshana Zuboff
of Harvard entitled "Surveillance Capitalism." And such an intrusion
of privacy raises obvious ethical problems. Another example has to do with
facial recognition technology. It can be used to
help police capture criminals, a positive good. But also, the same technology
can be used intrusively to exert social control
and discrimination in what we may well call
surveillance totalitarianism. This is Orwell's Big Brother prophecy come true. And it needs to be emphasized
this is happening now, for example, with the
so-called social credit systems that are being
rolled out in some countries. And that development poses
a real threat, an obvious one, to personal and corporate privacy
and, indeed, to human rights. TICHO TENEV: Yes, indeed. What about future scenarios,
something like "The Terminator" movie, perhaps, or in your book,
you call it superintelligence, so when artificial
intelligence surpasses that of human beings. What is your perspective
on the potential dangers of something like that? JOHN LENNOX: Well,
let's first say that there are many serious
and highly intelligent thinkers that hold that some
form of superintelligence, or AGI, will come. Our Astronomer Royal,
Lord Rees, is one of them. And James Lovelock, the
author of the Gaia hypothesis, thinks that, perhaps even
before the end of this century, robots may rule the world
and be in charge of humans. That is, he says, if any
still exist at that time. Because he has no idea
whether the takeover will be peaceful or not. Physicist and
cosmologist Max Tegmark, president of the Future of
Life Institute at MIT said, "In creating AI, we're
birthing a new form of life with unlimited potential
for good or for ill." And the director of
Cambridge University's Center for the Study of
Existential Risk said, "We live in a
world that could become fraught with hazards
from the misuse of AI, and we need to take
ownership of the problem, for the risks are real." The late Stephen
Hawking and Elon Musk expressed fears
that AI could lead to the extinction of humanity. And in his last book, Hawking
wrote, "The real risk with AI isn't malice but competence. A super intelligent AI will be
extremely good at accomplishing its goals. And if those goals
aren't aligned with ours, we are in trouble." But there are other
people who think we're not really near to achieving AGI. John Mariani, who's
an expert on aging, and neuroscientist
Daniella [? Tritsch ?] say, "Many have suggested that
human intelligence may soon be outstripped by
artificial intelligence. But this fear betrays
a deep misunderstanding of what human
intelligence really is." On the other hand, Google's
DeepMind laboratory announced that
reinforcement learning working on the principle
of reward maximization is enough for AGI. We shall see. TICHO TENEV: Yes, indeed. Well, many of us are
fascinated with AI because it seems to offer a
hands-on approach to answering the age-old question of
what it means to be human. I am actually one of
those people myself. I wanted to study AI because I
felt it would help me understand what makes us tick. It also promises a means to
take control of our environment and make us somehow more
than what we actually are. In what ways do
you think pursuing AI helps us understand
what it means to be human? JOHN LENNOX: Well, of course,
any research of this kind, in narrow AI, for
instance, involves us in trying to understand the way
in which human beings function at the moment and
do various tasks. And so, by studying
them in the light of AI, that can help us know
more about ourselves. And that actually
works both ways. Understanding animal
vision systems has given rise to all
kinds of facial recognition technologies. However, the question of human
identity and significance becomes much more prominent
in connection with AGI. TICHO TENEV: Yes. You connect the interest
in AGI and transhumanism to the notion of a very,
very ancient concept. Perhaps the name is not ancient,
but you call it Homo Deus. But the basic idea is the
desire to become godlike. What is the key idea here? What is this all about? JOHN LENNOX: Well, Homo Deus
is simply Latin for God-man, or man who is God. And it's the title of a
best-selling book on AI by Yuval Noah Harari,
who's an Israeli historian. As you say, the idea of humans
becoming gods is very ancient. According to the biblical
Book of Genesis, God the creator placed the first
humans in the Garden of Eden and told them that
they could eat of all the trees except
the tree of the knowledge of good and evil. Now, this tells us that God
gave them the wonderful gift of free will. They were not deterministic
robots doing simply what they were programmed to
do and, therefore, incapable of wonderful things like love. And that, by the
way, just in passing, poses a difficulty
for the development of machines that are emotionally
and morally intelligent. However, so the Genesis
account tells us God's enemy suggested
to the first humans that God was against
them and wanted to keep them from realizing
their full potential. They were told that if they
ate the forbidden fruit, they would become like
God, knowing good and evil. And here's the origin
of the Homo Deus idea. That actually was a lie. The humans ate the
fruit, and they found that the knowledge they
got was not something anyone would wish for. It plunged the world into
rebellion against God, the results of which
we see all around us in our fractured world and
fractured human nature. And all through history,
we have seen the desire to be God manifested--
megalomaniac emperors in ancient Babylon, Rome,
and through history, even to recent times, in
my living memory. Now, what Harari
tells us is this. He says having raised humanity
above the beastly level of survival
struggles, we will now aim to upgrade humans
into gods and turn Homo sapiens into Homo Deus. And then he adds, but think
in terms of Greek gods. And how is that going to happen? He thinks by three types of
engineering-- biological, cyborg, and AI, pointing out
that, every day, millions of people decide to grant their
smartphone a bit more control over their lives, or they
try a new and more effective antidepressant drug. In pursuit of health,
happiness, and power, humans, he says, will gradually change
first one of their features and then another and
another until they will no longer be human. And in an interview,
he added, humans are very likely to be upgraded
into gods within a century or two, at most. TICHO TENEV: Wow. Let's shift to a slightly
different topic but very much related. You write about information as
an indication of intelligence and, ultimately, of an
intelligent [INAUDIBLE].. For example, you quote Francis
Collins referring to DNA as God's language. Well, Google's
mission statement is to organize the
world's information. You might say that Google is
in the business of information. Google is also developing
AI, so these two concepts are clearly related,
from Google's perspective. What is your take
on the relationship between information
and intelligence? JOHN LENNOX: Well,
of course, this is a concept that
absolutely fascinates me, as a mathematician. We have basically two
kinds of information-- syntactic information
of the kind described by Claude Shannon, and a great
deal is known about that. And then, there's
semantic information, the kind that conveys meaning. And that is very
difficult to define. Now, you refer to
Francis Collins. The human genome contains a
vast database of information for making human beings. DNA is a word with over 3
billion letters in a chemical alphabet-- A, C, G, and T. Now, when we
see any sequence of letters or symbols that have a
semiotic dimension-- that is, they carry meaning, even a
four-letter sign for exit-- we immediately infer
that its existence involves not only
machine-like processes but human intelligence. And it seems to me very
clear that the language-like structure of DNA points to the
existence of an intelligence external to our world-- the mind of God, the creator. In fact, one of the most
important statements about this is the intriguing,
very first sentence in the Gospel of John
in the New Testament-- "In the beginning was the Word,
Logos, and the Word was God. All things came to
exist through him." Let me put it this way. We live, it would appear,
in a word-based universe. And I see science
as confirming it. TICHO TENEV: Absolutely. Well, I have a
question that I have been wondering for a
long time, and I hope you can give me the answer. Just based on
experience-- observation-- to build a complex system
takes up, requires, a lot of information. You may call it the
design blueprints. Or you can have a very
complex system that builds another complex system
with little information, but the building
complex system already has the information in it. So it seems to me
that information has to come from somewhere to
build something complex, which leads me to believe that there
should be a kind of information conservation law. Do you think we could
formalize a conservation? JOHN LENNOX: Yes. This is a fascinating
observation. And, you know,
Léon Brillouin, who was author of the landmark
book, "Science and Information Theory," he seems
to agree with you. He said, a machine "does not
create any new information, but it performs a very
valuable transformation of known information." And in keeping with this, one
of my intellectual heroes, Sir Peter Medawar, who won the
Nobel Prize, he wrote this. "No process of
logical reasoning, no mere act of mind or
computer-programmable operation, can enlarge
the information content of the axioms and premises
or observation statements from which it proceeds." And for Medawar, this
pointed to the existence of some kind of law of
conservation of information in the sense that
there are limits to what can be derived from
a given set of physical laws. He thought that might mean that
certain things may actually be unknowable to science. And he challenged scientists
with this very interesting challenge-- can you find a
logical operation that would add to the information
content of any utterance whatsoever? Leading researcher on
the origin of life, Bernd-Olaf Küppers,
says something similar. "There is no
complexity-generating machine that can generate more
complexity than is contained in its input." And that's the key statement
to which you referred. This fits, he says, with
our fundamental experience, that there's no natural process
that leads to an enrichment without cause or
creation out of nothing. Now, technically,
that seems to me to mean that no Turing
machine can generate any information
that does not either belong to its input or its
own informational structure. And so theoretical computer
science of this kind does seem to support the
idea that some kind of law of conservation of information
exists, as you suggest. Now, what fascinates me about
that is that it would rule out the origin of life by unguided
natural processes working on raw chemistry and,
therefore, has huge implications for our understanding. The origin of life will certainly involve processes that can
be described scientifically, but there must be an
input of information from outside the system. TICHO TENEV: Yes, and
I would definitely like to contribute
in some way, and I hope others will be interested
in this topic, as well. So I'm going to change topics
once more so we can move on. You often speak
about the harmony between science and religion. Could you explain why you
think that belief in God is compatible with
being a scientist? JOHN LENNOX: Well,
there's harmony, I believe, between science
and some religion but not all. And I cannot speak
for other religions, but I certainly can
speak for Christianity. And I find my first
reason, actually, in the history of
modern science itself. Think of the great pioneers-- Galileo, Kepler, Boyle,
Newton, Faraday, Clerk Maxwell. Every one of them
believed in God. And that was no accident. The famous historian and
philosopher of science, Sir Alfred North
Whitehead, held, as put so succinctly by
CS Lewis, that, I quote, "Men became scientific. Why? Because they expected
law in nature. And they expected law
in nature because they believed in the legislator." In other words, the faith
that these scientific geniuses had in God, far from
hindering their science, was the very motive
that drove it. Secondly, people often
think that God and science are incompatible, as they are
the same kind of explanation, and so they conflict. But this is just not true. They are not the same
kind of explanation nor are they alternative
explanations. Let me spell it out this way. The science explanation,
roughly speaking, deals mainly with what
the universe consists of and how it works, whereas
the God explanation deals with the why of its origin,
meaning, and purpose. And the God explanation
no more conflicts with the scientific one
any more than Henry Ford and automobile engineering
conflict as explanations for the motor car. They are different kinds of
explanation, yet both of them are necessary. Now, the idea that
God and science are in essential conflict is
very easily seen to be false. Let's take the Nobel
Prize for physics. It was won in different years
by the Irish physicist Ernest Walton and the
Scotsman Peter Higgs. What divides these men is
certainly not their physics. They both won the Nobel Prize. But something does divide
them, and it's their worldview. Walton was a Christian,
and Higgs is an atheist. Now, let's grasp this
carefully because it's a key to understanding
this whole topic of science and religion. There is a conflict,
a very real one, but it is not between
God and science. The real conflict is between
opposing worldviews-- atheism and theism-- and there are scientists,
brilliant scientists, on both sides. And that means that the
question we should be asking is which worldview fits
best with science, atheism or theism? TICHO TENEV: Mm-hmm. Yeah, well, I grew up
in a communist country, and I was taught that religion
is a means to oppress the mind and keep people in subjugation,
whereas science liberates and frees thought. I still hear the
same view today. What would be your
succinct response to a perspective like that? JOHN LENNOX: Well, certainly,
very sadly, some religions may well suppress
scientific ideas but not genuine Christianity,
as I just explained from a historical perspective. I have personally found
Christianity mind-expanding and a great motivator for doing
mathematics and science itself. TICHO TENEV: Great. Well, you certainly
made a very strong case that Christianity
supports science. What about the other way around? Do you think atheism
has a blind spot? And if so, what are the benefits
of a biblical perspective? JOHN LENNOX: Well, I
do think that atheism has a serious blind
spot, and it's this-- following atheism to
its logical conclusion actually undermines
the rationality we need for science. I often ask scientific
colleagues, for fun, to tell me about
what they do science. They usually say, well,
the brain or the mind. Some of them don't
believe that there is such a thing as the mind. I happen to. But that's not the point. So I say, tell me
the brief history of the brain with
which you do science. And they tell me
that the brain is the product of mindless,
unguided, naturalistic processes. And I look at them, and I smile. And I say, and you trust it? Tell me honestly, would you
trust the computer or AI system that you use every
day if you knew that it was the product
of mindless processes? Now, here's the
interesting thing. I have always pressed
for an answer, and I've always had the
answer, no, I would not. So I say to them, I see
that you have a problem. You're giving an explanation
of the brain that undermines the rationality that you
need, not only to do science but to formulate any
argument whatsoever, even one about the brain. The point is, as my teacher of
quantum mechanics at Cambridge, Sir John Polkinghorne,
said long ago, the reduction of mental events to physical events,
of atheist physicalism, destroys meaning. And I've often put it this way,
that atheism not only shoots itself in the foot--
that's painful-- it shoots itself in the brain. And that's fatal. Now, to contrast with that,
the biblical worldview asserts that there is a
creator of the universe and the human mind, and that
validates our rational powers, as the early pioneers
of science saw. But not only that,
the biblical worldview teaches that all
human beings are made in the image of
God, which gives us a unique value and
dignity and sets the ethical foundations for
civilized life and research. Science fits very well with
theism and not really at all with atheism. In fact, I go as far as
to say the real conflict is not between science
and faith and God. It's between
science and atheism. TICHO TENEV: Then
what would you say is the relevance of
your Christian faith to the desire, the quest, for
superintelligence and the AGI? JOHN LENNOX: It is
extremely relevant, and that's one of the reasons
I wrote the book "2084." And the clearest
answer, I think, to this is to quote Harari's "Agenda
for the 21st Century." He says there are two
things that need to be done. First, solve the problem
of physical mortality. And his view is that every
technical problem has a technical solution,
and physical death is a technical problem. We don't need to wait
for the Second Coming-- I'm quoting him-- in
order to overcome death. And secondly, he says, the next
big agenda for the 21st century is to intensify human happiness. To do that, he says,
quote, "We shall need to reengineer
Homo sapiens so that it can enjoy
everlasting pleasure, and we will now aim to
upgrade humans into gods and turn Homo sapiens
into Homo Deus." (CHUCKLING) Now, my
reaction to this, which may surprise some of you,
is to say, you're too late. You're simply too late. The problem of physical death
was solved 20 centuries ago by the true Homo Deus, Jesus
Christ, the man who really was God, who died and was
physically raised from the dead by the power of God. He has broken the hold
that death has on humanity. Transhumanism seeks to
turn humans into gods, but the core message
of Christianity is that there's been a movement
in the opposite direction. God himself has become human. Now, this is so revolutionary
that I plead with people to listen to what it has to
say before they reject it out of hand. Because, you see, Christ
promises that everyone who hears his word and believes that
God sent him will receive a new life, will not come
into judgment-- that's the moral issue-- but has already passed
from death to life. That person also enters
into peace with God. And who of us is there
that does not want peace? That person will receive a
new joy and power to live. But there's even
more to it than that. Those who seek to upgrade
humans should listen carefully to the Christian scenario. It is this, that Jesus will
one day return to this Earth and raise from the dead all
those who have trusted him. And to use contemporary
AI language, he will upload them physically
into a new and very real world that we know as heaven,
where they will live forever, exploring the wonder
of God and his works. It's very interesting to me to
know that the word "transhuman" was first used in
connection not with science but in an English translation
of Dante's "Paradiso," where Dante tries to imagine the
resurrection of his own body, saying "Words may not tell
of that transhuman change." This is the true transhumanism. Now, of course, this all depends
on Christianity being true. But as I've written
in my book, "2084," there's a great deal more
evidence for its truth than there is for the
realization of Harari's transhuman dreams. Indeed, we need to be
very careful, don't we, before we start trying
to reengineer humans. Why? Because human beings
are unique creations made in the image of God,
making them so special that God himself became one
of them, as Jesus Christ. The Word became flesh
and dwelt among us. TICHO TENEV: Yeah, I find it
really interesting and ironic at the same time that,
all this quest for AI, it has been answered. In fact, your book presents
this development of AI and the dream of AGI in
this very ancient context but still relevant, the idea
of creating a Homo Deus. And you show that
there are really two paths that
humanity has followed from the very beginning. One is humanity trying to become
godlike by its own devices. And the other one is
God actually becoming human, the second
being the genuine path. And when I was
reading your book, it was quite troubling to me
to see what kind of society we're becoming as we follow
the counterfeit path, the other path. But also, you bring a
great amount of hope, telling us what
we could become-- in fact, what God has
promised us we'll become if we follow the genuine path. So in terms of AI,
there are clearly potential dangers with it. But does that mean we
should not do AI research? JOHN LENNOX: No,
we should indeed. We should do AI research. And I'm glad you asked that
question because I would actively, and do, encourage
bright young people of scientific minds to
get into AI research, first of all because of the
great good they can do-- for instance, the work
of Rosalind Picard, outstanding in her Affective
Computing lab at MIT. She successfully developed
facial recognition, AI technology that can
see from a child's face whether it is about
to have a seizure. And she can prevent
that happening. It's a wonderful
thing, as you will see if you look at her TED talk. Also, from what we've
said, it's obvious that we need people
working in AI that can think ethically about it. You see, the ethics built
into any kind of AI system will be the ethics of a
person or group of people. And as Christians, we
should be at the table to influence that
input positively. I was watching a lecture
by Jordan Peterson recently on Genesis. And he paused and
said the statement that God made human beings
male and female in his image is a foundation of
all civilized life. I believe that that
basis for ethics needs to be brought to
the table of AI research, particularly when it involves
the nature of humanity. TICHO TENEV: Well,
thank you very much for this
insightful conversation. I have just one more
question before we open up for the live Q&A
from the audience. And let me just once
again remind our audience that you can type in your
questions in the chat window below the YouTube presentation. So my last question for
you is, what would you say to encourage those of us who
work in the high-tech industry when we face
challenges living out our faith and
spiritual convictions? JOHN LENNOX: Well, I
think the first thing is to do good scientific work
to the best of our ability. And secondly, as
we do that work, to be people of integrity,
willing to help our colleagues be team players of such
quality that others eventually ask us what lies behind
it all and give us an opportunity, in a
non-threatening way, to present to them a
reason for the hope that we have in Christ based on
his historical resurrection from the dead. At this point, many of you
may say to me, come on. How can you believe that kind
of history as a scientist? Surely, that's impossible. OK, well, let's take
the history first. One of the world's leading
contemporary historians and experts on the New
Testament is NT Wright. And he concludes this,
the historian of whatever persuasion-- and that means
atheists and Christians and all the rest-- has no option but to affirm
both the empty tomb of Jesus and the meetings with
him as historical events. I regard this
conclusion as coming in the same sort of category
of historical probability so high as to be virtually
certain as the death of Augustus in AD 14 or the
fall of Jerusalem in AD 70. Oh, but you say,
we're scientists. He's a historian. OK, you mentioned
Francis Collins earlier. And in a fascinating interview
in "Scientific American" with John Horgan, Francis, who's
director of the NIH in the USA and formerly director of
the Human Genome Project, said this, "My
first struggle was to believe in God, not a
pantheist God who is entirely enclosed within
nature or a deist God who started the whole thing
and then just lost interest, but a supernatural God who is
interested in what is happening in our world and might, at
times, choose to intervene. My second struggle was to
believe that Christ was divine, as he claimed to be. As soon as I got
there, the idea that he might rise from the dead
became a nonproblem. I don't have a problem
with the concept that miracles might
occasionally occur at moments of great
significance, where there is a message
being transmitted to us by God Almighty. But as a scientist,
I set my standards for miracles very high." And like Francis
Collins, I have found that the evidence for
the resurrection of Jesus satisfies those standards. And you can see more
about that on my website, johnlennox.org, because that's
the beginning of a big story. Thank you so much
for joining us, and thanks to Ticho for a
wonderful set of questions. TICHO TENEV: Thank
you very much. I really enjoyed this
conversation with you. We actually have a few
questions already queued up. And I will start
with the first one. Andrey is asking,
Professor Lennox, thanks for the amazing talk. What role should religion play
in defining the future of AI? JOHN LENNOX: Well, I
think that the main area-- and this is a very
complex thing-- it's very obvious to me, reading
the kind of ethical research that's going on,
that this is where the input needs to be put. Now, you find, across
religions, there are certain basic
common agreements, say, on the value of life or
on honesty and integrity and so on. Those are hugely important
moral principles. Without them, society
would be impossible. And therefore, it seems to me
that, at that initial level, it's very important
that we think through the implication
of that common, basic set of moral rules and the way
in which we implement them into a computer system. For instance, in
self-driving cars-- here's an obvious case-- you've got to program
the sensors to avoid certain things. But what do you do if the sensor
picks up an old man crossing a road with a cart and a
donkey, and the alternative is to drive into a bus
queue of young children? You've got to program
the ethics into that. Because the computer
is non-moral. The algorithm is not moral. So it must have a moral
base to decide on. So I think that people with
religious moral convictions can and ought to get involved. Because worldview
really determines our moral convictions,
to a great extent. And therefore, we have a right
to sit together and compare and give our inputs into
this kind of question. TICHO TENEV: Kass
is asking, what are your thoughts on phenomena
like black hole information paradox or non-deterministic
interpretations of quantum mechanics, that randomness
may have a role to play in emergent information? JOHN LENNOX: I wish
I knew more about it. But from what I can
see, the key problem-- and it's the one I mentioned
about the origin of life-- the creation of
linguistic-like streams of DNA, of symbolic information, the
only source that we have ever come across in terms of our
empirical experience is mind. We have never come
across any other source. And I find it very difficult
to see how life can come out of non-life, or how language can come out of raw physics and chemistry. And if you read serious-minded
physicists who've read this kind of
thing, it's pretty clear to me that this problem remains unsolved. It was raised in 1953, at the time of the Miller-Urey experiment, when they thought they'd solved the origin of life by passing electricity through a discharge tube and producing some amino acids, which are the basic building blocks of life. 1953 is a long time ago. And we're actually further
away from the solution. What happens is that
none of these scenarios-- even the ones with very
complicated backgrounds, like quantum
indeterminacy and so on-- none of them, so
far as I ever read-- and I've tried to
read as much as I can, particularly recently
on quantum information-- can produce any information ab initio, without intelligent human input. And so, it corresponds
exactly to the problems raised by the notion of laws
of conservation of information. Now, there's a lot more
work to be done on this. I've no doubt about that. TICHO TENEV: Yeah, and if
I could add one point here, I think, in addition
to just information, there has to be an intention. As far as I can
tell, all information comes from some
kind of intention. JOHN LENNOX: Yeah, I think
that's absolutely right. That's a very important point. Because many people
will question any level of intentionality
or teleology in the universe. And often, they exclude that
and say, that's outside science. But they don't mean that. They mean it's not true at all. And I think one
great problem is-- and we need to beware of it-- there's a very widespread view
that science and rationality are coextensive. And that's just not true at all. But that idea that
science is the only way to truth, which we call
scientism, is highly dangerous. And actually, it's logically
self-contradictory. Because the statement-- science
is the only way to truth-- is not a statement of science. So if it's true, it's false. TICHO TENEV: [CHUCKLES] Next question. Glenn is saying,
Professor Lennox, thank you for this
wonderful talk. Where do you draw the line
between God's creative work in Genesis 1 and
evolutionary science? JOHN LENNOX: Well, this
is a fascinating question. And I'm glad to be
able to tell you that I have a new book coming
out called "Cosmic Chemistry-- Do Science and God Mix?" And I have done a lot of
work in updating a book that I wrote some time ago. And this is one of the
central arguments in it. Drawing a line-- well,
what I would say is this. Without going into huge
detail because you'd need to look at my
website for that, the basic problem is, first
of all, a confusion about what evolution is and does. We have a mechanism, clearly-- natural selection,
mutation, genetic drift, and a few other things-- and
it clearly does something. That's very clear. And we can trace adaptations
and various things back to that mechanism. But the question, the deep
question, is, is this creative? We can say it'll
produce variation. Now, for a long time,
there was great confusion created by Richard Dawkins
in his book "The Blind Watchmaker." Because he says that this blind
mechanism, natural selection, is responsible, I quote, "for
the existence and variation in all of life." He has admitted, and it
took him a very long time to do so, that that is not true. It may be responsible for
some of the variation. That's pretty obvious. We can see that happen. But evolutionary
processes, by definition, cannot be responsible
for the origin of life. Why? Because you need to have
life before natural selection or mutation can operate. And it's there we
need to concentrate. And, you see, if we are thinking
of naturalistic material mechanisms, we're thinking, in a
sense, abstractly of a machine. And if we regard whatever those
mechanisms as machines, then, the Church-Turing
thesis says they can be simulated by a Turing machine. And if a Turing machine cannot
produce any information beyond what's in its input our
informational structure, that shows you that you cannot
get the origin of life without an intelligent
input from outside. The meaning of a system will
not be found inside the system, as Lord Jonathan Sacks, the Jewish polymath and chief rabbi, said in one of his books. TICHO TENEV: That's
a good point. The next question, actually,
is from two people. Alan and Sophia both
are asking, what is your opinion on consciousness? Is it completely reliant on the
brain as a computing mechanism and, therefore, replicable
by artificial intelligence, or is there something
supernatural occurring? JOHN LENNOX: Oh, this
is a wonderful question. And I'm sure my answer
will not satisfy you. But it's a great
question because it's clear there's a connection
between consciousness and the human brain. Because we can
measure, for example, the result of what
we see in terms of electrical stimulation. So what I can say
from my reading-- and I've actually
written a little bit, a chapter in a monograph on
this because it fascinates me-- that there's a brain story,
and there is a mind story, but they're not the same. You see, a neurologist can tell
what's going on in my brain, but he can't tell what's
going on in my mind. I can tell what's
going on in my mind but not what's going
on in my brain. So there appears to be
a fundamental difference and a very clear
coupling between the two. But mind and brain, it seems to
me, are simply not identical. Now, it was very
unpopular to say that until relatively recently,
when some leading thinkers like David Chalmers
are beginning to rethink the materialistic view. You see, information--
coming back to information--
information is not material. It may reside on material substrates, but it isn't material. And to my mind, that's
the end of materialism. So we have this
great mystery that's totally opaque to science at the
moment and to everything else, as far as we can see. What is consciousness? Nobody knows what it is. And if you say, will AI ever be able to simulate it, well, first of all, you have to
know what it is before you can move in that direction. Nobody knows what
consciousness is. And I've consulted the
world's leading thinkers on it and read their stuff. They just simply do not know. And that's why I said
earlier in the talk that this quest for artificial
intelligence, AGI, some people are saying we don't need to
be concerned about replicating consciousness. We may not be able to do that. It doesn't matter. What we need is more
and more intelligence. We don't need awareness. But, of course, that means that
we'd never replicate anything like a human being. So it's a great
question to think about. TICHO TENEV: Yeah. Related to what we have
just been discussing, John Baumgardner is asking,
since language is simply encoded meaning and
meaning is nonmaterial, is there any rational
basis for concluding that matter, either
in biology or silicon, can possibly generate
linguistic messages? JOHN LENNOX: No, I just
don't think there is. And actually, that relates
to part of the preceding question that says, is
there something supernatural about the mind? Now, this is a very
interesting question. And the best writer on this
topic, I believe, is CS Lewis. Because he pointed
out that if you take a naturalistic
explanation of the mind, as I mentioned earlier,
you reduce the mind to physics and chemistry,
and you destroy all meaning. So if a naturalistic
explanation-- a physicalist explanation--
of mind destroys all meaning-- and we know that we
have meaning and we can see meaning and so on-- that means that
the mind itself has a non-naturalistic dimension. In other words, it has a
supernaturalistic dimension. I would put it this way--
you might find this very provocative-- I don't need to start with
the resurrection of Jesus and his miracles
to see that there is a supernatural dimension. I start with myself. I start with you. And I believe that part of the fact that we're made in the image of God is that we have a real supernatural dimension inside of us, so to speak, connected with our physiological and material substrates but not identical with them. That is a thing that
needs to be explored. And, of course, the
mystery of consciousness sits right in the center of it. TICHO TENEV: And perhaps
with some more math, we can work it
out, or maybe not. Misha Namilav is asking, could
you possibly expand on the idea that the human brain could
not evolve naturally? There are many examples
of simple models and laws producing complex behavior-- "Game of Life," three-body
problem, neural networks. And Misha had an
expanded answer. Yes, there we go. JOHN LENNOX: Yes, of course. There are fascinating models. There's a theory of emergent
self-organizing systems and all this kind of thing. The trouble is that
none of these produce-- they do produce
complex behaviors, but they don't produce
linguistically complex sentences. And that's a key difference. You see, complexity
is one thing, but linguistic complexity is
a completely different thing. A stone or a rock is
immensely complex, but it's not
linguistically complex. And I think it's so important
to distinguish these two things. Because we have plenty of
mechanisms and explanations for various kinds of complexity,
and they are fascinating. Watch a crystal
forming, for instance, and crystals can be
extremely complex. But linguistic-like complexity,
such as we find in DNA and in words, in
general, doesn't come from those systems. And I've tried
very hard to write about this in great detail in
my new book, "Cosmic Chemistry-- Can Science and God Mix?"
which will be out in September, I hope. TICHO TENEV: Hopefully, an
easy question from Shilpi. She's relaying her
son's question. If an AI was able to
perfectly mimic a human-- emotions, reason and so on-- do you think it would
go to heaven upon death? JOHN LENNOX: Well, that's
a hypothetical question. And what do you
mean by mimicking? This is one of
the old questions. The Turing test-- if a
machine, say an AI machine, can answer questions
spoken to it by a human and deceive that person
into thinking it is a human, is it a human, and should
we treat it as a human? And so the word
"mimicking," that's the problem, and "simulating." And there are lots of
things we don't know. God has not told us everything
that it is possible to know. But this is such a speculative
and hypothetical question that I would run a great risk
in even attempting to understand it, let alone answer it. So I do apologize to the
ingenuity of the questioner. TICHO TENEV: Tiziano is asking-- I hope I pronounced
your name right-- what is your view
on free will and AI? Is free will real
or an illusion, and can AI ever achieve it? JOHN LENNOX: Wow, how
long have you got? TICHO TENEV: [LAUGHING] We've got four minutes. JOHN LENNOX: Free will is
a hugely important concept. And I hinted at it in my talk. I said one of the things that
Genesis makes very clear to us is that human beings have
at least a certain freedom of choice that makes
them moral beings. Because if you have no freedom,
you cannot be a moral being. And that's why
most of us believe that we have some
degree of freedom because we are moral beings. Now, I know there's a
whole range of thinkers that deny the actuality of free
will, like the late Stephen Hawking and so on. But it seems to
me extremely clear that the greatest gift that
God has given to all of us is that ability, even
though it's limited-- I can't choose to run
at 50 miles an hour, I just can't choose to do that-- but it's the ability to
say yes or no, particularly to other people
and to God himself. Because that's where
love comes from. If my wife was a robot
and she came home, and I pressed the
button marked kiss, and she gave me a clanky,
technological kiss, it wouldn't be very
thrilling, would it? There'd be no warmth
or humanity to it. The key to human relationships,
and relationships with God, is, of course, that
we're free to choose. And let me say this, one of
the things that convinces me of the truth of Christianity is
that God doesn't browbeat us. He's not a totalitarian God. He gives us free choice. He sends his Son
into the world who shows us what the love of God is
by his dying and rising again. But if we choose to ignore
him, he loves us so much, he'll accept that, even
though it pains him. That's one of the
wonderful things about it, that our relationship
with God crucially depends on us having that
certain degree of freedom. Now, if you're not
satisfied with that, I've written a big book on
it, I'm sorry to tell you. It's called "Determined to
Believe?"-- question mark. And you can find me
in there wrestling with all these questions and
the various biblical statements about them. TICHO TENEV: We're
almost out of time, so I would like to finish with
a question from Landon, which says, thank you for
speaking to us today. How can we best advocate for
moral principles in AI research in the face of growing moral
relativism in our culture? How can we choose
firm standards? JOHN LENNOX: Well, thank you
very much for that question. And it seems to me
that you yourself are aware of firm standards. And what really gets
through to people, I think, is if we
turn out to be, in our workplace, people
that can be trusted, people of moral integrity. Because people notice that. One of the things in our culture
that is at a great premium is the matter of trust. And we have to have so
many lawyers and such great difficulty because
people cannot be trusted. So I feel we need to start
with ourselves and decide what basic moral principles
we are committed to. The Ten Commandments is a
very good place to start, and the teaching of Jesus
and the apostles and the New Testament. And as Christians,
God will give us the power to live
by those principles, not that we don't fail,
but we can seek his help in living in society. And if we do so, I believe he
will give us real opportunity to then discuss why we
believe those principles. I find, in life, the
important thing is not to preach at people
but to be friendly, to ask them questions
about these big issues, and ask them where they
get their values from and what their values are, and
get them out into the open. And I discover, like Socrates,
if you ask people questions, they'll soon ask
you some questions. And that gives
you an opportunity to enter into the debate. But doing it gently and
with respect, you will never lose faith by doing that. And I try to do it
in my own little way. And I know the difficulty
in the relativistic culture. But it's good to be armed
with some good arguments. And nobody believes that
all morality is relative. If they tell you that,
kick them on the toe and see what happens. And if they say you shouldn't
do that, you say to them, oh, but I thought all
morality is relative. I thought I'd enjoy doing that. And no one believes that all
truth is relative, especially when they go to the bank
manager to try and borrow a sum of money. A friend of mine once
said that people only think things are relative
when they regard them as of very little importance. And that can help
you carve a pathway through all these questions. But I used to think
that I'd solve all these big questions
when I was 30, and then I'd begin to live. And somebody told me-- and
I've never forgotten it because it's so useful-- they said, you've got it wrong. Solving the questions is living. So instead of regarding
these things as big problems to be solved and then we
get on with living, we can-- and especially if
we're Christians, and this is where we can really
put our faith in God to work-- we can realize that
maturing as people, as individuals, and
being good workers in AI or anywhere else, can be
achieved and be very satisfying when we discover that God
gives us his strength, even though we fail so often. The wonderful thing,
to my mind, is God accepts me, not dependent
on what I've done or achieved but because of what
Christ has done. And that sets me free to live. TICHO TENEV: Absolutely. Well, thank you very
much, Professor Lennox, for a wonderful and
enlightening talk. And actually, I also want
to thank our audience for great questions. And with this, we must conclude
the talks for this time. [MUSIC PLAYING]