>> Welcome everyone to an Authors at Google
talk. We are very pleased to have Sam Harris here. And to introduce him, we have Jed Salazar. Thank you. >>Jed Salazar: Hi everyone. I'd like to welcome Sam Harris to Google Santa
Monica. And to give you a little bit of background
about Sam Harris. Sam Harris has written two bestselling books
-- "The End of Faith" and "A Letter to a Christian Nation". Sam has appeared on, has appeared on countless
shows and has written many publications. He received his degree from Stanford in philosophy,
has studied religion for many years, and most recently received his Ph.D. in neuroscience
from UCLA. Sam is also a co-founder and CEO of the Reason
Project which is a non-profit foundation devoted to spreading scientific knowledge and secular
values in society. And without further ado, I'd like to welcome
Sam. [Applause] >>Sam Harris: Well thank you Jed, and thank
you for the invitation to speak here. It's an honor to speak here. I'm gonna be giving a talk I've never given
before. So you will have the luxury of telling me
whether I made any sense at the end. Hopefully I'll leave a good chunk of time
for a conversation. I'm gonna speak about human values and about
morality and how we can understand these scientifically and in fact not only understand descriptively
what people are doing in the name of morality and human values, but actually come up with
scientific answers to moral questions. It's widely believed that there is no way
of doing this. That the most important questions in human
life -- like what to live for and what to die for and what constitutes a good life -- are
by definition outside the purview of scientific objectivity. And so I'm gonna try to give you a framework
for seeing that that's not so. But it's believed that facts and values
are distinct and dissimilar kinds of things -- that our talk about one does not translate
into talk about the other. There's no description of the way the world
is that can get you to a description of the way the world ought to be. And we have David Hume and G. E. Moore
and Karl Popper in philosophy telling us that that's so. And most scientists have simply swallowed
that philosophy whole. I'm gonna argue that that's not true and that
it's crucial that we see that it's not true. Because it seems to me the only way we can
get to a world in which we converge on the same kinds of moral, environmental, political,
social solutions to global problems -- the only way we can get there is to have some
kind of universal framework for talking about right and wrong and good and evil. And what we've been left with, what this fragmentation
of our discourse has given us, is that it has delivered us into a world where the only
people who claim to be moral experts -- indeed the only people who claim that there's such
a thing as moral expertise -- are religious demagogues of one or another flavor. And this has shattered our world into these
separate moral communities where there's nothing for a fundamentalist
Christian to say to an Islamist to bridge their mutually incompatible world views. And so I wanted -- I'll talk for a minute
about why religion can't be the repository of our moral wisdom and our notion of the
good life. But the crucial bit to take on board at this
point is that it's only religion that is saying that there are right and wrong answers
to moral questions. And the scientific community has more or less
said, "You're right. Science and reason can never give you a universal
framework for moral questions." Now we should have been able to see that religion
wasn't gonna give us a universal framework. Because as Bertrand Russell pointed out over
a century ago, there's such a bewildering number of religions on offer, making mutually
incompatible claims about the nature of reality and how to live within it that even if we
knew one of our religions was perfectly true, even if we knew this was God's multiple choice
exam -- is it A) Hinduism, B) Buddhism, C) Shamanism? There are so many religions that every believer
should expect damnation purely as a matter of probability. So it seems to me that should end the argument. It hasn't, but in any case, Russell had that
right. Another way to see that religion isn't tracking
reality as it is: this is a world map of religious denominations. You can see that this is not the way genuine
knowledge should be partitioned in our world. It shouldn't follow national or political
boundaries. Take India as an example. India, Nepal, the main places on earth where
they seem to have discovered that there's not just one god of Abraham to worry about,
but there's a multiplicity of gods. There are thousands and thousands of gods. What are the chances that they, among all
the earth's people, have discovered that the elephant-headed god Ganesh really exists and
needs to be propitiated? Does anyone think this is the way human knowledge
is developing? I don't think so. In any case, the contradictions between faiths
are only one of the problems. And within any faith, you have impressive
patterns of contradictions. So this is an image of contradictions within
the Bible -- both the Old and New Testaments. Every arc is a verse that contradicts
another verse, and the depth of the grey bars indicates the number of verses
in each chapter of each book. And these are real, deal-breaking
contradictions -- contradictions like: John the Baptist was in prison when
Jesus went into Galilee; John the Baptist was not in prison when Jesus went into Galilee. These are moments when the text refutes itself. And then there's the inconvenient fact that
some of the easiest moral questions that we have ever had to solve, religion gets wrong. The Bible and the Koran both support slavery. There's absolutely no question that theology
was on the side of the slaveholders during our long effort to get rid of slavery, as Jefferson
Davis, the President of the Confederacy, pointed out:
"Slavery was established by decree of Almighty God -- it is sanctioned in the Bible, in both
Testaments, from Genesis to Revelation." This is true. No Christian, Muslim, or Jew of any
flavor can intelligibly deny that fact. And so it seems to me that if the easiest
and most significant moral questions are not solved by these scriptural traditions, in
fact are where we get the wrong answers from these traditions, then we have to recognize that our moral wisdom
is not coming from these texts and when you go to the texts and pick and choose the wise
bits, as you have to, given what's in them, when you notice for instance that the Golden
Rule is a very wise moral precept and we should take that on board, and then you notice that
there are other rules, like if a woman is not a virgin on her wedding night, she should
be stoned to death on her father's doorstep. That rule hasn't aged very well. That process of picking and choosing is clearly
something that we bring to the text. This is not -- we're not getting it from the
text, we are having to bowdlerize the text based on our own moral intuitions and based
on a larger conversation about the nature of human flourishing and human well being. You can also see that this gap between facts
and values looks a little suspicious when you look at how we talk about facts and values. We talk in terms of belief. We believe things about the nature of reality. We make assertions about what is so in the
world. We make assertions about facts, we form scientific
beliefs and this includes history and journalism and any other type of conversation where we're
saying how the world is. But we also form beliefs about values. We talk about morality and meaning and spiritual
principles and it's often thought that these are radically different acts. But I have thought for a long time that belief
is in some sense content neutral, so that to make a claim about, say, chemistry --
water is two parts hydrogen and one part oxygen -- is very much like making a claim in ethics
-- like it's good to be kind to children. This is certainly not intuitive
for most people, but this is the way it has seemed to me for quite some time. And we did some fMRI work which seems to have
borne this out. We've put people in the scanner. We gave them propositions to read to judge
either true or false and then we compared belief to disbelief. And on the left you have all stimuli which
gave us this very localized region of signal in the ventromedial pre-frontal cortex and
then we were able to break out some of our individual categories of belief. And here I break out mathematics and ethics
which are perhaps the most different kinds of stimuli. Mathematics was just mathematical equations
that were true or false: 2+2=4 versus 2+2=5. Ethics was statements like "it's good to be
kind to children" versus "it's good to beat your children". And while the overlap isn't perfect, it's,
by the standards of neuroimaging, quite close. And I'm reasonably confident that we can say
that the main reporter of belief in the brain, this region of the frontal lobe, is content
independent. And this region's involved in self-representation
and reward. And so when you make judgments of self-relevance,
this is the region you get in other studies. And so I think of belief as a kind of extension
of the self. When you believe a proposition, when someone
says something and you think, "Yeah that's true", you are in some sense taking it in
hand as part of your cognitive emotional repertoire. You're saying, "Yes I can use this, this is
gonna inform my emotion and behavior." There was a question after we did this study
whether religious belief was distinct. And so we ran another study. We used religion in that first study, but
we weren't able to break out the data. So we did a study just on religious versus
ordinary belief, and this time we selected believers and non-believers. So we had two separate groups of subjects
and we gave them very simple statements to read: "The biblical God really exists" versus "The
biblical God is a myth." And so the atheists in our study would answer
the nonreligious factual questions the same way as the Christians, and vice versa. But they would be diametrically opposed on
religious statements. And we essentially got the same result. This region of the ventromedial pre-frontal
cortex is apparently the reporter of belief in both Christians and non-believers on both
religious and non-religious topics. So what I propose to you is that belief is
a way that we attempt to map our thoughts onto reality, whatever reality
is altogether. And where we seem to succeed in this process,
we call it knowledge: where our talk about reality functions in such a way that it's
reliable, that it becomes a guide to the future, where we have significant consensus that
we're making sense, we call this knowledge. Obviously, a lot of our beliefs
don't map onto reality with any kind of fluidity, and so they're false. You can see there's a region where beliefs
are mapping on, but we don't call it knowledge, and that's all of the beliefs people have
about the world that are right essentially by accident. It's possible to believe true things
for bad reasons. In any case, that's just to say that religion
doesn't have some special corner on the market of value based talk. And there are many reasons to think that it
is not the best source of value talk. But the thing that religious people are right
about, even the bible thumpers and the jihadists of the world, people who we might be critical
of in all other respects, they're right to think that we need a universal morality. And it's long been obvious that we need a
universal morality. In the immediate aftermath of World War II,
the UN tried to put forward a Universal Declaration of Human Rights. And the American Anthropological Association,
in all its wisdom, said, "This is a fool's errand. There is no universal declaration
of human rights, because any effort to make a universal notion of human value is merely
to foist your provincial, white, colonial, merely local version of the truth onto the rest of
humanity. It has no intellectual legitimacy, this project." But please notice this is the best our
social sciences could do, essentially with the crematoria of Auschwitz
still smoking -- this was in 1947. So how have we gotten here? Well, it seems to me we have a double standard
in how we treat differences of opinion in the moral sphere. So you confront moral difference. For instance, a difference of opinion between
someone like the Dalai Lama and someone like Ted Bundy. The Dalai Lama wakes up every morning
thinking that maximizing compassion and helping other people is an integral part of human
happiness and this is what his attention is purposed toward for the most part. Then you have someone like Ted Bundy who woke
up every morning trying to think of which young woman he was gonna abduct and rape and
torture and kill -- and I think he killed something like 28 of them. A difference of opinion. I mean these are irreconcilable moral views. Many people take this disparity to suggest
that there is no ground truth -- that there's nothing Ted Bundy can be wrong about
and the Dalai Lama can be right about that gives us any kind of moral
bedrock. They have a difference of opinion, obviously. One likes chocolate, one likes vanilla, but no
one is wrong in any deep sense that cuts to the nature of reality. Now notice that we don't do this in science. Take physics, for example. On the left you have Edward Witten -- he's
a real physicist's physicist. If you ask the smartest physicists
around, "Who's the smartest physicist around?", half of them in my experience will tell you
it's Ed Witten. The other half will tell you they don't like
the question. In any case, Ed Witten's one of the patriarchs
of string theory. He thinks it's the greatest thing since sliced
bread. What would happen if I showed up at a physics
conference and said string theory is bogus? It's not my cup of tea. It doesn't resonate with me, it's not how
I choose to view the universe at the smallest scale. Well nothing would happen because no one would
care and that's just the point. My opinion does not count. I'm not adequate to the conversation about
string theory. I don't have the mathematical expertise. I don't understand string theory. And this is what it is to have a domain of
expertise. Certain opinions can't count. We have convinced ourselves somehow that in
the moral domain, everyone gets a vote, that epistemology has to be run by democratic principles. Every opinion counts equally. There's no such thing as moral expertise. There's no such thing as moral talent. There's no such thing as moral genius. I would argue that that's almost certainly
untrue. Another way we've gotten here is we look at
certain moral dilemmas. How many of you have seen the trolley problem? This is kind of ubiquitous in philosophy and
psychology at the moment. So there's a trolley, a runaway car coming
down the track. If you do nothing it's going to hit 5 workmen
on the track. But you stand at the switch, and you can throw
the switch diverting it so that it will only hit one workman, saving a net 4 lives. How many of you think it would be a good thing
to do to throw this switch? You're gonna save -- it's either 5 or 1, someone's
gonna die. Well, most people when you test this, something
like 90% of people think that yes, you have a moral obligation to throw the switch. But when you describe it under another guise,
now you stand at a -- on a foot bridge. The same car is coming down the track destined
to hit 5 people. But you are beside a suitably large person
whom you can physically push into the path of the oncoming trolley. And he will die, but he will stop the trolley. Now I happen to think this is not so well
posed. We all have an intuitive physics which causes
us to burn a lot of fuel wondering whether he's really gonna stop the trolley. But if you stipulate that he will stop the
trolley, it still feels like a different problem. This pushes our intuitions around. People come away from this dilemma thinking
there's no there there. There's no way -- you frame it one way,
90% of people say yes. If you frame it another way, 90% of people
say no. There's no ground truth. Now notice that we don't do this when we confront
logical dilemmas. How many of you know the Monty Hall problem? How many of you know that you know the right
answer to the Monty Hall problem? OK, there's a few. You're on a game show, you've got three doors. Behind one is a new car, behind the other
two are goats. You pick door number 1, and your host opens
door number 2, revealing a goat. He now gives you a choice to switch to door
number 3. So you picked door number 1, and you can switch
to door number 3. How many of you think you should switch, that
it's wise to switch? How many of you think it's 50/50 and there's
no reason to switch? OK, well you should switch, and many people
don't see that you should switch, even when it's explained to them over and over
again. Do you have an answer to this? >> [Inaudible] >>Sam: What was that? >> [Inaudible] >>Sam: Right. >> [Inaudible] >>Sam: Well, this is -- in this case he's clearly
not going to reveal the car with three doors, and he's only revealed a goat. >> [Inaudible] >>Sam: OK, but he hasn't [Inaudible] >> [Inaudible] [Laughter] >>Sam: With those stipulations, in this case,
let's assume those are valid. People -- when you explain this problem to
people, people are logically dumbfounded. We have a very strong intuition that with
two doors remaining, why would you switch? There's a car behind one, there's a goat behind
another -- we know this. Two doors are closed. You have a 50/50 chance that your first pick
was right. Well, you don't -- you have a one-in-three chance, and switching gets you to 2/3. This is actually easier to see if you imagine
1,000 doors. You pick door number 1, and then Monty
Hall eliminates 998 doors, leaving door 576, a door you had never thought of. Should you switch to door 576? Well, you had significant uncertainty when
you made your initial choice. The chance was 1 in 1,000 that you picked the right
door. That remaining probability, 999 out
of 1,000, collapses onto the one remaining door. It's obvious that you should switch.
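(As an editorial aside, here is a minimal Monte Carlo sketch in Python -- not part of the talk -- that makes the 1/3-versus-2/3 split, and the thousand-door version, easy to verify; the trial count and door labels are arbitrary.)

```python
import random

def play(switch, n_doors=3, trials=100_000):
    """Estimate the win rate of sticking vs. switching in a Monty Hall game."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(n_doors)    # door hiding the car
        pick = random.randrange(n_doors)   # contestant's first choice
        # The host then opens all other doors except one, always revealing goats.
        # Switching wins exactly when the first pick was a goat;
        # sticking wins only when the first pick was the car.
        wins += (pick != car) if switch else (pick == car)
    return wins / trials

print("stick :", play(switch=False))                            # ~0.33 with 3 doors
print("switch:", play(switch=True))                             # ~0.67 with 3 doors
print("switch, 1,000 doors:", play(switch=True, n_doors=1000))  # ~0.999
```

Running it gives roughly one-in-three wins for sticking, two-in-three for switching, and near-certain success with a thousand doors.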
But the point is that for many people it's not obvious. Even when they've had it explained to them in terms of probability, they can
fall back into this intuition of why you would switch. We don't leave this thinking -- well, so there's
no right answer to the Monty Hall problem. There's no such thing as logical high ground. This is not a domain where we can have objective
knowledge. So to give you a framework for thinking about
how we can have objective knowledge about human values: it seems to me it arrives rather easily the
moment you realize that human values, or values of any kind reduce to a certain form of fact. They reduce to facts about the experience
of conscious beings, anything that can have happiness or suffering, anything that can
experience value on any level. So, why is it when you see a piece of broken
glass, you don't feel compassion, you don't worry that there's some terrible suffering
involved? Because you don't think there's anything we
can do to glass to make it suffer. You don't think that's a domain of experience. And if we care more about our fellow primates
than we care about insects, which we do, it's because we've drawn analogies, based on their
behavior and their underlying neurology, such that we think primates experience
a broader range of possible happiness and suffering than insects. Now the important point here is that this
is a factual claim. This is a claim about which we can be right
or wrong. It's possible that we have misconstrued the
neurology of ants, or we've misconstrued the relationship between physical complexity and
the possibilities of experience. And if we've misconstrued those things, then
maybe we'll have to revise our notion of possible ant value. But again, the cash value of value is in terms
of changes in conscious experience, actual or potential changes in conscious experience. And this is true, even if your values are
focused on another life. Even if you think that after death you're
either gonna be consigned to some kind of paradise for eternity or you're gonna wind
up in hell for eternity. Again, the thing you're worried about is the
experience of anything that can suffer an eternity of either kind. Obviously it wouldn't be realized at the level
of the brain in this case. But whatever is doing the knowing is the thing
you're worried about. And clearly there's a continuum of human experience
-- to speak exclusively about people on this side of the grave for now. There's a continuum of experience that we
recognize and movement on this continuum is fact based. There are right and wrong answers to how to
move. We know that you can live in a condition where
basically everything that can go wrong does go wrong. You can live in a failed state where it's
impossible to feed your children, where you can't reasonably form an expectation of collaborating
with a stranger because it's essentially a war of all against all. And we know it's possible to move rightward
in this -- on this continuum to something far more idyllic, something far more like
the kinds of lives we live where we have the freedom to have -- general freedom from violence,
freedom to use our time, to get educated, to pursue various interests, to enjoy our
lives. And no doubt this continuum extends further
in both directions. There are greater possibilities of human happiness,
and greater possibilities of human misery, than any of us have visualized, very likely. And there are many levels of analysis for
this continuum. There's a level certainly to talk about the
human genome and biochemistry and molecular biology, especially given the fact that we
are poised to meddle with our own genomes. Any changes we make relevant to the possibilities
of experiencing human well being are morally salient. And then in a much higher level of resolution,
we can talk about economic systems and political understandings and laws that govern financial
institutions. All of this materially affects human well
being and there are right and wrong answers to how those things will have consequences
in our lives. But the moment you're talking about human
well being, you are of necessity talking about changes in the states and function of the
human brain. So I would argue the mind sciences have a
kind of a privileged role to play here and that morality at some level is an undeveloped
branch of neuroscience and psychology and the sciences that treat our experience. Because any change in our experience is, we
know, being realized in the brain, and it is impressively constrained by the facts at the
level of the brain. And so what I suggest to you is the moment
you realize that there is a fact space both actual and potential that governs human value
and value of any kind, then I'm asking you to visualize what I'm calling a moral landscape
that has peaks and valleys where different possible ways of being are realized. And, one thing to notice is that there are
many peaks, very likely. There are probably many ways to be more or
less equivalently happy. There are probably many ways to organize a
human community that could be quite distinct, but nonetheless, allow for the same kind of
human flourishing. Now, why isn't this a problem? Why doesn't this erode any sort of objectivity
here? Well -- and again, there are many different peaks, obviously not just two -- think
of how we think about food. No one would ever be tempted to argue that
there's one right food to eat. But there is a right or wrong answer to the
question of is this healthy food. There's a real distinction between food and
poison. There are many, many things we can eat that
are healthy to eat, that are appropriately called food. There are exceptions here. Some people are allergic to peanuts and will
die if they eat peanuts. But we can understand all of this within a
rational discussion about chemistry and human biology. The fact that the
set of all things that count as food is still essentially open-ended never tempts anyone to
say there are no right and wrong answers to questions of human nutrition. So too, you can throw out an analogy
to a game like chess. It bothers people that certain moral precepts
admit of exceptions. So you take a precept like "Don't lie." Is "Don't lie" a good moral precept? It's right most of the time, say. But there are exceptions. And people take the fact of the exceptions
to suggest, "Well then there is no real objective morality regarding lying." Well, don't lose your queen is a good precept
to take in chess. If you want to play winning chess, it's something
to keep in mind certainly most of the time. But obviously there are exceptions. There are moments where losing your queen
is the only good move, or a brilliant move. Chess is a domain of absolute objectivity. We could in principle if not in practice diagram
every possible chess game. And it is true to say that a move is a good
move or a bad move in chess. Which brings us to moments of moral diversity. We are in a world where we must confront different
answers to questions of morality. Not everyone sees that don't lose your queen
is a good principle in this particular chess game. So you have someone like Sayyid Qutb, every
jihadist's favorite philosopher -- certainly Osama bin Laden's favorite philosopher. He lived in the United States for six months
in 1949, in Greeley, Colorado, and formed a lasting impression of American culture. He wrote that "the American girl is well acquainted
with her body's seductive capacity. She knows it lies in the face, and in expressive
eyes and thirsty lips. She knows seductiveness lies in the round
breasts, the full buttocks and in the shapely thighs, sleek legs. She shows all of this and does not hide it." I mean it seems to me never before have we
had one man's sexual frustrations so obviously informing his philosophy. And he is reported to have died a virgin. In any case, looking at images of the time,
we can feel his pain. But, this is the genius that has given us
this present instance of moral diversity. And this is to take one variable among many: what to do with women's sexuality -- the great
problem of female sexuality. This is one answer to that question. This is obviously reasonably common throughout
the Muslim world. This is an instance in Iraq among Shiites. When you think of morality in terms of human
well being, when you think of values in terms of human well being, you can ask yourself,
what are the chances that this represents a peak on the moral landscape? What are the chances that this is a good way
to maximize human flourishing? Notice this is not what we do in Western academic
circles and intellectual circles at the moment. I can assure you that if you go to a scientific
conference, and you say something derogatory about this, you have staked out a very edgy
position from the point of view of secular western intellectual life. You have made a very controversial statement. It is widely believed, as far as I can tell
universally believed, in academic circles, that while we may not like this, while we might
want to say this is wrong in Boston or Palo Alto, who are we to say that the proud denizens
of an ancient culture can't force their wives and daughters to live in cloth bags? Who are we to say it's wrong to beat them,
or throw battery acid in their faces or kill them if they decline the privilege of living
like this? I mean I can't tell you what sort of bizarre
collisions I've had in academic circles saying something derogatory about life under the
Taliban. In any case, to notice rather obviously
that this is not a way to maximize human well being is not to say that we in our own culture
have struck the perfect balance. That's not entailed. This is what it's like to go to a newsstand
these days. For some of the guys in the room, it
might require a degree in philosophy to figure out exactly what's wrong with this. [Laughter] But happily I have one. In any case, have we struck the perfect
balance in our society? Is this the perfect expression of psychological
health with respect to the variable of female sexuality and youth and beauty? Perhaps not. Ok, there's a continuum here, again with respect
to only one variable where maybe we can find a place on this spectrum that represents greater
balance, where little girls and little boys can grow up sort of less confounded by the
prospect of becoming sexual adults. My point is, clearly the left is not the right
answer. I mean, you ask yourself questions like: is
compulsory veiling a good way to raise confident and contented women? Does it raise more compassionate men? Does it improve the relationships between
boys and their mothers or girls and their fathers? I think any reasonable person, not confounded
by religious dogmatism would say very likely not. But we know that our moral intuitions are
actually not infallible. We know that they're prone to illusions. And I'll give you one instance of a very clear
moral illusion. And this is why we need a scientific approach
to morality. We need to get behind our moral illusions. So if I asked you how much you would help
-- how much money you would give to a child in need (this is based on the work of Paul
Slovic, who ran this experiment), people will give something near the limit
of their generosity. If I asked you how much compassion you feel,
you will express, based on self-report, something near the limit of your compassion. If I asked you how much you would give to
help another child in need -- now the girl's brother -- again, you'd give the same amount, and you
would self-report the same level of compassion. But if I asked you, in another circumstance,
how much you would give to help the two children together, both your
self-report of compassion and your material generosity diminish by about 20 to 25%. Now this is clearly non-normative. And if you care about a little girl, and you
care about her brother, you should care at least as much about the two of them. Your altruism should in some sense be additive. And it's not, it's actually quite the opposite. And the more you add, the more altruism diminishes. So that when you add enough, it just goes
to the floor. And this accounts for what Slovic has termed
'genocide neglect' which is something we're all familiar with. It's very difficult to care about a genocide. Genocides are boring for some reason. I mean the biggest problems in human life
when you hear that a hundred thousand or two hundred thousand or a million people were
hacked to death with machetes in Rwanda, that barely makes the news. And yet, on the left, you may not be familiar
with this image -- it's about 20 years old -- but when baby Jessica fell down a well, her
rescue dominated the news 24 hours a day. It was like four days of pulling her out of
this well. There was not a person who owned a television
who was watching anything else. So, this is literally an instance where a
cat stuck in a tree can trump the needless suffering and death of millions, based on
how we allocate our attentional resources. And so we're clearly not well equipped
to pay attention to the problems that we know actually affect most lives in the greatest
ways. And so we need to find some way of getting
behind our failures of intuition. And this, again, does not
erode the objectivity of the moral space because we have very obvious failures in intuition
elsewhere. I mean this is just a visual illusion which
should work for most of you. But if you are normally sighted, you will
almost certainly see the tower on the right as leaning further to the right than the
tower on the left. But these are identical photos. It's a visual illusion. The existence of visual illusions allows us
to understand something about our visual system. The existence of moral illusions I would argue
should allow us to understand something about our judgments of value. Clearly when it's important to see the world
correctly, we manage to work around the limits of our visual system. Clearly we need to find some way of legislating
and developing social and political mechanisms that enshrine our better judgment and our
real moral wisdom, and leave us no longer vulnerable to our moment-to-moment failures
of moral intuition. And so in closing, I would just say that it
seems to me that one of the critical things we need to do now as a species really is come
up with a way of talking about the most important questions in human life, talking about the
space of morality and human values in a way that truly transcends culture. So that just as there's no such thing as Christian
physics and Muslim algebra, there's no such thing as Christian and Muslim morality. We're just talking about human flourishing,
and all the variables that influence it. Thank you very much. [Applause] [pause] >>Jed: If we have questions, I'm sure Sam would
be happy to answer them. >>Sam: Do you want me to call on people, or? >>Jed: Sure. >> Hey, thanks so much for coming out. I thought you did a really good job showing
how we can use a more scientific method to maximize a value function, like with your moral landscape. However, I think you completely dodged the
issue of where you get that value function
the most people is the value function. And I think, I mean I'm not disagreeing with
that, but I think that you've dodged the philosophical underpinning that… >>Sam: Right right. >> that it seemed like you were going to address. >>Sam: Well it's -- there are many wrinkles
there as you suggest. It's not obvious how you aggregate happiness
claims, and whose opinion trumps others'. How do you compare one person's headache,
or the headaches of a million people, to the broken arms of five people, say? There are mysteries there as to what
trumps what. There's also, when you talk about population
ethics, there are real problems in how you add up utility functions. I don't know if you're familiar with the work of Derek Parfit, the
philosopher, but he's done some very brilliant work on
just the paradoxes that trying to aggregate utility coughs up. For instance, if you're gonna talk about just
positive experience -- if just more positive experience is what you want
to privilege -- then you should prefer a world of trillions of beings
that have lives that are only barely worth living to a world of seven billion of us
living in perfect ecstasy, because there's just more positivity net. So that clearly doesn't work. So then people want to say, well, maybe
you want average happiness. You want to raise the average. Well if all you want to do is raise the average,
then you should kill all the unhappy people tonight. And maybe kill everyone except the one happiest
person, and then you will have raised the average, and there's just one happy person
left. Clearly average is not the right metric. So there are issues here.
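(A toy calculation, offered as an editorial illustration with made-up numbers rather than anything from the talk, shows both failure modes: a huge population of barely-positive lives can beat a smaller flourishing one on total well being, and simply culling the unhappy raises the average.)

```python
# World A: trillions of lives barely worth living; World B: seven billion flourishing lives.
# (population, well being per person on an arbitrary scale -- purely illustrative numbers)
world_a = (3_000_000_000_000, 1.0)
world_b = (7_000_000_000, 100.0)

def total_well_being(world):
    population, per_capita = world
    return population * per_capita

print(total_well_being(world_a) > total_well_being(world_b))  # True: totals favor World A

# Averaging has the opposite pathology: removing unhappy people raises the average,
# even though no one's life actually improved.
scores = [95, 60, 40, 5, 2]
average_all = sum(scores) / len(scores)              # 40.4
survivors = [s for s in scores if s > 50]
average_culled = sum(survivors) / len(survivors)     # 77.5
print(average_culled > average_all)                  # True
```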
My point, however -- and this probably speaks to the core of your question -- is that we
and answers in principle. Just because there may not always be clear
ways to resolve these issues in practice doesn't mean there aren't right answers. If I asked you how many people on earth were
bitten by a mosquito in the last 60 seconds, it is obvious we don't have the answer to
that question, and we will never have the answer to that question. It's also obvious the question is well posed
and has a simple numerical answer. So there's a difference between there
being no facts to be known and the facts just being hard to know. >> I think I -- maybe I mis-phrased it. I think the core point you missed is you tried
to put -- posit this as an alternative to values based on a religion and morals based
on it where… >>Sam: Right >> the value system is handed down from on
high and these things are good and these things are bad. However, you seem to just posit that maximum
happiness, if that were measurable, is the value we should be seeking to achieve. >>Sam: Well, one thing I said. >> And what's your -- what -- philosophically
if you're trying to establish morality without a higher being, what is your basis for saying
that the value function is maximum -- like in the game of chess. >>Sam: Right. >> We can solve the game of chess given that
you don't want to be checkmated. >>Sam: Right. >> However, if you don't have that goal, like
that goal has to be assumed. So you have to assume that happiness is good
for people, or is the good that should be achieved. >>Sam: Well no, I think that's a good
question. That actually speaks to this notion of a naturalistic
fallacy. G. E. Moore, this philosopher, gave us this
idea of a naturalistic fallacy. He said that whenever you attempt to find
good in the world as a kind of natural property, it's always open to this further question,
well is that really good? So what you're saying to me is I want to maximize
human happiness. There's a way to look at
that and stand outside and say, "But is maximizing human happiness really good?" And that's called
Moore's open question argument. I think the moment you see that, when you
unpack what is actually being said there, what that doubt actually means, I think it's
clear that this move doesn't work for well being, that it doesn't
work for the well being of conscious creatures. 'Cause what you're asking, if I say maximizing
well being is the basis for good and you say, "But is that really good?", what you're really
asking is: is that instance of well being obstructive of some deeper well being that
you don't know about. And so my value function is truly open-ended. The challenge is not to -- I mean well being
is like health. It's a loose concept that is nonetheless an
indispensable concept. >> [Inaudible] >>Sam: No, I'm saying that whatever well being
is altogether. Ok let's say there are frontiers of well being
we haven't discovered, as I think there are. >> You're assuming though. >>Sam: I'm assuming that there's nothing >> [Inaudible] >>Sam: Well, one thing I'm noticing is that
anyone who says they have an alternate version of value, their version is always parasitic
on some notion of well being anyway. So the jihadist who blows himself up in a
crowd of infidels, right? That seems to be like the ultimate repudiation
of my way of thinking and what's worse for your well being than strapping on the vest
and blowing yourself up? But when you look at what he's doing, he has
a story, and his story is he's gonna wind up in paradise for all eternity. He's gonna get seventy of his relatives in
there. He's gonna further the Islamification of the
earth. All of these reduce to notions of the good
and the notions of maximizing well being. Getting to paradise and getting your family
there too and helping the right religion spread on the face of the earth is the ultimate way
of trying to safeguard human well being. >> [Inaudible] >>Sam: Well, I've never encountered
an intelligible alternative. And if you're gonna say, 'Well listen, here's
the -- I've got a black box here which has the alternative, right.' This is a version of value that has nothing
to do with the effect on any possible conscious creature. OK, it has nothing to do with changes in state,
in consciousness now or in the future. But this is the real version of value. It seems to me you have by definition a version
of value that can't be of interest to anyone. >> [Inaudible] >>Sam: I mean, no -- anything that is conscious
can only be interested in actual or possible changes in consciousness, for itself
or for someone or something else. And if you're gonna say, well I've got this
thing over here that doesn't show up in any of that space actually or possibly. It seems to me that's probably the least interesting
thing in the world, because it can't possibly affect anything that anyone can possibly notice. So the moment you notice it, it's consciousness,
and it's changes. And all I'm saying is, what I've done is I
haven't answered the questions of ethics. I'm not claiming to have said, ok here's what's
right and wrong. I'm just saying here's the direction in which
we can have a truly open-ended conversation where we discover frontiers of human flourishing. And not just human flourishing -- the flourishing
of anything that can flourish. So if you guys build a computer that is conscious,
or that we think is conscious, all of a sudden we have an ethical conversation about how
we should treat our computers. If our computers suffer when we turn them
off, then we have an ethical obligation with respect to our computers. I don't think anyone is expecting that anytime
soon. In any case, all the hard work is still to
be done. It's just that, it seems to me, the moment you notice you're talking about
well being, then very different things start to happen. Then you can't argue that gay marriage is
the most important thing we should be talking about in the political space unless you have
an argument that gay marriage really is gonna create immense suffering. No one has that argument. Everyone just says this is wrong, God doesn't
like it so we're gonna burn 90% of our political oxygen talking about that. >>Jed: I think there's a question in the back. >> So yeah, so I guess just to kind of take
a slightly different perspective on this, on the question that was just asked. >>Sam: Yep. >> So the thing about science that's kind
of interesting is it excels in the descriptive. Mainly because, in most sciences -- the
hard physical sciences that you think of -- the prime fitting function, the fitness function,
or the, you know, optimizing function is basically kind of impartial. In most cases it's nature itself, right? So all you're trying to do is to discover
what's already there, and you have a really good way of testing, which is to test against
the data that you collect from nature… >>Sam: Right. >> And you see if you're right or wrong. It's a naturally given sort of
judge of whether you did it wrong or whether you're off base. A similar thing can be seen with a lot of games
like chess, right? Where you have a set of rules that's well
defined and whether you made the right move or not, or whether you won or not, it's not
really up to anybody's judgment, it's… >>Sam: Right >> as long as you agree to those rules, that's
there. Same thing with mathematics, as long as you
agree on the axioms up front of exactly what kind of a system you're playing with, then
everything is well defined from there on out. Then -- you know, unless of course there are issues
with the axioms themselves, with Gödel and stuff. But you know, once you set things up, that
system is self contained and… >>Sam: Right. >> enclosed, right? The problem with ethics it seems to me is
that -- and I think this gets to what he was saying earlier -- there's
no single, universally agreed-upon function, an ethics function to maximize,
right? So the function itself is sort of an open
question. That's up for grabs, right? >>Sam: Right >> So we'll never really know, or maybe there
is a way to know, but what's to say that if you say OK, well science is going to figure
this out. We're going to say that some notion of well
being that will kind of tease out in the future. What's to say that's any better, or it's not
a religion that's comparable to Christianity or Islam or anything else. Where instead of setting up a god, you set
up a well being function, right? >>Sam: Ok, well. >>And that's the thing that you worship. At the end of the day you're not really, you're
kind of putting up a science, but it's not really fully scientific in the same kind of
a sense. You're using scientific methods, but the principle
on which you're actually trying to operate the machinery of scientific inquiry itself
could be based on a more dogmatic or religious kind of ambiguity. >>Sam: Right, OK, that's a very well-expressed
concern. I think you have again fallen into this double
standard that I was trying to expose, based on intuitions that morality and well
being and value are different from the rest of scientific fact. So again, the only thing I can do is try to
nudge you with analogies and, for instance, I just brought up the analogy to human health. Notice you don't have this intuition about
human health. You wouldn't say, well, who's to say what human
health is? There's probably your health and my health;
they may be completely incommensurable. Cancer is cancer. It's cancer here, and it's cancer in the highlands
of New Guinea, and it's cancer whether or not people have heard of cancer. Now, it's not that culturally contingent
conceptions of health don't affect our experience of being sick. Having the word "cancer" mean
so much in this society affects people when they get cancer in a way that's probably
non-normative. In any case, there's a biology of human health
that we are trying to discover. Granted, it's not like chess, the goal isn't
predefined. So it's open ended, and in fact if we can
meddle with our biology in ways that completely transform the possibilities of physical health,
then physical health is truly undefined. If someone like Aubrey de Grey is right, the
biogerontologist who thinks that aging is just an engineering problem that
admits of a full solution. If he's right, then we should be able to live
indefinitely. And therefore our current conception of health
is more or less something like this: if you're 80 years old and can walk around without much
pain, you're healthy. If you can expect to be 80 and walk around
without pain, you're healthy. Well, if Aubrey de Grey is right, we should
be able to jog a marathon at age 1,000. And it's a completely different conception
of health, and yet each one of those conceptions is still objective. We're
still talking about a space of right and wrong answers and scientific understandings of causality. And what I'm saying to you is: forget
about words like morality and ethics, and just talk about psychological and social flourishing. These are facts about -- to speak of humans
only now. They're facts about the human brain. Whatever I do, whatever happens to me affects
my brain. I can only affect you by affecting your brain
in terms of your experience. And there is, this is all fantastically complicated
and culture is involved, but culture again is being run on our brains and affecting our
brains and being instantiated only at the level of the brain, if it's showing up at all. So a maturing science of the mind,
a science that can really describe how positive and negative changes happen in
human experience, will of necessity discover right and wrong answers about what is good and
what is bad. Anyway, I hope that addresses the point, it's
not -- this is the subject of my next book, and obviously I can't deal with
each little wrinkle in the space of an hour, but I hope to over the course of 300 pages. >> My question is… >>Sam: Wait… >> My question now. You're getting variations on the same question
over and over again -- a lot of variations on the
same question. It seems to me you did a really good job of
demonstrating that it would be very wonderful if we could have a universal morality that
we all agreed on. What I didn't see is that you did the same
level of job of convincing me that there is a universal morality that you can put forth
that you can get us all to agree on. For example take the pictures of women you
had. When you've done this at academic conferences,
you've wound up with heated discussions, and I'm guessing you didn't wind up convincing
the other people who disagreed with you that your sense of what was right was the way that
they should be looking at the world. >>Sam: Right. >>Now, speaking personally, when I look at
my own morality, I truly believe I have a built-in ability to perceive and react to
morality. And when I ask why, and I look into the research
that exists on it, there's research on the biological underpinnings, from evolutionary
theory, as to why I would have that. And indeed, you can find all sorts of research
on altruism, on kin selection -- you can find all sorts of reasons… >>Sam: Right, right. >> why we have been built to have these responses. But the same research that suggests causes
for a biological underpinning that causes me to feel moral feelings also explains why
there should be xenophobia between different social groups. Scientifically, from an evolutionary
perspective, there's no difference between being able to feel morality and feeling xenophobia towards
outsiders. >>Sam: Right. >> They're explained by the same phenomenon. >>Sam: Okay. >> So what I don't see is how you're able to
get past that. I don't see that you're able to produce a morality
that people are going to buy into. >>Sam: OK, great question. Two points I want to pick up on. First, you have to distinguish between -- I'm
not arguing that morality is based on evolution. I'm not arguing that our current notion of
human well being and flourishing and all of our thinking about that is in any way tied
to Darwinian principles, because clearly most of what we care about now is not. If you're giving your children eyeglasses
or wearing sunscreen, you're not disposed
made for you. And there are many things that
have been selected for, like out-group violence and xenophobia, that clearly are a main obstacle
to human flourishing at this moment. Rape could have paid dividends
to our ancestors as a good strategy for getting your genes into the next generation. No one's going to argue that rape is therefore
morally necessary. So there's this separation between evolution
and intelligent discussion about human flourishing. And evolution simply can't see what we care
about because we haven't evolved to have conversations like this. We haven't evolved to perfect democracies,
we haven't evolved to build safer airplanes. We've flown the perch that evolution has built
for us. And we have as you say these hard wired judgments
that inform our moral life. People find certain things disgusting. And that kind of disgust circuitry plays into
their moral judgments. The question is given what we are, how do
we maximize human well being? And that question I'm arguing subsumes all
the talk we should be engaging in around right and wrong and good and evil. Now I'll tell you -- your opening
question reminded me of the kind of thing I do get at a scientific conference. For instance, talking about compulsory veiling,
I said, very much like I've said here, that we know that compulsory veiling is not a way of maximizing
human well being. Someone else at the conference, another presenter
at the conference said, "That's just your opinion. How can you prove that?" I said, well I think morality reduces to human
well being and this is obviously not a way of maximizing human well being. She said, "Well it's just your opinion." I said, "Well let's make it easier. Let's say we found a culture that was just
removing the eyeballs of every third child, right. Would you then admit that we had found a culture
that was not maximizing human well being?" And she said, "Well it would depend on why
they were doing it." Now understand this is a person who has a
PhD in biology and a PhD in philosophy and whose area of expertise is on the forensic
use of science and all the ethical issues involved. And she had just given a talk on how troubled
she was that we might be using lie detection technology on captured terrorists because
this would be a violation of cognitive liberty because she had very fine grained moral intuitions
about what is wrong to do to people, right? And I'm asking: if we found a culture that
was removing the eyeballs of children, it would depend on why they were doing it? I said, "Well, let's say they were doing it
for religious reasons. They have a scripture which says, 'Every third
child should walk in darkness,' or some such nonsense." She said, "Well then you could never say that
they're wrong." This is not -- that is not a minority view
in science and academia at this moment. It seems to me we are hamstrung by this
politically correct dogma which suggests that we have to pretend to know so little
about human well being. We have to pretend to know so little about
how people flourish that we can say absolutely nothing in the face of moral diversity. That all we can do is just take everyone's
word for it: this is one flavor of morality, this is another flavor. They're all equally viable, they're all equally
dignified. And there's just no way we're ever gonna fuse
our cognitive horizons or our moral horizons on this subject. Whereas we're aspiring to fuse it on every
other subject. We talk about human psychology and genetics
and physics and then it's transcultural and transnational and there's just one space to
talk. But if you're going to talk about human well
being, we know nothing and we'll never know anything in principle. That, it seems to me, is, one, just
profoundly unlikely to be true given that it's all happening at the level of the brain. And two, it's a recipe for just the continued
shattering of our world. >> That's not what I'm trying to say. I agree with you -- just not at the level. However there's lots of arguments that there
are many people like that. >>Sam: Right. >>You're not going to create the breaking
of universal ground. >>Sam: Well we don't have to convince every
-- first, I know we're out of time, but this is okay. This is great, by the way. I don't take my adversarial stance as anything
other than appreciation, because I love this. [pause] It seems to me that we overstate the lack
of consensus because there actually is a lot of consensus on our most important moral intuitions. I mean you take something like -- so there's
two things. One is, consensus doesn't really matter ultimately
when you're talking about truth. It's possible for everyone to be wrong, it's
possible for one person to be right and to never be recognized. This is true everywhere. It's true in physics, it's true in information
processing and it's gotta be true in questions of human well being. But the truth is we have a consensus in morality
that we don't have elsewhere. So if I walk out on the street and I ask people,
"Do you think the passage of time varies with velocity?" Time slows down the faster you go. Well that's just special relativity, but most
people aren't gonna believe that, right? If I go out there and say, "Do you think human
beings and lobsters have a common ancestor?" That's just evolution. But we know how many people don't believe
that, and on a good day, 25% of our neighbors believe that. If I go out there and say, "Do you think it's
good to be kind to strangers? Do you think it's good to tell the truth most
of the time? Do you think it's good to be kind to children?" Those are massively well-subscribed belief
systems. So what I would argue is that our core moral
intuitions, what moves us, is actually quite similar from culture to culture. And the
real challenge is that people are using the same morality, by
and large -- things like veiling notwithstanding. They're using the same kind of morality, but
they've trimmed down their moral circle based on some us and them ideology. So that there's an in group and there's an
out group. There's an in group that they care about to
which their morality applies. So it's good to be kind to these children,
but we can kill these other children because they're not really people. And so this is how you get the mystery of
how under the Third Reich, you have perfectly normal people willing to gas and kill unlimited
numbers of other people in their day job. And they go home and they still love their
children, and they love their pets and they listen to Wagner and they have normal lives. I mean there's no way that all the people
who were involved in the Third Reich were psychopaths. This is the horror of that particular instance. And it's the horror of any time you see mass
numbers of people victimizing other people. But what allows that to happen is this sense
that certain people are not people. So we have to expand the circle. What we don't have to do is somehow convince
people of a radically different morality. Modulo a few things like what to do about
women's sexuality and whether women should be taught to read. There are cultures that are pretty -- have
some strange ideas about how to best set up their societies. But, for the most part, nobody thinks that
being terrible to your in group is what constitutes a moral life. So we need to broaden the in group. >> So a question that I would have. Well first of all a statement. And that is it seems that you share some common
ground about maybe some of the dangers of moral relativism with the current Catholic
pope. I don't know if anyone's suggested that to
you before. >>Sam: Well no, but I actually suggested that
early on. The irony is that the people who -- the only
people who agree with me and think there are right answers to moral questions are the religious
demagogues who think they have those answers because they got them from a voice in a whirlwind. I mean that's the -- and so I would argue
they're right for the wrong reason. >> So I mean, I guess a question is: if we, if the goal of your talk and maybe the goal of I guess your mission is really -- it seems that it's something along the lines of: is it possible to get people more engaged with having, like, an active kind of moral, I guess, inner dialogue? And then externalize that and have it with
others. And that's going to be at least some portion
of what you want to do. >>Sam: Yeah, well I want to break down the
illusion that there is actually no dialogue to have that can lead anywhere worth going. And that's -- we have this sense that we just
have to respect and tolerate difference, radical difference here. In fact we have to tolerate intolerance. We have to tolerate violent intolerance. So you know, if there's a cartoon controversy, there are riots. People rioting by the hundreds of thousands over
the earth. People are dying. The cartoonist Kurt Westergaard -- I don't know if you know this -- in Denmark. The guy who drew the most provocative of the cartoons -- well, not provocative at all, it was a guy with a bomb-shaped turban. But, you know, he's being hunted in his own
country. A guy showed up in his living room with an
ax the other night. And literally every person with the name Kurt Westergaard in Denmark -- there are like 87 of them -- now needs round-the-clock protection. We are very patient with this kind of differing conception of what constitutes morality. There are these anti-blasphemy laws trying
to make their way through the UN at the moment. So it's going to be illegal -- someone like me could get a knock on the door for the unpleasant things he says about religion. It seems to me that the moment we start talking about how human beings really flourish, we can cut through a lot of this. The moment we're no longer confused
by words like morality and we just simply talk about psychological health, social health,
physical health et cetera. >> Yeah, I had a question. I recently read a book by Edward O. Wilson
-- 'Consilience', where he made the argument that science needs to encroach on the areas of the arts and some other areas of the curriculum, and that the scientific approach can be applied
to those things. And I'm wondering if the inevitable belief
or goal is that science will basically be intrinsic to every avenue of human life. Be it something like you were talking about
now, like ethics, to something that today exists completely devoid of or outside science. Will that inevitably be rolled under the blanket
of science? And will we have answers, you know, for the things that we don't think we can answer right now, through science? >>Sam: Yeah, yeah, well, it's not that we're
always gonna consult the scientists or scan our brains to decide whether we want
to go get some ice cream. The desire for ice cream is something that's
happening at the level of the brain. We can understand it with greater precision
at the level of the brain. Even if we understood it perfectly, that's
still not going to change our experience necessarily of having ice cream or wanting ice cream or
getting ice cream. So, practically speaking, at the level of how we live and fall in love and what attracts our attention in any moment -- a complete scientific understanding of all those processes, should it be available: isn't it going to just keep us looking at
brain scans and keep us out of living our ordinary lives? That's not the way science works. I think there are interesting ways in which
understanding things scientifically could change our actual experience. So, for instance, you know if [sigh]. I mean we have this, we use words like "love"
say in many different contexts. And we say, "I love my wife, I love my child,
I love my dog, I potentially love other people who I haven't met yet, I love ice cream." We have different shades of meaning there. Now, if we understood all of these -- if I
could put you in the perfect brain scanner and interrogate your brain as you went through
all those different experiences, we might find that some of those experiences are quite
similar or quite different and line up in ways with other experiences that would be
rather counterintuitive. And so maybe we're using the wrong word. And maybe we're actually not noticing what
we really are feeling in certain circumstances. And this is not a brain scan analogy, but
for instance people use words like embarrassment and humiliation almost interchangeably. But there's been some work done on embarrassment and humiliation, these social emotions, and it turns out there's a difference between
embarrassment and humiliation. If I told you a story about a prior embarrassment
of mine right now, it's gonna be a story that's very likely gonna make you laugh. If I tell you a story where I was humiliated,
genuinely humiliated, you might laugh, but you're gonna be uncomfortable and you're gonna
be ready to change the subject. Now that's a shade of difference. This is a way in which our social emotions
kind of translate into discursive inter-subjective space. The point is, this is something you might
not know anything about, but it's still true. So you could have gone through your whole
life telling embarrassing stories, and telling humiliating stories and sort of being vaguely
aware of a difference in your audience, and in you, but thinking the embarrassment and
humiliation were the same thing. And so, I think we could become much better
observers of our inner lives with new concepts. And that doesn't mean that science somehow
is gonna overwhelm every other kind of talk, where we'll talk in terms of
neurotransmitters and not use words like love and happiness. >> I was thinking more of like a ubiquitous
accomplice to some of these other understandings. Science just always having some input in these
other areas. >>Sam: Yeah, I'm a fan of the notion of Consilience. I don't think that book has necessarily aged
so well in every regard. But the notion that there are actually no
real boundaries between knowledge domains, I agree with. I think the boundaries are there by virtue
of bookkeeping and university architecture and just shortages of time. I mean the fact that you can't -- it takes
too much time to specialize in any one area, let alone every area. And knowledge is doubling every three years
or five years in the sciences. So if you knew everything today, three to
five years from now you'd know exactly half of everything. So, that aside, I think there is just one space
of facts to know and ways to talk about them and beliefs to have about them. Yeah? >> Does anyone on VC have any questions, anyone
left? No. All right, great, well thanks a lot, Sam. >>Sam: Yeah, thank you. [Applause]