PROFESSOR: OK. So I want to start out by
finishing off the discussion that we began last class about
ways of thinking about the perplexity that the trolley
case gives rise to. And you'll remember that the
perplexity that the trolley case gives rise to is that
there's an apparent asymmetry in our responses to the
bystander case and the fat man case, even though both of them
seem arguably to involve killing one in order
to save five. And we looked, last class, at
Judy Thomson's response, which says, look, there's no asymmetry
in the two cases, because when we reflect on the
additional hypothetical case where there's a third track on
which you, yourself, are standing, we come to recognize
that it's not morally acceptable to turn in Bystander,
just as it's not morally acceptable to
push in Fat Man. At the other extreme, we
looked at Josh Greene's response, which was that just as
it's morally acceptable to turn the trolley in Bystander,
it's morally acceptable to push the man in Fat Man. And to the extent that we're
getting differential responses in those cases, says Greene,
it's due to the fact that the emotional part of our brain
response mechanism is activated by the up close and
personal nature of the fat man case, and as a result, we give
an answer that he thinks remains morally unjustified. And what we started to think
about at the end of last lecture was a third possibility,
which lies somewhere in between the Thomson
and the Greene, though closer to the Greene. And that's Cass Sunstein's
argument that though our responses differ, and perhaps
differ in ways that will be impossible for us to change,
the cases are the same, deep down. And he's inclined, though not as
certain as Josh Greene is, to think that if we want the
cases to come together, what we ought to do is to
push the fat man. And you'll recall that
his argument there proceeded as follows. He suggested that in non-moral cases, it's uncontroversial that we make
use of heuristics, and that those heuristics, though useful,
frequently lead us to errors, and then went on to
contend that just as this occurs in non-moral cases,
so too does it occur in moral cases. And we left at the end of last
class thinking about what goes on in Sunstein's argument that in moral cases people often use heuristics. And you'll recall that he gave
a couple of examples from Jonathan Haidt's work of cases
where people were expressing moral disapprobation toward
actions for which they could find no justification. So consensual incest between
siblings, cleaning your bathroom floor with the American
flag: People were inclined to find those morally
problematic, and to find them morally problematic even when,
if pushed, they were unable to articulate what moral rule
those things violated. And what Sunstein suggests in
the paper is that in general, we can look at the heuristics
and biases literature and see instance after instance where
the framing of a case affects our response to it, in moral cases just as in non-moral ones. So you'll recall back in the
third lecture, right when we were learning how to use our
clickers, which I should tell you, we're going to use a bit in
this lecture, so you should take out your clickers. When we were first starting to
learn our clickers, we were presented with the famous Asian
disease case, which is the case that runs as follows. A terrible disease has
struck 600 people in your town, right? So there's 600 people in your
town who are destined to die. You are the mayor, and two
courses of treatment are available, plan A or plan B. And I asked half of you to look
at the green side of the description, which says that
plan A is the one where 200 people will live, whereas plan
B is one where there's a one-third probability that 600
will live, and a two-thirds probability that no
one will live. And the other half of you looked
at the exact same plans, but described not in terms of who will live, but in terms of who will die. So in the green framing, plan A says 200 of the 600 people will live, which means 400 will die. And in the blue framing, plan A says 400 of the 600 people will die, which means 200 of the people will live. Nonetheless, and this was our
very first clicker response, the attitudes that you had
towards the cases differed, whereas when it was presented
as the number of people who will live, 66% of you went with
plan A, and only 34% with plan B. When we inverted the
framing, the numbers came out exactly the opposite. So 66 of you favored plan A in the green case, and 64 of you favored plan B in the blue case. But plan A and plan B are mathematically identical.
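Just to make that equivalence concrete, here is a minimal sketch of the arithmetic, using only the figures from the case as stated (600 people, 200 saved for certain under plan A, a one-third chance of saving all 600 under plan B):

```python
# Asian disease case: the same two plans, read under both framings.
total = 600

# Plan A: 200 people live with certainty (equivalently, 400 die).
plan_a_expected_survivors = 200

# Plan B: one-third probability that all 600 live, two-thirds that no one does
# (equivalently, one-third that no one dies, two-thirds that all 600 die).
plan_b_expected_survivors = (1 / 3) * total + (2 / 3) * 0

# Both plans come to 200 expected survivors and 400 expected deaths,
# however the outcomes are described.
print(plan_a_expected_survivors, total - plan_a_expected_survivors)
print(plan_b_expected_survivors, total - plan_b_expected_survivors)
```

Whichever framing you read, each plan picks out exactly the same distribution of outcomes; only the description changes.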
So perhaps something like this is what's going on in the trolley cases. And this is not a real clicker
example, but imagine you were presented with the
following case. A terrible trolley is hurtling
down the tracks towards six people in your town. You are the mayor, and two
courses for the trolley are available for you, plan
A and plan B. And then I present you plan A,
one person will be spared, which means, of course, that
five people will die. Or plan B, that one person
will die, which means, of course, that five people
will be spared. And there's an inclination, I
think, to go with plan B in the blue case and plan A in the green case. And this generalizes. Depending on who we're focusing
on in these moral dilemmas, we have different
responses to them. If we think about Josh Greene's
crying baby case, where you're locked in a
basement with 19 others and your crying baby, surrounded
by enemy soldiers who will kill you if you are found, the
dilemma that Greene presents subjects with is, should you
smother the baby, whose cries will call the soldiers with
certainty to your hiding location and cause them
to kill all 20 of you? So very much like the Jim and
the Indians case, but with an even more painful premise. If you focus your attention in that case on the experience of putting your hand over the mouth
of your screaming child, it is virtually impossible to
judge that as the thing that is morally required. But if you redirect your
attention even a tiny bit towards the two year old next to
you, and the four year old next to her, and the old man
in the other corner of the room, all of whom will die if
you don't take this action towards the baby, your response
to the case shifts. And the shiftiness in the
direction of our attention is something that's going to
be endemic to all of these kinds of cases. To some extent, we're able only
to focus on part of the world at a time. And as a result of that, it's incredibly difficult to hold these kinds of moral dilemmas in focus in a way that makes them seem stable. So Sunstein's suggestion is that
this phenomenon, whereby features that have got to
be morally irrelevant-- right? It can't be morally relevant to
what's the right thing to do in the trolley case whether
you frame it in terms of the number who will live or the
number who will die. At least, prima facie, that
doesn't seem like the kind of thing that could be relevant. You're making exactly the same
decision framed in two different ways. How could that be what
makes the difference? Sunstein's suggestion is that
the mechanism that underlies the phenomenon that I've just
described happens over and over and over again, not just
in hypothetical trolley problem-style cases, but all the
time in the kind of moral reasoning that we engage in as
citizens of a democracy, trying to make judgments about
distributions of resources, trying to make judgments about
what sorts of laws should be put in place to regulate
or incentivize certain kinds of behavior. So in each of the following four
domains, says Sunstein, we very often focus on
heuristics, that is, the surface features of the
phenomenon, rather than the target attributes, that is, the
thing that we ultimately care about. Remember I talked last class
about putting a skin on your phone so that it's easily
recognizable, that gives you heuristic access to which
phone is yours. But of course that decoration
on your phone is useful as a way of finding your phone only
in so far as it tracks the target attribute that you care
about, namely, finding the phone which has in it the phone
numbers that you care about having. And when targets and heuristics
come apart, we're in trouble. So, says Sunstein, when we're
thinking about risk regulation, that is, what do
we do with the fact that as human beings, lots and lots
of the stuff we do has the potential for causing harm, but
we don't want to spend our lives wrapped in large amounts
of Styrofoam, moving very slowly through the world so as
not to bump into things. Given that we are willing to
take risks, how is it that our tendency to use heuristics
interacts with our regulation of them? In cases of punishment, and this
is the first topic that we'll turn to after break,
Sunstein thinks we use heuristics in ways that
cause us to behave in counterproductive ways in
punishing both individuals and aggregates. In our hesitation to make
certain kinds of choices in the area of reproductive
medicine, thinks Sunstein, we risk mistaking the heuristics
for the target. And in taking the act-omission
distinction so seriously, we risk mistaking heuristics
for targets. So we'll turn to the issue of
punishment right after break, and we'll turn to the issue of
act-omission in the later part of the lecture. What I want to do right now is
to run through three examples of risk regulation via
Sunstein's analysis. And the third of these-- I'm actually really curious, and
so I want to see how the clicker numbers come out. So Sunstein points out, and it
seems to me that he's exactly right, that people are more
likely to condemn a company when its behavior is described
in ways that involve certainty than in ways
that involve risk. So take company A, which
produces a product that 10 million people use, which
will kill 10 people. Of the 10 million people who
make use of this product, 10 of them will have a reaction
to it of a kind that will cause them to die. And the cost of eliminating that
risk entirely would be $100 million. There is a feeling, an
inclination, at least, on the part of many, to think that the
company ought to spend its money getting rid of that risk;
that it's unacceptable to produce a product when 10
people are going to die. By contrast, if you frame
the case in terms of probabilities, that 10 million
people use the product, that it produces a risk of death of
one per million, and the risk elimination is exactly as
costly, this is the sort of thing that we allow
all the time. Without this sort of risk
tolerance, there would be no technological innovation, and
most of the goods and resources that all of us have
come to take for granted would never have come to be. So Sunstein's contention here
is that the target attributes are identical in the two cases: in the first case, 10 people are going to die, and saving them would cost $100 million; in the second case, 10 people are going to die, and saving them would cost $100 million. In both cases, 10 people die, and saving them would cost the amount specified. The heuristic attributes differ: the first is framed in terms of certainty, the second in terms of risk. And we have a very good
heuristic that goes like this. If 10 people are going to
die from what you're doing, don't do it. And Sunstein's contention is
that the asymmetry in our response to these cases
is irrational. Indeed, if we lifted the second one to a risk of two deaths per million, and kept the first one at a certainty of 10, people would still be inclined to condemn the first choice, even though in that case, the second choice is clearly the worse one.
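To put rough numbers on that, here is a back-of-the-envelope sketch using the figures from the example; the assumption that the riskier product still has 10 million users is mine, since the example doesn't restate it:

```python
# Company A's product, as described under the two framings.
users = 10_000_000
cost_to_eliminate_risk = 100_000_000  # dollars, the same in both framings

deaths_certainty_framing = 10             # "the product will kill 10 people"
deaths_risk_framing = users / 1_000_000   # "a risk of death of one per million"

print(deaths_certainty_framing, deaths_risk_framing)      # 10 and 10.0: the same target attribute
print(cost_to_eliminate_risk / deaths_certainty_framing)  # $10,000,000 per life saved, either way

# The modified comparison: two deaths per million (assuming the same
# 10 million users) is worse than a certain 10 deaths.
print(2 * users / 1_000_000)  # 20.0 expected deaths, versus 10
```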
So as a result of mistaking the heuristic attributes for the target ones, we make mistakes in what sorts of behaviors we permit. Sunstein thinks that this is
what's going on in the case of emissions trading -- cap and
trade -- of which he was an early advocate. In the model of emissions
trading, polluters are given a license to pollute n units of
pollution into the air, and those licenses then get to be
traded on the market in such a way that, arguably, there's less
pollution at lower cost. Let's grant Sunstein the
economics there. Even so, there is resistance
to cap and trade. Because even if we're willing
to concede that the target attribute-- namely, that we've reduced
the amount of pollution-- is present, the heuristic
attribute-- "People are paying to pollute? You shouldn't be able to pay
your way out of serious wrongdoing!" -- strikes us as problematic. Now, it's an interesting
phenomenon that resistance to this sort of reasoning happens
depending on the context from both the right and the left. So there is resistance to
commoditization of things from the left, and there is
resistance from the right to certain other sorts of framing. Their responses in cases, for example, of reproductive technologies like cloning, are due, says Sunstein, to the heuristic "don't play God." And when confronted with the
suggestion, you're just using a heuristic there, both sides
respond with hostility to the smarty-pants academic
analysis. In the 1970s, it was common for
advocates of the buildup of nuclear arsenals to make appeal
to a notion called "mutually assured destruction"
that we'll talk about when we talk about the prisoners'
dilemma. The basic idea is that if both
sides have enough weapons to knock the other side out, then
neither will make use of them, because the deterrence function
is too great. There was resistance to that
analysis from the left, because it felt too clever. There is resistance to the
sort of analysis that Sunstein's posing here from both
sides, because it cuts against the idea that we are
introspectively transparent in such a way that our judgments
are indicative of the things that we care about. So the last example that I want
to give you from Sunstein is our poll. Sunstein hypothesizes-- and are your clickers working? Sunstein hypothesizes that we
are more uncomfortable being harmed by things which are meant
to protect us than being harmed by things which aren't
meant to protect us. And he suggests that there is
data showing that if people are given a choice between
two cars-- the first car is one where
there's a 2% chance if you're in an accident that you'll be
killed by the steering wheel. And the second is a car where
there's a 1% chance if you're in an accident, you'll be killed
by the steering wheel, but in addition, there's a 1/10
of 1% chance that the airbag will kill you. And the question is, which
car do you choose? The one where there's a 2%
chance that you'll be killed by the steering wheel, or the
one where there's a 1% chance that you'll be killed by the
steering wheel, but a 10th of a percent chance that you'll be
killed by the airbag, which was meant to protect you. And let's see how those
numbers come out. I have to say, I'm doing this
poll because my intuitions didn't line up with Sunstein's,
and I'm curious whether yours do. OK. Let's see how the numbers
came out. So 15% of you want to buy car A, and 85% of you want to buy car B. So 85% of you are doing what is the statistically rational choice.
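For reference, the arithmetic behind that verdict is simple; this is a quick sketch treating the stated figures as the probability of being killed given that you are in an accident:

```python
# Probability of being killed, given that you are in an accident.
car_a_risk = 0.02             # 2% chance, from the steering wheel
car_b_risk = 0.01 + 0.001     # 1% from the steering wheel plus 0.1% from the airbag

print(car_a_risk, car_b_risk)   # roughly 0.02 versus 0.011
print(car_b_risk < car_a_risk)  # True: car B carries the lower overall risk
```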
But a good proportion of you are willing to risk greater harm so as to avoid this feeling of betrayal by that which is meant to protect. So Sunstein's suggestion, just
to sum up, is that in moral reasoning, frequently, we
substitute heuristic attributes for target ones. And to do so is a mistake. So what do the three responses
to the trolley problem that we've considered suggest? Well, what Thomson
says is this. She says, reconsidering our
intuitions in light of alternative cases, like the
alternative bystander case where you imagine yourself to
be one of the people on the track, reconsidering our
intuitions in light of alternative cases can lead to
shifts in our assessment of those cases. And those shifts in our
responses, she thinks, reveal something morally significant. We can learn from the
contemplation of those specific cases what it is that
morality demands of us. Greene and Sunstein, by
contrast, contend that our intuitive responses to cases
frequently track features that are morally irrelevant, and that
as a consequence, those features fail to reveal
something morally significant. The question is this. Is any of this a problem
for Mill and Kant? Let's look back to the very
opening pages of Mill's treatise on utilitarianism. He writes there, and I didn't
have you read this passage so there's no reason you should
know that he says it. "Though in science, the
particular truths precede the general theory, the contrary
might be expected to be the case with a practical art,
such as morals or legislation." So in science, we look at
particular instances. We discover we drop this object
and it falls with acceleration a, we drop this
object and we discover it falls with acceleration a, we
drop this object and we discover it falls with
acceleration a. And from that, we conclude that
the law governing fall of bodies is that they fall
with acceleration a. So "though in science,
particular truths precede the general theory, the contrary
might be expected to be the case with a practical art,
such as morals... A test of right and wrong must
be the means of ascertaining what is right or wrong... and not a consequence of already
having ascertained it." "The difficulty" (of
building a theory out of judgments) "the difficulty is
not avoided by recourse to" what is sometimes now called the
moral sense --"a natural faculty," says Mill, "that
discerns what's right or wrong in a particular case in hand,
as our other senses discern the sight or sound actually
present." So as if you can see whether a case is
morally wrong. "Rather," he says, "moral
reasoning, moral understanding, is a branch
of our reason, not of our sensitive faculty. The morality of an individual
action... is a question of the application
of the law to an individual case... As a result, whatever
steadfastness or constancy our moral belief has attained is due
to the tacit influence of this reflectively available
standard." So Mill is building theory
out of theory, not theory out of cases. Kant. "Worse service cannot be
rendered to morality than that an attempt be made to derive it from examples. For every example of morality must itself first be judged according to principles of
morality in order to see whether it is fit to
serve as a model." We have here, in some
ways, embodied the dialogue of this course. To what extent is our capacity
for rational reflection the best way to get at answers to
questions that we care about? To what extent is our capacity
for emotional response, for sensation, for instinctive
judgment on the basis of presentation of particular
cases, indicative of answers to the questions
we care about? So that closes the discussion
of the trolley cases. And what I want to do in the
second half of lecture is run through two kinds of puzzles
which persist regardless of which of those attitudes
that we take. So the first is something that
I presented you with as a promissory note in the very
first lecture, because this is one of the most fun papers that
we're reading all term. And this is Roy Sorensen's paper
with Boorse on ducking and sacrificing. And you'll remember, that was
the weekend that the congresswoman from Arizona had been shot, so I
couldn't do it with bullets. So you'll remember that the
case I gave you is, you're standing in a line. You're the yellow guy. And a bear is rushing
towards you. And you jump out of the line,
and the bear eats the person behind you. Contrast that with the
case where you're standing in a line. You're still the yellow guy. A bear is rushing towards you,
and you reach behind you, pick up the guy, and put him
in front of you, and the bear eats him. The first of these is
the classic what's called ducking case. That is, you're in a situation
where there's a harm moving in your direction. You move out of the harm's way,
and the harm hits someone else instead. The second is a classic
sacrificing case. There's a harm moving towards
you, and you make you use of another person as a shield. So to duck is to avoid harm,
thereby allowing it to fall on someone else. To sacrifice is to avoid harm
by bringing it about that the harm comes to someone else; you use that person as a shield. And this is analogous to the
act-omission distinction, one that we've already looked at,
but it's wholly within the realm of acts. Now what Sorensen and Boorse
bring out in their article is how resilient this phenomenon
is, regardless of how you mess around with the framing
of the case. So they give you the example
of the mall gunman. There's a bullet coming towards
you, and your choice is to leap aside or to pull
somebody in front of you as a way of avoiding the bullet. There's the speeding
truck case. You're in a row of cars. There's a truck coming up behind
you in such a way that it's going to crash into you. And you have one of two
things that you do. In the first, you switch lanes,
and the truck hits the car that was in front of you. In the second, you signal to
a car that's behind you to switch into your lane, and
the truck hits him. There's the terrorist case. You're on an airplane. Libyan terrorists-- quite timely to be speaking
about Libya-- Libyan terrorists, in this
example, come onto your airplane and threaten to
kill all Americans. You're a U.S. State Department
representative, and on your briefcase is a U.S. State
Department sticker, and the terrorists are coming down
the aisle, and they're about to shoot you. Two possibilities. One, you cover your sticker with
a Libyan airline sticker, so they skip you and go and
shoot the woman sitting next to you, the next one in line. The other, you switch briefcases
with the person next to you, and so they shoot
her instead of shooting you. Or the sinking boats case. You're in the ocean. Your boat is sinking, and the boat of the guy next to you is sinking. You're trying to signal to an
airplane above you to come and pick you up. And you can do one
of two things. You can strengthen your
signal, right? Make your light really strong,
and then the airplane will come and rescue you. Or you can jam the signal of
the other guy, making your signal relatively stronger so
that the airplane comes and picks you up. Sorensen gives case after
case about this. If beetles are eating your
roses, it's OK to put beetle repellent on your roses, which
will cause them to go over to your neighbor's house, but
it's not OK to put beetle attractant on his roses. We have this strange tendency,
over and over and over, to think that ducking is OK and
that shielding is not. Now the perplexity that Sorensen
and Boorse consider is that it seems like there's
no systematic way to account for these kinds of discrepancies
in intuition. So you might think, look. The problem with these cases is
that when you tie up your opponent's feet, when you're
trying to outrun the bear, or when you push him in front of
you in the shooting cases, you interfere with fair
competition. And that fair competition is
what matters in these sorts of circumstances. But of course, there are plenty
of these circumstances where the competition was
unfair to begin with. And nonetheless, it
seems problematic. Even if the guy whom you're
trying to outrun the bear in front of is a much slower runner
than you, so that you were certain to win, it still
doesn't seem OK to tie his shoes together. The fairness of the competition
doesn't seem to be what's driving the intuition. So perhaps, they say, it's that
in each of the shoving cases, what you do is somehow
an included wrong. It's wrong to pick somebody
up and carry them in front of you. Whereas, it's OK just
to duck down so that something hits them. It's wrong to steal somebody's
briefcase. It's wrong to jam somebody's
signal. But, they point out, it seems
just as bad to put the person in front of you in a friendly
way by saying, "wouldn't you like to see the beautiful view?"
as it does to pick him up and put him in
front of you. It's just as problematic, they
suggest, to scare somebody into jumping off a cliff by
yelling, "E equals mc squared!" to surprise them, as
it is to cause them to jump off the cliff by yelling
a racial epithet. The included wrong doesn't seem
to be what's explaining our response. So, too, and I'll leave you to
read these responses on your own if you haven't had the
chance already, so, too does the act-omission distinction
or the doing-allowing distinction not seem sufficient
to do the work. So, too, does the idea that what
matters is if you were the locus of a causal chain,
the originator of some sequence of causality. So, too, does the doctrine of
double effect not seem to account for all of
these cases. So, too, does appeal to Kant's
notion of rights in contrast to utilities not seem to explain
all of these cases. So Sorensen and Boorse somewhat
reluctantly consider a conclusion of skepticism. Which is roughly, this is a
perplexing feature of our psychology. But we, having listened to the
first half of this lecture, have one more alternative
explanation. And I don't promise that it
will work in every case, though it seems pretty
promising. Which is that what's going on
in the ducking and shielding cases is the overapplication
of a heuristic. In general, it does seem like
moving out of the way of a harm is not a bad thing to do,
whereas putting somebody into the track of a harm is
a bad thing to do. So perhaps this first set of
puzzles can be explained by means of heuristics. In the last fifteen minutes of
lecture, I want to focus on a set of puzzles which,
I think, can't. And for these, you'll
need your clickers. So let's start with
four drivers. The first of them, Lucky Alert, does the following. He gets into his car. He has his car in perfect
condition. He pays attention
at every light. He drives in an extremely
safe way. And at the end of the day,
gets home from work. That's it. That's Lucky Alert. Question. When Lucky Alert drives home,
setting aside whether he has his mistress in his car with
him, setting aside whether he's bought a car that has a
high rate of emissions as opposed to buying a Prius, in
driving home, setting aside all the other things that Alert
might have done morally wrong, did he do something
morally blameworthy, driving home from work, having fully
fixed his car, and doing no harm to anyone along the way? So this is not a
trick question. So if you think Lucky Alert
did something morally blameworthy, setting aside all
the things that are morally blameworthy about driving
a car, push one. Whereas if you think he didn't
do anything morally blameworthy, push two. So what you're judging is, is
driving home from work, all things considered, if nothing
bad happens, a morally problematic thing to do? And let's hope-- OK. So there's always that 5%. Those anti-car crowds. You're the ones going to med
school and chopping up our poor healthy guy in
the waiting room. 95% of you think Lucky Alert
did nothing morally blameworthy. Let's meet Lucky Alert's twin
brother, Unlucky Alert. Here's what Unlucky Alert did. Exactly what Lucky Alert did. Except as he neared his house,
a child ran out in front of his car and he hit the child. OK? Unlucky Alert did exactly
what Lucky Alert did. Left work, checked his tires,
stayed alert the entire time, drove at safe and
proper speeds. But due to bad luck, on his
way home killed a child. Question. Did Unlucky Alert do something
morally blameworthy? If yes, push one. If no, push two. And I'm going to write down the
numbers on the first case, which were 5 and 95. OK. So let's see how the numbers
come out on this. Here, 81% of you think he didn't
do something morally blameworthy, but we're up from
5% to 19% on people who think he did do something morally
blameworthy. Let's turn to our third case. Here's Mr. Lucky Cell Phone. Here's what Mr. Lucky
Cell Phone does. He gets into his car and starts
driving home from work. And on his way home from work,
he talks on his cell phone, but you know what, nothing
else happens. And he gets home from work
having harmed no one. Question. Did Lucky Cell Phone do
something morally blameworthy in driving home from work
talking on his cell phone? And let's see how these
numbers come out. OK. So your verdict here. 78% of you think Lucky Cell
Phone did something-- you guys, I don't believe you. I mean, you're anticipating
the next case! All of you talk on your cell
phones when you drive all of the time, and you don't think of
yourself as doing something morally blameworthy! OK. These are not valid data. This has to do with where they
are embedded in this experiment. All right. So since you've already answered
question four, let me just ask it of you. Unlucky Cell Phone drives home
from work while talking on his cell phone. Child runs out in front
of his car, and-- OK. Question. During his drive home, did
Unlucky Cell Phone do something morally blameworthy? And let's see how the
numbers come out. All right. Let's see where Unlucky
Cell Phone's big red line comes out. OK. So now we've got a complete
shift from the original one, and in fact different from
our previous case. OK. So what these examples
demonstrate is a phenomenon known as moral luck. We have two people here, Lucky
Alert and Unlucky Alert, who do exactly the same thing, but
Unlucky Alert's actions caused the death of an innocent
victim. And whereas only 5% of you
think Lucky Alert did something wrong, 19% of you
think Unlucky Alert did something wrong. Here we have, in similar
fashion, somebody who in a very slight way has taken a
risk, which in one case had no bad consequences and in the other case had very severe bad consequences. And 92% of you condemn
Unlucky Cell Phone. The phenomenon that this
illustrates is a phenomenon known as moral luck. Cases where an agent is assigned
moral blame for an action or its consequences, even
though the agent didn't have full control over that
action or its consequences. Right? It's not the case that Unlucky
Alert or Unlucky Cell Phone wanted the child to run out
in front of his car. It's not the case that Unlucky
Alert or Unlucky Cell Phone could have done anything
different at that moment. The child was in front
of the car, and the car hit the child. Moral luck is perplexing because
we seem to have two competing commitments when
we think about moral responsibility. On the one hand, we seem to
accept something which we might call the control
principle: That moral praise and blame shouldn't be assigned
in cases where the action or the consequences lie
beyond the agent's control. And I can see that many of you
subscribe to the control principle, because 81% of you
thought that the Unlucky Alert driver did nothing morally
wrong, even though he killed a child with his car. And the reason you're inclined
to think that he did nothing wrong in that case, I suspect,
is because your judgment in that case, as Mill said, is
regulated by a principle to which you tacitly subscribe. Namely, something like the
control principle. It's intuitively plausible, says
Nagel, that people can't be morally assessed for what's
not their fault, or for what's due to factors beyond
their control. If you bump into me, and I trip,
and I accidentally fall on the red button that causes
the nuclear war to start all over the planet, it's
not my fault. I mean, it's really a terribly
bad thing that the planet is destroyed, but I just tripped. By contrast, and directly in
competition with the control principle, it seems, as the
moral luck principle states, that in some cases, moral praise
and blame should be assigned even where the action or consequences lie beyond the agent's control. The difference in your responses between the lucky and the unlucky cases indicates
the degree to which you tacitly subscribe to that. So you went from 95% of no
blameworthiness to 81%. So 15% of you shifted your view
as a result of something beyond his control. In the cell phone case, again,
roughly 15% of you shifted your view. The problem is that both
of these principles are incredibly difficult
to let go of. The control principle
relies on the following kind of reasoning. In general, we have a pretty
good sense of what kind of factors increase the blame- or
praiseworthiness of an action. In general, if an act is
voluntary, that is, if you've done it not out of coercion
and not out of mistake-- right? If you specifically chose to
perform the action that you performed-- then you get more praise for
doing it if it was a good action, and more blame
for doing it if it was a bad action. Likewise, if you had full
information, if you were aware of its likely consequences, you
knew that that was water, or you knew that that was
cyanide that you were giving the person to drink, it
increases the degree of praise or blameworthiness. And these are pretty robust
responses that fall out not merely of our analysis of cases,
but also out of our understanding of the principles
that seem to underlie moral responsibility. Correspondingly, it seems like
the absence of those features decreases moral blameworthiness
or praiseworthiness. If you do something under
coercion, if you do something accidentally, it's less
your responsibility. And if you do something out of
lack of information, if I, fully thinking that I'm giving
you something totally healthy, end up giving you something
that harms you, we tend to think that the degree of
blameworthiness is mitigated. The control principle simply
says that without a difference in these factors, how could
there be a difference in blame- or praiseworthiness? If we hold these factors
constant, we must be in a situation where there's
no difference in moral responsibility. By contrast, the moral
luck principle is also really forceful. It seems undeniable that there
are cases where we assess moral praise and blame in
the absence of control. The driver case was one. One of the cell phone users
hits the child, the other one doesn't. The first is morally
blameworthy. I leave the stove on in my
house, or your house. I go to visit you, and I leave
the stove on in your house. I go out for the day. When I'm unlucky, it causes
your house to burn down. When I'm lucky, it doesn't. It seems, even if you think both
were bad things to do, a much worse thing to leave on the
stove and burn down your house than to leave on the
stove, simpliciter. Nagel gives the example of
leaving the bath running with the baby in it. An irresponsible thing to do,
but immeasurably more problematic when the baby
drowns as a result. Or the case where you and I
have similar characters. I stay in Germany, you don't. It's the 1930s. I become a Nazi, you live your
life in a way that makes no moral demands of you. So there's three kinds of
responses that we can give to moral luck cases. We can give a rationalist
response. We can say, luck simply can't
play a role in moral evaluation. And we can either take the
extreme view that, one might think, is a purely Kantian take: that all
the agent is responsible for is his will, and those
things over which he has full control. Or you can take what might be
an extreme Millian version: that the agent is responsible for all the consequences of his action, and that his attitude makes no difference. You can take an irrationalist
attitude towards this. You can say that luck can play
some role in moral evaluation. Or, though I think this is
ultimately difficult to maintain, you can say that as
a matter of fact, we never know how responsible somebody is
for an action until we see what its consequences are. That when I hypothesized that
these cases were identical, I was idealizing in an
illegitimate way. Now, it seems like that third
response might work for the classic cases of moral luck
which I've been describing, cases which we would call
resultant luck, where there's luck in the outcome
of the action. I perform an action, and it
happens to go awry in a way that I didn't expect. That's one class of cases
of moral luck. But it's harder to see how we
can use that explanation for some of the deep and
profound instances. So take constitutive luck. Some of you were born with genes
that make it easier for you to behave in altruistic
ways, and some of you weren't. Some of you were raised in
families that were supportive of certain kinds of
moral outlook, and some of you weren't. Is your character that resulted
from those features something for which you are
responsible, and if it's not, how is it something with respect
to which moral praise and blame can be assessed? Take circumstantial luck, which
Jonathan Shay discussed in Achilles in Vietnam -- luck regarding the agent's
surroundings. Sometimes the circumstances
you're in either create or reveal otherwise hidden features
of your character. Does that mean, since they are
in part a matter of luck, that you are not thereby morally
responsible for what you did? Finally, if we start thinking
about our actions from the perspective of free will, it
becomes hard to carve out any space in which we're responsible
for what we do. It's a general fact about the
world that actions and consequences are in general
determined partly by features outside the agent, or at least outside
the control of the agent. So we start thinking about why
it is that we respond that way in Trolley, and it turns out
it's because the emotional part of our brain
is lighting up. But why is that happening? Well, that's happening because
of blood flow, happening in a certain way in our brain. And why is that happening? Well, the blood is flowing in a
certain way because of what certain kinds of molecules
are doing. And as we think this through,
the area of genuine agency, says Nagel, seems to shrink
to an extensionless point. So I leave you for March break
with the following perplexing non-solution to a really
profound moral problem. Nagel suggests that the
problem of luck has no solution because something
in the idea of conceiving ourselves as agents is
incompatible with the undeniable fact that
actions are events, and people are things. "As the external determinants
of what someone has done are gradually exposed in their
effects on consequences, character, and choice itself, it
becomes gradually clear to us that actions are indeed
events, and that people are indeed things. As a result of this, nothing
remains which can be ascribed to the responsible self, and
we're left with nothing but a portion of the larger sequence
of events which can be deplored or celebrated, but
not praised or blamed." Nonetheless, giving up the
language of praise and blame is to remove from our conceptual
repertoire what is perhaps the most important tool
that we have. And coming to a stable perspective
on these matters seems enormously difficult. So I'll see you all at
the end of vacation.