[MUSIC] First of all, I just wanna say welcome. This is the Inamori International Center
for Ethics and Excellence, which you are currently seated in. And it's a fairly new
center here on campus. I'm the director,
my name is Shannon French. And I came here after a rather
different experience, 11 years teaching Ethics at
the US Naval Academy at Annapolis. So this is a change of pace for me. I am a philosopher by trade,
that's what my degree is in. Got my PhD in philosophy from
Brown University back in 19 [SOUND]. [LAUGH] And
since then focused my work on ethics, hence ending up here
in front of you today. Although my actual research field is
military ethics, that's my specialty, as you might have guessed. But I do dabble in many other fields, and did my original work
in just core metaethics. So some of the stuff we're gonna talk
about today is old school, and hopefully you'll enjoy that, and it'll give you a bit of background if you haven't had it before. What we're gonna do here today is
hopefully going to be fairly interactive. This center was created with the idea
that it would inspire people to want to be ethical leaders. So if you think about that leadership
piece of it, we're trying to get you to not only think of yourself, as we all are, as a work in progress, as a being that's trying to intentionally develop its own character. But we also wanna have you think of yourselves as leaders that can help influence the character and
actions of others around you. So as we go through today's workshop,
kind of wear those two hats in your mind: think about what this has to do with you as an ethical being, and what role could you
play as an ethical leader? The other piece that is part of our
mission or our vision at the center is that we believe strongly that ethical
discourse needs to be multi-voiced. We need to bring in many perspectives, and we need to bring in
people who are academics, practitioners, stakeholders
of various kinds. And bring them all to the table and
make sure that they all do participate and give those perspectives. So I mentioned this word once already, I'm
gonna say it again, interactive [LAUGH]. We will not be able to achieve
what I wanna achieve here today if you all are not responsive. So hopefully, when we have discussions
you can share your voice and let me know what you're thinking so
that we can all learn from each other. So I'm gonna go ahead and
start the ball rolling here. First slide is a helpful
one just to orient us. Ha, okay, [LAUGH] the schedule. We are going to try to keep to schedule,
because I did promise you food. [LAUGH] And
we don't want to keep you from that. So right now, we're doing the welcome and
introductions. But our first section is going
to be looking at some really traditional moral theories and
giving you that context. So that when we get later to more of the
applied side of ethics you have something to apply. So we're gonna start with that. And we've got some videos thrown in
there to make things interesting. Then we have our first break. After that break,
we're going to be talking about character. And how you find the motivation,
the internal drive, to actually do the right thing once
you've identified what that is. Again, we'll have some more video clips. Then we get that lunch
[LAUGH] that I promised you. And then after lunch is when we are going
to get a bit more specific about some research ethics issues and challenge
you to think with that role in mind. Another break, and then finally we are
going to break you into smaller groups and have you do some discussions on your
own on a series of seven case studies. So we'll have seven groups,
each looking at a particular case study. We have some discussion questions for
you. And then we're gonna close out by
bringing the group back together one last time to go over those discussions and what you got out of them. And then just kind of have a final Q&A. And see if there's any other loose
threads that we wanna try to tie together at the end. So that's the plan folks. We'll see if we can keep it rolling here. Oops, there. So [LAUGH] I thought I'd start
with the really big picture slide. This is, how do you get the outcome,
how do you get ethical behavior? Well, the important vision
that I want you to have when you think about ethics, is that
it's kind of a head and heart thing. You're not going to get all
the way to actual ethical behavior just by taking a class. I think probably you
would've guessed that. I can sit here and talk about Kant,
and Aristotle, and all those guys all day long. It's not necessarily going to
make anyone who listens to me be ethical even if they stay awake. But moral reasoning is
important nevertheless. You do have to have some
grounding in a theory. You do have to have some understanding of
the process that you're using to get to an ethical conclusion. Now, people talk about going with their
gut, and there's nothing wrong with that. But you wanna understand really
what's behind that gut reaction. If you have a response,
we like to kind of fancy it up and call it a moral intuition
instead of going with your gut. But if you have a moral intuition
that something is wrong, or something is necessary,
or morally obligatory, we want to try to get behind that,
kind of unpack it a little bit. And understand where is
that instinct coming from? Can we draw a principle out of that, one that you can apply in another case,
that might help you out when you're facing something that is more uncertain,
where your gut isn't sure which way to go. So, the moral theory piece, which is
where we're gonna start the day, is hopefully somewhat familiar, again in this instinctual sense, but with some new material that you might find useful. Tools, if you will. But then, as we move on after
our first break to character, this is really the piece
that you cannot do without. Because of you don't have
the character to do the right thing, again knowing the right
thing is really irrelevant. Think about all of the big scandals and
crises that have been splashed across the news
that have to do with ethical failures. And ask yourself, how many of them do
you think, come down to the people involved actually not knowing
that their actions were wrong? [LAUGH] What do you all think? Do you think that a lot of them come down
to they really didn't know that that was wrong? Enron, sex scandals,
all these things, I just, I didn't reason correctly,
I thought it was fine. What do y'all think? Not really?
[LAUGH] So there's that character piece. It's pretty important. Knowing the right thing from the wrong thing is sometimes very complicated. But a lot of times you can get that far,
but the tricky bit is, okay, I know what's right and wrong,
how do I get myself to do the right thing? So we have to look at the character piece. We have to talk about
what are the influences, what are the pressures that might
lead an otherwise good person to go off the rails and do something
that they know darn well is wrong? Or the other side of that is what are the
really dangerous, and I would use that word, dangerous character flaws that
people might have that you interact with? People you interact with might have these
flaws that might negatively influence you and lead you to do something that
is normally not in your character? So you're identifying both flaws that
might happen in otherwise good people and quite frankly trying to recognize bad
people and know how they might distract you from trying to stay closer to those
values on the upper corner there. So moral reasoning plus character
gets you ethical behavior. If you reason incorrectly,
you're gonna do the wrong thing. And if you don't have the character, you're not gonna have the strength
to do the right thing. So you need them both. Now, big picture stuff. Understanding ethics, if you wanna boil
it down, it really comes down to a kind of tug of war between principles and consequences. Ethics is an ancient study; it goes back certainly to the ancient world. Every religion struggles with ethics. How to lead a good life? What makes a life worth living? What is worth dying for? All the big questions. But in the end, if you kind of whittle all that down, you do come to this fundamental question, and you'll find people on both sides of this issue. Is ethical reasoning, finding out what is the right thing to do, a matter of determining key principles, rules if you will, where once you know what those rules are, you stick to them and you're fine? Or is it about consequences, where you
really have to look at the context, you really have to look at
the outcomes that are gonna happen based on what you do, and make sure
that those outcomes are positive, that they are going to bring about
something that you can live with. So as you look at these two choices,
consequentialist moral reasoning, which has a very strong following,
it's most often thought of as a particular school
of philosophy called utilitarianism. And it's the idea that,
and this will I think, ring a lot of bells but,
greatest good for the greatest number. That's utilitarian reasoning. How do I determine what
the right thing to do is? I take into account all the stakeholders
anyone affected by the action, and I determine, well,
if I have two choices, choice A, how does this affect those people, choice
B, how does that affect those people. And I choose the one that
brings about the greatest good. What possible concerns do any of you have
about consequentialism as described there? Yes, sir?
>> There's no way to know all the players. >> No way to know all the players, and
that is such an important point because it's true both in the moment,
and across time. In a particular moment, you may not be
able to discern how many people, and which people are going to truly
be affected by your action. Who are you truly
influencing by this action? But if you look across time,
it gets even worse. I can make a decision that in
the short term is wonderful for everyone immediately affected. But down the line is going to
cause a great deal of harm to maybe much larger groups of people. So that is a concern. Any other concerns? Yeah, right there? >> [INAUDIBLE]
>> Yeah, I wondered about that. I just blithely threw that out there. The greatest good for
the greatest number, [SOUND]. [LAUGH] Yeah. Well, yes, this is a huge
problem because defining good goes back to the core of philosophical and
religious disputes since the dawn of time. So if people don't agree on what is good, then you either have someone who
is dictating that and saying, this is the greatest good for the greatest
number and if you don't agree shut up. [LAUGH] Or you have many different
voices debating what the good is, and then they can't take any action at all. And so trying to mediate that and
come to at least some kind of moderate stance on what the good
might be is no small thing. I mentioned these two fellows up here,
Jeremy Bentham and John Stuart Mill, very important dead white guys. Jeremy Bentham tried to answer this what-is-good question, and he thought, with all the arrogance of philosophers, that his answer was going to be so obvious that everyone would say, right, we got it. Now, you know [LAUGH] what? Why were we ever struggling with that? And his answer was that what is good is what minimizes suffering, and
what is bad is what causes more suffering. Well, on the face of it,
that's not horrible. It's not a terrible definition as things
go, but as even he started to unpack it, he ran into some problems. He ended up creating this incredibly complicated thing called the utilitarian calculus, cuz we
needed more calculus in the [LAUGH] world. And basically what he was trying to
understand is how do you map against one another different kinds
of suffering or pleasures and say which is better than the other. And so he would have things like, well,
you want to measure them on a scale of duration, how much joy or happiness
does this bring versus how much pain? And does it last over a period of time? You'd have intensity as an issue. You would have questions about whether
this good thing produced other good things, which he called fecundity, an old
fashioned word basically means fertility. All these different factors. You had seven or eight of these
different factors to try to balance, and try to come up with actual mathematical-type formulas to sort this out, and what do you know, it was very unwieldy. And you could just imagine some poor
person trying to make an actual decision in life like what do I
do right here in this moment and having to get out a scratch pad and
[LAUGH] work on it. It was completely unwieldy, and it
avoided another key issue, which is also behind your point, that our fellow John
Stuart Mill raised when he came along. He liked Bentham's basic approach, but he saw this problem, too. And John Stuart Mill basically asked the question, which kind of good is better? Is it better to be a great philosopher
like Socrates agonizing over some critical question of how human life should
be, or what is truth, or what is beauty, or what is justice? To be agonized, to be really wrestling with it, getting a headache and not feeling great? Is that better than being a pig
happily wallowing in a mud pool? Which is better, and
how do I ever judge that? And if I'm making ethical decisions for
a nation, or any large organization, for any group of people, do I maximize their cruder pleasures? Do I let them eat cake, bread and circuses? Or do I try to maximize their intellect and
say, okay, I know you all won't all like it but you're now gonna
listen to opera or something like that. How do I figure that out? Personally, I cannot stand opera but many people [LAUGH] would say
it's better than what I like. So which do you choose? This is one of the problems with consequentialist reasoning: if you can't agree on the good,
you can't move forward. However, this theory you will find,
and you'll find it more as we go through the day,
is still one that is very hard to shake. It does come back to a lot
of people's intuitions. When you do ask the kind of core
question of what is ethically right, really a lot of people do gravitate
back to some kind of fuzzy idea of well, it's got to be what brings
about the greatest good. And when you're making
the really hard decisions, especially the ones that involve life or
death, people very often come back to utilitarian
reasoning and they end up saying, well, some people are gonna be harmed,
but we got to go for the greater good. So keep these concerns in
the back of your mind, cuz we're not gonna get
rid of that anytime soon. There were hands over here, did he hit your point or did you have anything you want to add? Okay, all right, so let me give you the contrasting one, and we'll dig a little deeper
into each of these. Principle-based moral reasoning,
now this is the idea that you wanna have a set of
really rock solid principles that do not waver based on the
consequences or the particular context. So if you say never lie,
you mean never, ever, ever lie. And if you say don't murder,
that means all the time, etc., etc. So examples of principle-based
moral reasoning are Kantian ethics, which is named for Immanuel Kant. Another dead white guy actually,
unfortunately. But anyway, he was a German
philosopher who was extraordinarily influential in trying to tease
out the idea of basing all of our ethics in rationality,
in logic and reason. And divine command theory, many different
religions use a principle-based form of moral reasoning that says basically,
do these certain things, regardless of the consequences. Do them though the heavens fall,
because that is what is asked of you. That is what is required of you. Focuses on moral rules and duties. Well, you knew I was gonna do this. Anybody have any concerns about basing all your ethics on that? Yes, ma'am. >> [INAUDIBLE]
>> [LAUGH] Yes, okay, she said a sadist is a masochist
who follows the golden rule. Do unto others as you would
have them do unto you. That's disturbing. So if you have rules, the first piece is that
interpretation of the rule is going to alter how the actual ethical
behavior comes out at the other end and the other problem is the rules themselves. We're back at the same concern we
had with utilitarian reasoning. The rules themselves are going to
have this subjective element of what do we consider to be the correct rules,
what benefits everyone. Kant tried to get around that and
we'll come back to him, because he's a really
important figure in ethics, by trying to make it about, again,
rationality and logic as opposed to any other
grounding that you might use. That might seem more culturally
subjective, but divine command theory is a powerful thing throughout
every corner of the world. We find people basing their
ethics on divine command and they're not gonna agree on
what those commands are. They are not gonna agree on what deities
exist and what they might want from us. So those are some serious problems. Let me formulate it this way for
you for a moment, cuz I want you to think about this
consequences versus principles thing. When you're deciding moral behavior,
do you want to say, in your own lives now, do you want to say
that I'm going to base it on the outcome? And I'm gonna take context into
consideration, or do you wanna say that I'm gonna have certain rules and I'm gonna
stick by them, regardless of what happens? Let me kind of sell each one to you for
a second before you answer. The positive side of
consequentialist reasoning is that it does take
the context into consideration. So a famous letter that someone wrote
to Immanuel Kant challenging his approach said, what if you
are hiding Jews in your attic and a Nazi SS guard comes to your door and
knocks and asks you point blank,
are you hiding Jews in your attic? And according to Kant's logical reasoning, you're not supposed to ever lie and
actually Kant is hard core on that. He says,
you can't even do lies of omission. So you can't say something that hides the
truth by the clever way that you say it. So in that context and
the letter writer raised this with Kant, it seems that he's saying you
have to tell the guard, yes, I have Jews in my attic which
seems, first of all, stupid. And second of all, of course, it seems
just on the face of it utterly immoral. You're leading them right to victims. You're leading other
people to their death. Kant's response was very interesting. He said, you cannot as a moral
being lie to the guard. He held firm on that, you cannot lie. He said, you can say nothing and you can
accept that they may kill you for that. But if you are truly a moral being,
you cannot lie. He was not kidding. [LAUGH] That's how hard he was on these
rules, that if the rule is the rule and you want to be a moral being, then you
follow it, regardless of the consequences. If you're concerned about the well-being
of others due to your actions, the best you can do is try to accept
the bad consequences on your head rather than someone else's. But we can imagine that
scenario playing out. You don't say anything,
the guard kills you and then searches your house and finds
the Jews in the attic and kills them too. So the outcomes are very bad. The letter writer was not
satisfied with Kant's answer, but let's look at the other side. [LAUGH] Consequentialist reasoning. Back to the concerns that you raised. Suppose I say, well, I'm going to try to
base my decision-making on consequences, cuz I don't wanna be this rigid
conscience who says I never ever lie. There are gonna be times when lying
is helpful, where lying is right. There was a silly comedy with Jim Carrey
a while back now called Liar Liar, where he couldn't lie. His son made a wish that
he could never lie. And at one point, he pulls the boy aside
and he basically tries to tell him, you can't live like that. [LAUGH] Adults have to lie to
get through their lives and the example that he gives is
that when his wife was pregnant, she gained a lot of weight and
she would say, do I look fat? And he said, I had to lie. [LAUGH] I had to say, no,
honey, you're glowing. You look great. Well, we can imagine much more
serious consequences at play. The one that I gave, the Kantian example, where you're hiding innocent people and trying to protect them from evil. Well, if the consequence of not lying is that everyone dies, then it seems like
the right thing to do is to lie. You have to have that
bend in your principles. Consequentialist thinking also says
that well, as I'm looking forward and trying to balance things out,
it's not a perfect world. And if I stick to my principles, in
the end, I'm gonna do more harm than good. So I'm going to have to,
as messy and gooey as it is, I'm gonna have to struggle, as Bentham tried to struggle, with some kind of formula in my head, and some kind of attempt to
figure out what's the best in this case. It won't be perfect, but it's better
than just being rigid with the rules. Boy, arguments on both sides. Some deep concerns on both sides. Well, what do you all think? Just look at that question there,
what do you think? And how do you intend to live your life? Anyone, ma'am, you have your? >> [INAUDIBLE]
>> And speak up a little, sorry. [INAUDIBLE] in some cases
where [INAUDIBLE] yeah, develop a set, we would [INAUDIBLE] leader represented that [INAUDIBLE]
actually that people don't agree with, but easily [INAUDIBLE]. You have to stick by those rules,
and then [INAUDIBLE] so it would just start changing that and
people will [INAUDIBLE]. >> So that's really interesting,
because you're actually blending the two. I know you couldn't hear her,
but she was saying that, suppose you were someone who
was the leader of a church. The Pope, for example, it seems that if you
are representing a system that has rules, you really do have to hold fast to those
rules regardless of the consequences, because people are looking
to you as a model for that. And of course, the interesting thing
is in your reasoning, it was kind of consequentialist reasoning, cuz your point was that if he failed to do that, the disillusionment, the way it would pull the rug out from under other people, what that would mean for people would
be so bad that it's more important for him to hold to the rules regardless. So it's interesting,
see how we bounce back and forth between the rules and
the consequences. Rules and consequences,
this is the question of ethics, rules versus consequences. What do you all think, do you focus
on rules or consequences, yes? >> Both. >> Both, a-ha. >> I mean, even the Pope,
the church itself changes over time. [INAUDIBLE] things evolve. >> Yes, there's a beautiful concept,
which I hope is real and that is the idea of moral progress. [LAUGH] We like to think that humans
are figuring a few things out. That if you could imagine that there's a
basket somewhere where we're putting a few things in there that we're like,
we solved that one. And in science we can do this,
we can say, hey, we did figure out that
the earth is not flat. Okay, put that one aside,
gravity, yeah, we got it. Okay, well, we don't really understand it,
but we know it's there, so [LAUGH] a few of these
things we can put aside. But in ethics, are there things like that? Well, I think we hope so, I think we hope that there are some things
where we can say, hey, slavery, bad. Let's just put that in the basket, what
other, did you mention human trafficking? >> [LAUGH]
>> Yeah, no, exactly, [LAUGH] I’m gonna come
back right round to that. What other things? You might say, genocide,
bad, things like that. Now, what you're pointing out is there's a gap, though, between a basic human understanding that these things are bad
and us actually stopping doing them. You know what I mean? Can you say, in all honesty, without just being a politician, can you say that there is a general international agreement that slavery is wrong? Yes, you can, there are international laws
against it, there's a general agreement that, guess what, that's bad,
does that mean nobody does it? No, [LAUGH]. We have human trafficking, sex
trafficking, all kinds of labor slavery, it still goes on in different forms. But people try to hide that it is slavery. Even that shows that they've
at least agreed that gee, if we call it that people
will know it's bad. So is this moral progress,
you gotta wanna see it. [LAUGH] But it is in some sense,
because at least now, and this is different than the past. You don't have people just proudly say,
hey, I just bought six slaves, nobody. They know at least that I might
get a little bit of censure for that if I said that right out. Moral progress, maybe a smidge,
genocide, same thing, people try to avoid getting
conflicts called genocide, right? Because there will be an international
response, is this ideal? No, we end up with these word
games where people say, well,
we might actually have to respond to it. But at least, at least we figured out,
which believe it or not, we don't seem to have known for many, many
centuries that genocide is a bad thing. So that's something [LAUGH]
we know it's a bad thing, we apparently don't know
how to stop it yet. [LAUGH] We don't know how to get
people to go from knowing it's bad to not doing it. It's that first slide I put up there. Moral reasoning has made some progress,
we have figured a few things out. The character piece is not there yet,
even character of nations, character of bodies of people. So we're working on that, but
if there is any moral progress, it does suggest that there needs to
be what you were suggesting here. A kind of a compromise
between these two options, we've got to figure it out,
folks, and it is up to us. We've gotta figure out how do we
balance principles and consequences. I think what we've already seen
just in this discussion is, if you just choose one or
the other, there's a worry there. If you just say, I am for these
principles and come hell or high water, I'm gonna do them and
I don't care about the consequences. Maybe that's not perfect [LAUGH] and there
will be things that will come out of that that will be pretty scary, particularly,
if you have bad principles to start with. On the flip side,
if you're only all about consequences and there are no rules that say, here is
a line that I won't cross no matter what, even if I'm under great pressure. Then you're gonna be
blown with the wind and you're gonna go this way and that way,
and no one can count on you or regard your actions as anything
that would be a guide for others. And certainly when we come back
to that theme of leadership, you need to be able to explain and account for your actions, if you want others to take them as a guide, to be influenced by what you do. So let's dig a little bit deeper into
each of these theories and hopefully, as we do, we can see some
potential ways to reconcile them. And actually make moral
decisions that we can stand by. So consequentialists,
let's dig into this a little bit more. The greatest good for the greatest number
must take into consideration the interests of everyone affected by an action and
think long term. Now, this has many formulations,
I threw in the Star Trek one. Anybody in the room get that? The needs of the many outweigh
the needs of the few or the one. These are both positive, the greatest
good for the greatest number and the needs of the many,
those are positive formulations. We wanna watch out, as we look for this reconciliation, for this one though: the ends justify the means. That's where it starts to get disturbing, that's where it starts to
go down a dangerous path. So if you say, I want to be considering
consequences, okay, good, but if you consider only consequences and you don't care what you do to get
the consequences you're shooting for. That's where I think you're gonna,
again, go off the rails and do some things that are very immoral. So I've already suggested one thing I
like about consequentialist reasoning is that it does consider the context. It does say, hey, okay, this guy at the door wants to kill the people in my attic. This is not a case where I can just
hold the line and say, I never lie, because that will be me putting my
principles ahead of human life, and that doesn't seem right. So I have to see this as a conflict of
duty and when I have a conflict of duty, my principles kind of fail me and
I have to look at consequences. It doesn't mean I don't care about lying, it doesn't mean I'm throwing that
out the window for all time. It's not, well, I lied to the Nazi at the door, so now I'm gonna start lying to everybody. But it says that in some cases,
consequences matter. Something I like about consequentialist
reasoning, it forces you to consider that. Another thing I like is this
very first bullet here. Actually making yourself think about who
might be affected by your actions, we don't [LAUGH] do enough of that. We don't, let's be honest. As we go through our day, do we really think if I do that
that's gonna influence this person? Or that's gonna make this harder for
them, or any little thing. I will give you an extremely trivial
example that happened to me yesterday, just yesterday. I'll also reveal my
very bad eating habits. But anyway [LAUGH] I was
going to get a pizza at a Pizza Hut that happened to have a drive-through. I had called it in and
it was ready and waiting, and I was about to drive through and
pick it up. No big deal, right? So I drive up, there's another car in
front of me at the drive-through window. I'm like, no big deal, okay, I'm waiting. And I'm waiting, and I'm waiting, and the time is ticking by,
[LAUGH] and I'm wondering. So eventually, I stick my head out the
window to the person in the car in front of me and I say, hey, what,
are they just not coming to the window? Did you try beeping? And he said, no, I'm waiting on my pizza. I said, you mean,
you just placed the order? And he said, yeah,
they said they'd be about 14 minutes. I said, I called mine in. Mine's ready to go. And he went, and he just sat there. >> [LAUGH]
>> [LAUGH] And I waited another minute, and I said could you drive around? Then I could just pick up
my pizza [LAUGH] and go. [LAUGH] And he finally got the idea,
and he did. And he drove around so
he could get behind me. And I drove up and
sure enough they had my pizzas. They handed them to me. I drive off. Is this a moral issue? Not really. But it's a wonderful little case
of he was sitting there and it wasn't even in his universe to
think that, hey if I sit here for 14 minutes while they actually
make my pizza [LAUGH], I am preventing other people
from picking anything else up. And there's a whole big parking lot or
a loop I could make. And I could come back around. It wasn't even on his radar to
think of that in that moment. And do I think this was a bad person? No, once he finally clued in,
he's like okay [LAUGH] and he moved on. But that's just an example of how
in a very small little trivial way, we don't have,
here's a term from the military, what they like to call situational
awareness, where you really notice. You take the time to notice how
your actions are affecting others. And most of us are guilty of that,
in varying degrees. We go through our lives, and we've
got the blinkers on, and we don't notice. And I'll tell you this as
a basic ethical principle: a lot of ethics is just that, taking the time to see things as potentially having
an effect on the lives of others. Just that perceptual shift
to see a situation as involving other people and not just
about you is a huge piece of ethics. So I like that about
consequentialist thinking. If you wanna say what are the pros of
it for me, that's something I like. It makes you stop and say,
if I have two choices, which of these two choices is not just better for me,
but is better for everyone affected? I like that about it. So what about you guys? What do you like about it? What worries you about it? And would you want other people
making decisions for you, or about your life using this
kind of reasoning or not? Yes, ma'am.
>> The use of the terms good and bad. Like, there's no set definition, because what I
may think is good, you may think is bad. But who's to say that both of us are wrong,
or both of us are right, or either one, or back and forth? >> So what I'm hearing from you,
I don't know if you could hear her but what she was saying is that it really
does bother her that we've got this vague concept of good floating around there, and
we may not agree on what's good or not. So if I'm making decisions, and I'm saying, well I'm taking your good
into account, but we don't have the same idea of what your good is, then I'm guilty
of something called paternalism, which comes from the root pater, meaning father:
I'm acting like I'm some big, wise father figure, and I can decide for
you what's good for you. And that's a violation of your free will,
that's a violation of your autonomy. So to push back on consequentialists,
I would say, you know, you've got a real concern
there if other people are making decisions about your life
based on consequences. And they're the ones choosing what
counts as a good consequence. There's a cure there for that and
I want you to think about that. There's a cure for
how you could move forward with that. >> Yes ma'am. >> I don't like that there's
no sanctity of a person. Like typically it's better for everyone
to experiment on me like a lab rat and have a cure for everyone? But I don't like that. >> [LAUGH] Yeah, the tricky part of
this quote, the needs of the many outweigh the needs of the few or
the one. As if you're the one. [LAUGH] Because then you sort of say,
wait hang on. [LAUGH] I realized that you've all
thought about this and you decide that the greatest good is to sacrifice me,
but I have something to say about that. [LAUGH] So yes, that's a very big concern
and again, I wanna really emphasize this. People use consequentialist
reasoning more than you think. [LAUGH] And so these worries
you're having, remember them. [LAUGH] You may need to be able to
articulate this to someone because it's a really simple default,
people go to it very easily. And particularly in large institutions, they'll just go well we got to figure out
what's best for the most people, da da da. So maybe these concerns, like hey, hold
on, have we talked about what best means? That's the cure I was talking about. How about dialog? Let's actually have a consensus
view of what good is. Let's take your views into
account before we move forward. That's part of the cure. But here's another piece of the cure. Let's let that person who's the one,
or people who are the one, the minorities in this argument. Let's let them be full players,
fully invested players in whatever we're gonna do next,
and maybe there will be a case where you do have to sacrifice a few to increase
the benefit for a larger number. But wouldn't it be ideal if you
made that a case where people could volunteer for that? Where you gave them something we
like to call informed consent, and gave them the opportunity? Now there's some worries about that. We worry about human nature and
we say, hm, if you did that no one
would ever volunteer. The consequences would never be achieved. But maybe we need to trust
each other a little more. And maybe, more importantly,
if we can't get informed consent, we can't get people to volunteer, maybe we need to re-examine what
it is we're asking them to do. Maybe they're right, and they shouldn't
be sacrificed against their will. Yes.
>> Isn't it that the way modern society works kinda destroys people's ability
to examine the long-term effects of their actions? Like if I was shopping in the grocery
store and I feel like getting food, are we doing the right thing? Like,
really knowing where that food came from,
or how people produce it, or what impact that would be having. >> A very good point. Let me kinda repeat her point for
those of you couldn't hear her. She's saying that, look,
modern living actually makes it harder. Even if you wanna commit to this, it makes
it harder to be a true consequentialist reasoner, because the consequences of
our actions are actively hidden from us by the consumer life that
we have available to us. So as an example that she was sharing
with us, if you go shopping and you're at a grocery store and you think well I
need these foods, I have a young daughter. So, I'm going to be buying her these
foods that I think are healthy, and that all seems very noble and good. Okay, nutrition for my child, great, but am I thinking about, well, how did
these foods get to the grocery store? Was there justice, social justice, violated in the way that
this food got here? Who am I harming by buying this
product versus that product? There's a lot that could
go on behind the scenes. And everything in our modern lives is
kinda constructed to be saying to us don't look there. Don't look at it, don't consider that. This is easy and fast, and
check me out driving through for a pizza. This is blinders again
of a different sort, and it's a lot conspiring against us
being thoughtful moral reasoners. Right here, and then back over here. >> [INAUDIBLE] our societies. We can't necessarily think of
the consequences that far away. I mean, we can only reason
within our sphere of influence, and then hope that other people
will also be doing the same. Otherwise, you're just going
to spend the rest of your life questioning whether everything you do
is affecting some person [INAUDIBLE]. >> Well, that's true, and his point is that it could be
actually just crippling you. You would just end up frozen
if you tried to say well, I can't make a decision unless
I've considered everyone. And you really can't. You don't have access to all that
information even if you want to have it. And if you took the time to get it you
wouldn't have time to do what you need to stay alive. It's just too much. Well, and so John Stuart Mill,
a guy I mentioned before, he did have that concern. And so,
he tried to push it off to governments and say that at least on a higher level we
need people to be thinking like this. We need people to be looking
at the consequences and taking all that into consideration. The trouble is, though,
I think we sometimes forget that governments are made up of people with
their same foibles and weaknesses. And so,
there's no guarantee that they're gonna be looking at it any better
than the rest of us. So if we just say I'm gonna relax and
let various regulatory groups or things like that try to take these
things into consideration, and yeah, I'm gonna be responsible up to a point,
but there's a limit. We're still gonna do
things that harm others. Is that just moral tragedy? Because I'll tell you, there's a lot of
philosophers who want us to acknowledge the fact that we can't ever get
through life without doing harm. That life is messy, and
you can't be paralyzed by the fact that you're gonna do some
harm as you move forward. What you wanna do is just focus on
the things in your sphere of influence, and at least not make it worse. [LAUGH] Not make it worse
than it already is. Yes ma'am? >> I was gonna say, the majority
isn't always necessarily right. Because there are examples in history
where the majority were saying this doesn't necessarily work, and
in all honesty, it didn't. So everyone thought the world was flat,
except for Christopher Columbus, and he was right. >> Yeah. >> So the greatest good for
the greatest number. The greatest number might not
necessarily know what's right. It's what they think is right at the time. But the minority might actually
have a long term plan. >> Well, this is true, and there are so
many examples of this in history where the key bit of knowledge,
the key bit of social justice, the key bit of leadership came from the group
that was not part of that majority voice. And so, if we say that we're going
to sacrifice the few for the many, we're actually ultimately
sacrificing the many. [LAUGH] Because we're not
getting that moral progress. I mean, a classic example, and we're back to this tension between
principles and consequences, is when Martin Luther King was making his
arguments about justice and civil rights, and trying to make people see that this
was larger than any one individual. He appealed to principles. And people, some people, even people who
were not just outwardly resistant and hostile but
were really trying to engage the issue, came back at him with consequences and
said, hey, okay, you're right that in principle we
should have these rights more equal. But if you just ram that down people's
throats, there's gonna be violence, there's gonna be disruption. Let it happen gradually,
let it take its time. It'll percolate through, be patient. And Dr King's point was that
you really can't be patient when you're talking about issues
back to fundamental principles. Because the harm that's going on is
going on right now to many individuals. And more importantly, you're harming those very principles that
in other contexts you say are foundational. That you care about them,
that they're the grounding of your nation. So if you keep letting those principles
be violated with the argument that pushing the issue right now is too messy
and the consequences are going to be too hard to handle, then you're again missing,
even if you are a consequentialist, the long term vision and the long term
harm that will come from letting those principles get slowly
eroded time after time. I know you have more thoughts on this, but I did want to get in a video clip before
we do come to our first break here. Now, this clip is from a film
with Hugh Grant and Gene Hackman. And you'll see in the film Gene Hackman
plays a doctor who is brilliant and has been working for
years on trying to cure paralysis. And he's very close to a cure. However, he is a very strict
consequentialist, as he sees it, and he has decided the fastest way to
get to that cure is to kidnap, back to your point,
to kidnap homeless people off the street to do unauthorized
human subject research on them. Hugh Grant is a doctor who has
uncovered this evil plot, if you will, and he's confronting the Gene Hackman
character about this. And then you're gonna hear
the argument that is made. And we're back to principles
versus consequences. Please work. Happy day, okay. >> I'm 68 years old. I don't have much time. Three years with a rat to get to a dog,
and after five years, if I'm lucky, maybe I can work on a chimp. We have to move faster than that. I'm doing medicine here
no one's ever dreamed of. This is baseline neurochemistry, Guy. >> You're killing people. >> People die every day. For what?
For nothing. Plane crash, train wreck. Bosnia. Pick your tragedy. Sniper in a restaurant,
15 dead, story at 11. What do we do? What do you do? You change the channel,
you move on to the next patient. You take care of the ones
you think you can save. Good doctors do the correct thing. Great doctors have the guts
to do the right thing. Your father had those guts. So do you. Two patients on either side of the room. One, a gold shield cop, the other,
a maniac that pulled a gun on a city bus. Who do you work on first? You knew, Guy. You knew.
If you could cure cancer by killing one person,
wouldn't you have to do that? Wouldn't that be the brave thing to do? One person and cancer's gone tomorrow? You thought you were paralyzed. What would you have done
to be able to walk again? Anything. You said it yourself. Anything. You were like that for, 24 hours. Ellen hasn't walked for 12 years. I can cure her. And everyone like her. The door is open. You can go out there and put a stop
to everything and it'll all be over. Or we can go upstairs and
change medicine forever. It's your call, Guy. >> Okay, here's your question at the top
of the slide: what would you do? Yes ma'am. >> Send research to
the proper medical channels. >> All right, so your argument is that you
want to do the research, but it's gonna have to be slower, because in this case you
can't sacrifice these unwilling patients. And he is actually, as the movie sets
it up as dramatically as possible, he is killing people. He is killing these homeless people that
he's kidnapped in the process of testing things on them. So you're saying,
I don't care if he's brilliant or not. Would you blow the whistle on him? Would you let him keep doing it or
you just wouldn't do it? >> Doctors publish their work so it's not like someone couldn't use
any of that in an ethical way. You just have to get volunteers. Like there are so many paralyzed
people that would be like, me, me. >> [LAUGH]
>> Test on me, I mean, there are ways to do
it without killing people. So the fact that he is killing
people just is unnecessary. >> What about, thought I saw another hand,
blue shirt, yes. >> Yeah, well. I would say I would just ask for
a volunteer to [INAUDIBLE]. >> What if nobody did volunteer? Would you? I mean, there have been
times, not so long ago in history, when people have tested on prisoners,
put out incentives, things like that. It's not too theoretical. Here in the back over here, yes. >> I was gonna ask how many
people would volunteer I mean, but you don't-
>> The prisoners on death row get that option sometimes. >> That's what I was thinking. >> But this is important, because if we're
truly talking about honoring their free will, if we're truly talking about their
autonomy, then you gotta tell them. That's what informed consent is. You have to tell them, I'm gonna try this
stuff on you, and it might kill you. It might kill you in agony. You up for that? And how many people would be? >> You're saying people will. >> Terminal patients who don't
know their options might agree to that sketchy cancer treatment, you know? And that's how a lot of the new
cancer treatments get tested. >> Mm-hm.
>> And some of them work. >> Yes ma'am
>> [INAUDIBLE] but you have while doing this. So you can die in the streets, or
just, you'll live in relative comfort, and
if I die in five days then [INAUDIBLE]. >> Well, and a piece of that, she's
talking about the decision that a homeless person might be offered if you really did
give them the full picture, would be: look, right now you're suffering in this way;
here you would suffer too, but potentially for this greater good, and we would treat
you with dignity while you're there. And we would give you the comforts
that you don't currently have. So there would be a way to do it. Think of that word, dignity. There would be a way to invite people to
volunteer that would show respect for them, which clearly this doctor isn't doing. He is using them merely
as a means to his ends. Here and then back over here. >> Especially when their obvious
rights are being violated, but like when he made the point of if you just
had to kill one person to cure cancer, well how many times has he
said just one more person? And where do you draw the line? Even if we have these people's consent,
I don't think you have to kill people who really don't have
that big a problem in their life. >> Now, this is a very good point, because
one of the many [LAUGH] dangers again with this kind of reasoning is it's very
close to just what we call rationalizing. And you can rationalize quite a lot. And you can say yourself well, I already
moved the line in order to do this. Well, that didn't accomplish my goal, so now I need to nudge
the line a little further. And that didn't do it, so
if the argument stands that I can kill one unwilling person to achieve
this good, well what's five people? What's 50 people,
you know you could keep going. To put in the most extreme case, you know
Hitler was a consequentialist thinker. He thought supposedly that
his actions were going to bring about some great utopia. Utopian thinking is dangerous. [LAUGH] Not only because people don't
agree on what utopia ought to look like, but also because you just start
rationalizing everything. You start saying,
the ends justify the means. Yes. >> I was just going to go back
to the point that was made about how the homeless person might
volunteer for the amenities. But I have a problem with that,
because I think that's an immoral decision to
make, because you're taking advantage of another person's
struggles in life. The person might not see it
as if they have any choice. They might be forced to do something
they might not wanna do because of their poor circumstances. >> Yeah, well, how many times have we
heard that prostitution is a voluntary thing, or child labor, or even various
kinds of indentured servitude that well, they're choosing that because
their alternatives are worse. Is that a true choice? Do they really have any other
choices presented to them? You also have to ask, and
I think this underlies your point, how did they get in that
circumstance in the first place? Were there other failures,
moral failures, of other people, or other systems around them that
led them to that consequence, because that's a pretty serious
accusation, and we have to look at that. You might say, well,
how did these people end up homeless? If we fail a portion of our society and
then turn around and say, well now that you're
suffering like that, do you mind if we do these experiments on
you, it'll make your life a little better? Yeah, that starts to
get pretty disturbing. Very good points you guys are raising. Yes. >> Just to have a different point of view,
what about the time frame? You just talked about Martin Luther King. He wasn't [INAUDIBLE] shove
down people's throats. >> Mm-hm, well, we're back to. Yeah, I'm glad you did that, cuz I thought
of that when he made the point about how long she's been waiting, and
how many other people are waiting to walk. You say well, we're gonna walk
through the proper channels. People are dying while you're
going through the proper channels. Maybe you can't wait,
maybe you rush, rush, rush. But in Martin Luther King's case,
he was acting on principles, he was saying that what's being
violated are these core principles. Here, it's more the principles
themselves would have to be eviscerated in order to get the consequence. But it shows the tension in
a really interesting light and how hard this stuff is. I had a student once in an ethics course,
on the first day say, I don't know why they're
making us take this course. Ethics is pretty straightforward. We all basically can figure this out. And then [LAUGH] by the fourth
day he actually came up to me and said, man this ethics stuff is hard. [LAUGH] So it's true, it is hard. Did I miss one over here? There was a hand, went away? Okay, well, let me move on just a little
bit, cuz we're almost at our first break. We have to keep going at
a pretty fast clip here. Technically we are at the first break, but
I'm gonna stall you one minute cuz I just wanna throw out a little bit more here
about Kant, just so you have this,
cuz I think this is a very useful tool. I talked about giving you some tools
that might help you going forward. This is one I think you
might find valuable. I mentioned Immanuel Kant already. He is on the principle side. He is extreme on the principle side,
but he has given ethical reasoning a pretty wonderful gift in the form
of the categorical imperative. Because it really does speak to a lot
of the concerns that you all have been raising. And it's something you can
really easily keep in your mind. These are two formulations of
the categorical imperative, but they mean the same thing. It's just different ways of expressing it. I actually prefer the latter
because I find it easier to apply. And the second formulation there is
always treat humanity in yourself and in others never merely as a means,
but always at the same time as an end. Now what that means,
to dig down, is to say, when you make decisions involving
either yourself or other people, never use a human, a rational
being as a mere means to an end. I mean, that's what was troubling you,
that's what troubled others in this case, is the doctor wanted to use the homeless
people as just a means to an end. If you see someone as an end in
themselves, as having value, as being worthy of respect and dignity, then you come to this more sensitive
discussion that you all were raising. And again, it's still complicated,
but it's worth having. It's better progress to say,
is there a way we can do these same things that we wanna do and still treat these
people as ends in themselves, and still honor their dignity, and
still honor them and show them respect? Now the answer may come back,
no, or it may come back, yes, but we need to radically change our
approach in order to get there. But I think you'll find that if you
keep that in the back of your head, and frankly, it's something that is useful
even as you're just dealing with yourself in your own life. Because we sometimes treat ourselves
like we're just a means to an end. And we forget to see ourselves
as a being possessed of dignity. We show respect for
one another by honoring our free will. By saying this is not a tool,
a hammer or a saw, this is someone who makes choices or
should drive their own destiny. So failing to get informed
consent is a violation of that. And when we act to bring the greatest
good for the greatest number, if we can do that in a way
that doesn't violate autonomy, maybe we're getting closer to
that blend that we wanted. That blends principles and consequences. That says, okay,
we're gonna care about consequences, we're gonna put this constraint on it. Move forward towards those
good consequences you want but only in such a way that you never
use anybody just as a means. You always make sure that
their free will is honored and that you're treating them with dignity. All right,
let's take ourselves a ten-minute break. I'll give you a little wiggle room, so if
you could be back in your chairs at 11:15, we'll be good to go. [SOUND]