(intro music) Hello! I'm Geoff Sayre-McCord. I teach philosophy at the University of
North Carolina at Chapel Hill. I'm gonna speak to you today
about the Prisoner's Dilemma. Consider the following situation. You and Isabella commit
a diamond heist. Days later, you're both arrested, and the police have enough to
charge and convict you of a parole violation, for which you
will each get three years in prison. Though they have their suspicions, and the police recovered the
diamonds, they have no hard evidence that you two are the ones
who robbed the jewelry store. Yet the detective is no slouch. She decides to make
each of you an offer. If one of you, and not the other,
confesses and rats on the other, agreeing to turn state's evidence,
then that person will go free, while the other will serve
fifteen years for robbery. If you both confess and rat on each other, then while neither will
serve fifteen years, you will both serve ten years in prison. Assuming, for a minute, that you each only care about minimizing your time in prison, you would prefer that Isabella remain silent while you rat her
out, and thus go free. You are just about to tell the detective you will testify
against Isabella when you realize that she's
in the very same situation. If she reasons as you have, and so turns
state's evidence against you, you and she will end up
serving ten years each, not getting off scot-free. So, you realize, you're better off and she's better off if you
both just remain silent, working together to foil the detective's
efforts to get a confession. Since Isabella is in
the same situation, you figure she must
realize this too, and so your plan to remain silent is set, until, that is, it occurs to you that
if Isabella is going to remain silent, then you can get off with no
jail time simply by ratting her out. Moreover, if this last thought occurs to Isabella and she
decides to rat you out, you will still do better ratting her out, since instead of doing
fifteen years in prison, you'd have to serve only ten. So no matter what Isabella does, you do better turning state's
evidence against her. And the same is true of her. As a result, you both conclude you
need to confess and rat on the other, with the predictable and sad result of you both serving ten years, instead of the three years you would have served, if
only you had together remained silent. But then some good luck strikes, and you and Isabella find yourselves
alone in a room together. Taking the opportunity, you talk and agree to stay silent, so as to serve only
three years, rather than ten. When you are then separated, you rest easy, thinking you have together been able to at least minimize your jail time. Until, that is, you realize
that if you turn Isabella in, you won't have to serve any time,
unless of course she turns you in too. But if she's going to break your
agreement and turn you in, you'd better turn her in to
avoid fifteen years in prison. You are stuck again, predictably, doing worse than you might have done. You both are. If only. If only what? Well, if you could count on
her to keep her word, you could then keep yours and end up
with a sentence of only three years. Or you could turn her in and go scot-free. But you can't count on her
to keep her agreement, at least if she realizes that you might
well not keep yours. So she needs to be confident
that you won't break yours. But then she will have a strong
incentive to break hers. And if you know that of her, you too will
have a strong incentive to break yours. That is the prisoner's dilemma. Understanding the underlying
structure of this dilemma, it turns out, can shed light on
a broad range of phenomena. Consider, for instance, the so-called
"tragedy of the commons." The classic version of such a tragedy is
found in thinking about a public grazing area, a village commons, where all members
of a town are free to graze their sheep. If all restrain their use of the commons, enough will be available for
all, and herds can thrive. But as long as others are restraining
their use of the commons, each person in the village
can do even better, for themselves or for their family or for the charity to which they
will donate their earnings, by allowing their sheep
to graze a bit longer. But if everyone in the village does that,
then the herds will eventually fail. But if they're going to fail, then
it's better to graze longer now, so as to allow your sheep to live longer. The incentives are such that, unless
people can count on others not to graze the sheep too long,
each has a strong incentive to allow their own sheep to graze longer. But if people can count on others not to graze too long, then each has a
strong incentive to graze longer. The predictable result will be the
destruction of the commons. We can replace the village
commons in the story with, say, our oceans, and
replace the grazing sheep with fishing, and we will have
a model of why, so often, communities are at risk of overfishing
and entirely wiping out their livelihood. While all do better if they
all restrain their fishing, each does better if she fishes more,
whatever other people are doing. Or switch from oceans and
fishing to our atmosphere and activities that generate pollution. If we're all better off engaging in some activities that generate some pollution, and each is better off
if he or she can do what will generate a bit more
pollution, assuming others do not, we will have a model,
again, of why pollution becomes such a problem. In each case, costs and benefits are arranged in such a way
that people have reason to cooperate with each other,
refusing to rat each other out, restricting how long they let their sheep
graze, limiting the size of their catch, controlling how much they
pollute, but nonetheless seem to have stronger
reason to act otherwise, no matter how others act. Long ago, in Plato's Republic, Glaucon relied on such situations to explain the
emergence of the principles of justice. According to him, in a
world without justice, we regularly find ourselves
facing choices where we stand to gain by exercising
our power over others. But we also suffer from others
exercising their power over us. Recognizing that we all would benefit if only there were rules in place that
in effect defined a protected zone, not to be infringed on by others,
the rules of justice emerge. Of course, simply having the rules is
not enough, since while we may suppose each person benefits from the restraint
of others, each also stands to gain from sometimes breaking the
rules in their own case. Since this may be true
of virtually everyone, the rules are liable not to provide
the protection hoped for, unless there's some
way to enforce them. Hobbes thought that the risk of people violating the rules was so
strong and so disastrous that we have overwhelming reason
to set up an absolute authority, who would have power
to enforce the rules and punish those who try to violate them. Others, for instance Hume, have thought less Draconian enforcement
mechanisms would do the trick, say, the refusal of others to cooperate
with those who violate the rules. The mere fact that you might want
to work with Isabella again, or that others would refuse to work with you on discovering that you ratted
her out, might well provide effective reason for you to
stick to your agreement. But of course, if these are effective
reasons, they're effective reasons because they shift the costs and benefits away from just years in jail, in ways that mean you are no longer
in a prisoner's dilemma. Some have suggested that the fundamental problem has to do with
people being selfish. But this is a serious misunderstanding. The problem highlighted by
the prisoner's dilemma remains, however we think of
the costs and benefits at stake. The people facing choices may
be as generous as you like, and as long as there are situations
which fit the following structure, people will be facing a
prisoner's dilemma. Suppose that A is better,
by whatever measure, than B, and B is better by that measure than C, and finally C is better than D. If two people, or groups
of people or corporations, or countries, face a choice with
the following possible outcomes, they will be facing a
prisoner's dilemma. If both cooperate, then
B is the result for both. If neither does the cooperative thing, then C is the result for both. If one does the cooperative
thing and the other does not, then D is the result for the former and A is the result for the latter. In such situations, each agent does
better, again by whatever measure, no matter what the other
does, by failing to cooperate, even though the predictable result is that they each do worse than
they would have done if only they had managed to cooperate. That's the prisoner's dilemma,
and we face it all the time.
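A minimal Python sketch of this structure may help, using the jail terms from the story as the four outcomes (fewer years is better); the move labels and variable names are just illustrative, not from the video:

# Jail terms from the video, in years (lower is better):
# A = 0 (you rat, she stays silent), B = 3 (both stay silent),
# C = 10 (both rat), D = 15 (you stay silent, she rats).
A, B, C, D = 0, 3, 10, 15

# Your sentence, given (your move, Isabella's move).
years = {
    ("defect",    "cooperate"): A,
    ("cooperate", "cooperate"): B,
    ("defect",    "defect"):    C,
    ("cooperate", "defect"):    D,
}

# Whatever Isabella does, defecting leaves you with fewer years...
for her_move in ("cooperate", "defect"):
    assert years[("defect", her_move)] < years[("cooperate", her_move)]

# ...yet mutual cooperation (B) still beats mutual defection (C).
assert years[("cooperate", "cooperate")] < years[("defect", "defect")]

The same checks go through for any outcomes ranked A over B over C over D, by whatever measure, which is why the dilemma does not depend on the particular sentences, or on jail time at all.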
TL;DW: In this Wireless Philosophy video, Professor Geoffrey Sayre-McCord (UNC-Chapel Hill) explains the prisoner's dilemma. The prisoner's dilemma is a scenario in which each party's individually rational choice guarantees a worse result for everyone than if each had chosen the individually less-preferred option.
Thanks for watching! If you like our videos, please subscribe to our YouTube channel!
-WiPhi
This game show "Golden Balls" is my favourite example of the prisoner's dilemma: https://m.youtube.com/watch?v=S0qjK3TWZE8
Never trust a Sicilian when death is on the line!!!
Very interesting video. Nerdy as this sounds, I think a similar concept plays a big part in post-apocalyptic movies/books. If a group of people pool their resources, they will survive until the end, when they will have to split up. One person can take control of the resources and survive a lot longer on their own, because eventually they will all end up on their own. That person realizes that everyone else has the potential to do so as well. So does that person take control of the resources in fear of someone else doing it first, thus leaving them with nothing? Or does the person have faith and pool resources? I always found this idea interesting.
The video only barely touches on the solution, though. We've found that being a dick is the best option only if you are going to face the situation once. Over many repeated interactions, cooperation wins out. This is also a very popular model of how trust and altruism developed in humans (and some other social animals)
Edit: Wickedogg corrected me, what I wrote was a bit misleading and oversimplified. This is what I was referring to: https://plus.maths.org/content/mathematical-mysteries-survival-nicest
Probably the biggest thing to know about the Prisoner's Dilemma is that it's a single-play game. You only play it once and NEVER play again. That changes the results quite a bit.
It turns out that most everything about the PD goes out the window if you are playing the game (with all the same costs and odds) more than once, or "iterated". Then most of the optimal strategies change - the common "defect is optimal" strategy by Nash criteria is no longer valid, for example.
Instead, a "tit-for-tat" strategy is better, where you presume cooperation until the other party defects on you. Then you defect one time "to teach him/her" and return to cooperating. The worst-case scenario is that you get into a loop of alternating between cooperate and defect, but strictly this is only for 2-player games. A multi-player PD in an iterated scenario is far, far worse for everyone involved.
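A minimal Python sketch of the iterated game and the tit-for-tat strategy described in the comments above, reusing the video's jail terms as per-round payoffs (fewer years is better); the strategy and function names are just illustrative:

YEARS = {  # (my move, their move) -> my years this round
    ("defect",    "cooperate"): 0,
    ("cooperate", "cooperate"): 3,
    ("defect",    "defect"):    10,
    ("cooperate", "defect"):    15,
}

def tit_for_tat(their_history):
    # Cooperate first; afterwards, copy the opponent's last move.
    return their_history[-1] if their_history else "cooperate"

def always_defect(their_history):
    return "defect"

def play(strategy_a, strategy_b, rounds=10):
    # Each strategy sees only the other player's past moves.
    hist_a, hist_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        total_a += YEARS[(move_a, move_b)]
        total_b += YEARS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): steady cooperation
print(play(always_defect, tit_for_tat))  # (90, 105): defection pays once, then both lose

Over ten rounds, two tit-for-tat players serve 30 years each, while an unconditional defector facing tit-for-tat gains only in the first round and ends up with 90 years, which is the sense in which cooperation wins out in repeated play.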
So this explains why some of my flatmates steal from others...
You guys should play Zero Escape: Virtue's Last Reward on the 3DS or PS Vita if you're into this stuff. It's a visual novel puzzle game with a fantastic and complex story, and the Prisoner's Dilemma plays a huge part in it along with plenty of other philosophical and scientific concepts.
snitches get stitches, also on average 1 more year of jail time