(upbeat music) - Good afternoon. From the script, I am to
say I am William Lester, professor of chemistry and chair of the Hitchcock Committee. We're pleased, along with
the Graduate Council, to present Professor Daniel Kahneman, this year's speaker in the Charles M. and Martha
Hitchcock Lecture Series. As a condition of this bequest, we're obligated and happy to tell you how the endowment came to UC Berkeley. It's a story that
exemplifies the many ways this campus is linked to
the history of California and the Bay Area. Dr. Charles Hitchcock, a
physician for the Army, came to San Francisco
during the gold rush, where he opened a
thriving private practice. In 1885, Charles
established a professorship here at Berkeley as an expression of his long-held interest in education. His daughter, Lillie Hitchcock Coit, still treasured in San Francisco for her colorful personality, as well as her generosity, greatly expanded her
father's original gift to establish a professorship
at UC Berkeley, making it possible for us to
present a series of lectures. The Hitchcock Fund has become one of the most cherished endowments at the University of California, recognizing the highest distinction of scholarly thought and achievement. Thank you, Lillie and Charles. And now, a few words about
Professor Daniel Kahneman. Daniel Kahneman is an internationally renowned psychologist whose work spans cognitive psychology, behavioral economics, and
the science of well-being. In recognition of his groundbreaking work on human judgment and decision-making, Kahneman received the 2002
Nobel Prize in Economics, a field that increasingly
bases economic models upon psychological models
of information processing. Kahneman's award-winning research show that many human decisions, especially those made in
a state of uncertainty, depart from the principle of probability. With his longtime
collaborator, Amos Tversky, Kahneman laid the foundations for the new field of behavioral economics. Kahneman received his BA in 1954 from the Hebrew University in Jerusalem, majoring in psychology and
minoring in mathematics. He earned his PhD in psychology from the University of
California Berkeley in 1961, and returned to the Hebrew University as a professor of psychology during the period 1961 to '70. Kahneman has also served as
a professor of psychology at the University of British
Columbia, 1978 to '86, and the University of California
Berkeley from '86 to '94. He has taught at Princeton
University since 1993, where he currently serves as the Eugene Higgins Professor of Psychology. He is also professor of
psychology and public affairs at the Woodrow Wilson School of Public and International Affairs. Kahneman is a member of the National Academy of Sciences, the American Academy of Arts and Sciences, the Econometric Society,
and other elective societies. He has received numerous awards, including the Distinguished
Scientific Contribution Award of the American Psychological Association with Amos Tversky, 1982, the Warren Medal of the Society of
Experimental Psychologists with Amos Tversky, 1995, the Hilgard Award for lifetime contributions
to general psychology, 1995, and the Grawemeyer Award in Psychology with Amos Tversky, 2002, and holds honorary degrees
from numerous universities, including Harvard University, the University of
Pennsylvania, and the Sorbonne. Please join me in welcoming
Professor Daniel Kahneman. (audience applauding) - It is a pleasure and an honor to be here as a Hitchcock Professor. It's also quite a moving occasion for me, because the I House was
the very first place that I visited on this campus 49 years ago, almost to the day, when I came to begin my graduate work here at the University of California, so it's nice to be here
in a different context. Now, the topic I will talk about today is one on which I have worked for many of these long years. And let me begin by saying that I've always viewed
science as a conversation, which can be more or less friendly, as when friends try to
understand the same thing, or slightly less friendly debates, some of them even nasty ones. I've been by and large pretty fortunate. I've been engaged in quite a few debates, but most of them have been civil. Much of my career has been engaged in two of these debates,
both of them pretty sharp, mostly quite civilized. Both started a long time ago, when Amos Tversky and I began our study of systematic errors of
judgment, and that was in 1969. And that is Amos Tversky. He died in 1996. The first debates that we
conducted were with economists, because of the possible implications of what we were studying, which were systematic errors and biases in judgment and decision-making, implications that these
errors and biases could have for the rational agent model. There's been a second debate, and that's been with psychologists, many of whom have deplored our emphasis on mistakes, and have claimed that the picture that we drew of human
cognition is distorted and much more negative and pessimistic than it ought to be. In particular, there is at
least the appearance of a debate about the quality of expert intuition. Now this is a hot topic these days, and one of the major bestsellers
of the last few years, Blink by Malcolm Gladwell, is basically dedicated to the study or to the description of sort of the marvels of expert intuition. And for a number of
years I have been engaged in a sort of
adversarial collaboration with someone we may consider
now a friend, Gary Klein, who started from really the other side. He has argued that we
should generally trust the intuitions of experts, whereas I have argued that we should be quite careful in trusting the intuitions of experts, and we have been trying
jointly to figure out where the boundaries are. That is, when could we trust the experts and when shouldn't we trust the experts? Now, let me begin with a
couple of Gary Klein's most beautiful examples. He has one that is
particularly compelling, and which had a big influence, I think, in his thinking and his career, and that example is of a captain of a firefighting company, and he is there on a roof. There is a sort of a fire. He doesn't quite understand
what the situation is, and he is there with several firefighters, and all of a sudden he almost
hears himself shouting, let's get the hell out of here, and they do, and they barely have
time to get off the roof and the house explodes. And when he tries to figure out what it was that happened to him, there was a cue, and the cue was that his feet were warm. His feet were warm and an
inference had been made, which he was really not
conscious of making, that if his feet were warm, there was fire directly underneath them, and that is, so I'm told, an
extremely dangerous situation. So, that is an expert intuition. He had another example which
I think is quite interesting, and this is of a nurse in the cardiac ward who comes home, and
her father-in-law is there. She takes one look at him and says, we go to the hospital right now. And he says, why, I feel fine. She says, we go to the hospital, and they go and they're just in time. He is about to have a major heart attack. And here again, actually it turns out, she did not know what it
was that had worried her. It took some investigation to find out what the cues had been. It turns out that before a heart attack, when arteries are obstructed, there is a change in the pattern of the distribution of blood in the face, and she had detected that pattern without being aware of it. Now, it's not that these experts can tell
you how they do this; it is not something that you can teach. So, those stories, and
there are hundreds of them, make you really marvel at the
powers of human intuition. On the other hand, I have been worried about expert intuition for about 50 years, and that goes back to my
days in the Israeli Army, just before I came to graduate school. I spent part of my service as a psychologist in the Israeli Army, interviewing people, and observing candidates
for officer training, in what was called
leaderless group situations. Those were, you know, you
take a bunch of eight people, and you take them to the obstacle course, and there is a wall and a telephone pole, and you tell them, pick up the pole. And now your task is to get over to the
other side of the wall, all of you, without the pole touching
either the ground or the wall, and without any of you touching the wall. It can be done, but it's not easy. Now, when you watch, when
you watch people do this, as I did for a while, the striking subjective
impression is that you know exactly who
is going to be a leader and who is not, because you see people's
character really revealed, or so it feels when you
watch the situation. You see leaders and you see wimps and you see, you know, people
with foolish suggestions, and you see the second-in-command, the person who helps but
doesn't really initiate. You have a feeling that you see the truth about these individuals, an overwhelming impression that you can actually understand
who is going to be a leader and who is not. But then, it turned out that
statistics were being kept, so we would predict, you know, people's scores in
officer training school, and occasionally, we would
get statistics on our validity and how well we were doing in predicting, and we weren't doing well at all. In fact, we couldn't do it. Basically, we knew nothing
on the basis of this. And the striking thing is that, you know, it's the Army, so you are told that you know nothing, statistics are no good, but the next day, they bring
another batch of people and you take them to the obstacle course, and when you take them
to the obstacle course, once again, you see their character. You see the truth revealed about them, and I coined a phrase for this. This was I think the
first cognitive illusion that I discovered, and I call that the illusion of validity, because it's like a
powerful visual illusion, that you have that sense
that you can do something that, in fact, you cannot do. Now, it turns out that when I was doing my work
in the Israeli Army, that was just a year or two after one of the great classics in the history of psychology, a
work by Paul Meehl, who had done a seminal study comparing, looking at all the previous studies that had compared the
performance of experts, or so-called experts, psychologists, clinical
psychologists in various tasks, predicting various criteria, to very simple linear
combinations of variables. So, you have a clinical psychologist, he is looking at a lot of information. The subset of that information is used by a statistical model. And now you compare, how well do the intuitions of people do compared to how well the
statistical model does? And the statistical model is based on part of the information. It's applied in a very restricted way. Those are simple linear models, and the stunning result,
and that, at the time, was stunning, of Meehl's study was that I think in all 14
studies that he looked at in that little book, in all 14, the linear equations beat
the experts hands down. People just
think they can do it. They have their illusion of validity, but a very simple linear combination of variables does better. Now, by now 50 years later,
I think at last count, there were something like
160 or 180 studies along the same lines, and basically the conclusion is the same. When you compare people to very simple combinations of variables, the combination wins
just about every time. It's hard to find any exception. I mean, some of the
results of this research were really downright humiliating for people who try to
forecast complicated events, like whether marriages are
going to hold or not, whether people will violate parole or not, and you compare their results
to results of equations. You can construct a statistical model that will predict what the
judge himself or herself does: you have a judge making predictions, you have a statistical model
that looks at how the judge is making these predictions. And now, you take the statistical model and compare it to the judge
in the next batch of cases, and the statistical model of the judge does better than the judge.
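That model-of-the-judge result can be sketched in a toy simulation. This is purely illustrative, with invented numbers; it is not data from Meehl's review or any of the later studies. The setup: a "judge" who uses a genuinely valid cue but adds some inconsistency of her own, and a least-squares line fitted to the judge's own ratings. Because the fitted model applies the judge's policy without the judge's noise, it tracks the outcome better than the judge does:

```python
import random

random.seed(0)
n = 5000

# One valid cue per case; an outcome driven by that cue plus irreducible noise;
# and a "judge" who uses the same cue but adds inconsistency of her own.
cue = [random.gauss(0, 1) for _ in range(n)]
outcome = [c + random.gauss(0, 1) for c in cue]
judge = [c + random.gauss(0, 1) for c in cue]

# Fit a least-squares line predicting the JUDGE's ratings from the cue:
# this is the "statistical model of the judge."
m_cue = sum(cue) / n
m_judge = sum(judge) / n
slope = sum((c - m_cue) * (j - m_judge) for c, j in zip(cue, judge)) \
        / sum((c - m_cue) ** 2 for c in cue)
intercept = m_judge - slope * m_cue
model = [intercept + slope * c for c in cue]

def corr(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

r_model = corr(model, outcome)   # the model of the judge vs. the outcome
r_judge = corr(judge, outcome)   # the judge herself vs. the outcome
print(r_model > r_judge)
```

In this sketch the model wins for exactly the reason given in the lecture: the fitted line reproduces the judge's policy but, unlike the judge, gives the same answer to the same case every time.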
(audience laughing) Now the reason, by the way,
is not a big mystery: people are very clever, and too clever, so when you show them
the information on the same case on two
different occasions, they don't make the same judgment. This is the kind of mistake
that no formula does. You present the same case
to the formula twice, it comes out with the same output. That difference in noise
or in unreliability is enough to cause people to be inferior to very poor combinations of variables. Now recently, there have
been many of these studies. So you know, you have Blink on one side, the marvels of intuition, but you also have the other side, and that too was actually in a recent article by
Malcolm Gladwell, I think. You have successes of
formulas in assessing the price that would be fetched by paintings in an auction. You have great successes
of very simple formulas that use the weather in the Bordeaux area to predict the prices of
wines 10 or 15 years later. It turns out that the combination
of three or four variables does just about as well, if not better, than the best experts and wine tasters. So, we have that evidence that experts are not terribly good. There is a lot of study of people who pick stocks in the stock market, and some of these people
are extremely confident in the quality of their judgment, and they put a lot of money on the line, mostly clients', but
sometimes also their own. And when you see what they
do in predicting the future, it turns out that mostly
they're predicting the past. That is, they are strongly
affected by recent trends, and they are extrapolating trends. They don't know they're doing it, but that accounts for a
great deal of the variance. Perhaps the most remarkable of the studies in recent years is one by Phil Tetlock, my friend and Berkeley colleague, who studied the forecasts
of political experts. And you know, people who do strategy, and I think many of them were CIA analysts making medium and long term predictions about various political
events in different countries. So, you know, what will the economy look like? Will it improve, or not improve? Will there be a war, will there be a coup? And people making predictions, assigning probabilities to these events, and these are experts. And now, these are experts on whom we count for their expertise, and it's very striking that by
and large, they can't do it. They cannot predict medium and long term developments in the political and strategic arena. It's not that they can't do anything, but they cannot predict them better than the average reader
of the New York Times. So you know, we don't have much expertise as readers of the New York Times, and the CIA experts, when
they do those kinds of tasks, not when they're
explaining the recent past, or maybe predicting the near future, but when they do medium and long term, they're not better than the rest of us, although they are a lot more
expert about the systems. So, the bottom line is that
we have a lot of evidence, both for the marvels of expert intuition, and for the weaknesses
of expert intuition. And the question that arises
is, you know, what distinguishes those
cases where intuition works? And so you know, I mentioned
firefighters and nurses, and of course, chess players. So, if you have a chess player,
you know with an expert, he can walk by a chess situation, and just say white mates in
three without breaking stride. So they, chess masters can see situations in ways that we cannot. Possibly, the most astonishing feats, I think of that kind
of intuition and flair, are to be seen among
professional athletes. That is hockey players,
basketball players, their sense of what is
happening around them, their ability to predict
where people will be two or three critical seconds from now, and, as a function of that,
adjust what they do. So, what do all these people
have, that CIA experts, and psychologists, and stock pickers lack? Now, to understand this, I think we must examine the
psychology of intuition, and that is what I'll try to do with the remainder of my time. Okay, let me begin by describing
two kinds of thinking. This is one way that
thoughts come to mind. And now, let me show you another way that thoughts come to mind. Now, let's compare what
happened in those two cases. What's happening here? Well, the product of 17 times 24, you couldn't get to it, you know, unless you misspent a lot of your youth
with multiplication tables. That's something that you had to generate according to a rule. So, this is a slow, effortful process. Detecting that this woman is angry, although it's something that we would call an intuition, actually, that impression simply comes to mind. She looks angry, just
as she looks dark here. And, we have predictions about what her voice would
sound like if she speaks. We may have predictions
about what she's going to say when she speaks, certainly if we know
who she is talking to. Now, as you can see in this example, I'll call this intuitive thinking. It's a very primitive example. It's a very simple example
of intuitive thinking, but it is an example, and it feels like something
that happens to us. It doesn't feel like something we do. So, the experience of intuitive thinking is very much like the
experience of seeing, of perceiving the woman, and indeed, it is very hard to find a line that separates seeing from interpreting rather
complicated things, such as the mood of an individual. Now, a lot of psychological theorizing has gone into describing
those two ways of thinking, and as one of those simplifications that theorists sometimes engage in, we now often speak about
two systems of thought. There are two systems or two processes. It's a simplification, but it's I think a very
useful simplification. And, we talk of two modes of thinking or two families of thought processes. Sometimes, and I'll slip into that, talk about intuition versus reasoning, but I prefer the rather
neutral label system one, which is the intuitive system, and system two which is the other system. And system one is the one that detected that this lady was angry, and it clearly operates extremely quickly, although in some cases
intuitions can develop slowly, but the characteristic of
system one is very high speed. System two is slow. The other aspect of it is that, you didn't really have a choice in seeing that woman as angry. You had a choice about whether or not to compute 17 times 24. I think very few of you actually did. So you know, you just didn't do it and
then nothing happened, but you didn't have to do anything in order to see the woman as angry. So, one mechanism is automatic, and the other is voluntary. It happens if you want it to happen. You can start and not finish; you can stop in the middle, so it's controlled as against automatic. Now, there are many computations, mental computations that
have this character, that they are performed automatically. Amos Tversky and I called
them natural assessments. And, you know, when you look around you, you make a lot of these
natural assessments. You perceive objects around
you, you identify them, you know how far they are from you, you know how long they
are, you know their color. You don't have to think about any of this. These are all computations, and some of them are quite complex, but these are computations
that the mind performs and just delivers the
results to our consciousness without our being aware that we're engaging in any activity. But, there are natural
assessments which are conceptual. So you know, it's an old example, as you'll see. So, if I mentioned that
Woody Allen's parents actually wanted him to be a dentist. Now, a few years ago, this would have brought
some laughter, actually. Woody Allen is no longer quite as important a figure as he was, but whatever Woody Allen is, when you hear that sentence, if you know about Woody Allen, you know that he's not going
to be a very good dentist. (students laughing)
And that computation is performed immediately
and automatically. That is we have those two
components of the sentence and we perform an
operation that links them and compares them and
comes out with a result. This is a good fit,
this is not a good fit, and in some cases, the fit that is not
good in a particular way will bring a smile. There are many conceptual
analyses that we perform, you know on what we hear and what we see, without actually intending to do so. We are constantly on the
lookout for causal connections. So, if we see, if we hear and see events that are related in particular ways, we look for cause and effect, and we do that automatically. We evaluate similarity, as in the Woody Allen and dentists case. We evaluate familiarity and surprise, and very importantly we evaluate emotion. That is, we have an ongoing evaluation, emotional evaluation of the world as good or bad threatening or benign. That goes on automatically. And, we can prove that
it goes on automatically, but not everything gets computed, so there are certain
aspects of the environment that don't get computed. For example, if we show this
for a fraction of a second, you will have seen, or would have seen, in a fraction of a second, several things. You would have seen
that two of these objects are more similar to each other than they are to the third one. And here is something
you wouldn't have seen. You would not have seen that there is the same number of blocks in figure 2B as in figure 2A. The information is there, and for a computer, if you programmed the computer to do this, it would be no trick at all
to extract that information, or to see that if you constructed a
tower from the blocks in 2B, it would be as tall as the tower of 2A. We don't compute that. So, we don't compute
everything that we could, and that, it turns out, is quite important. So, here is a set of lines, actually. And now, here is a
computation that, it turns out from recent research, by my wife, Anne Treisman, among others, everybody performs
instantly and effortlessly, and that computation is
that you get an impression of the average length of
the lines in this display. You get that for free. You don't have to work at it. You can do this while
thinking of something else. It's a computation that the
mind is said to perform, but now there is something
else that you could ask about, and that we don't compute. What is the total length of
the lines in this display? You have no idea. You can do it, but system two will have to do it. That is, you can estimate
the average length, you can assess or in
fact get an impression of the number of lines, and then you can multiply. Again, you know, actually, if a computer were programmed to do this, the computer would take the sum on its way to getting the average. It turns out we have machinery that can compute the average,
but we don't know the sum. If we want to have the
sum, we need to compute it. That turns out to be quite important. This distinction between
assessments that are natural and assessment that are not natural, and that need to be
performed by system two, that is an important distinction
that we'll get back to. Now, let me show you another example of what we call the automaticity
of operations of system one. What you have to do here is, and you know, you can humor me, you can say it aloud: I'm going to show you some objects, and you say the color of the
objects in the following slides. Okay, so-- - Red.
- Thank you. - [Audience] Green. Yellow. (audience laughing) Okay, this is one of the classic experiments in psychology. You have no control over
what happened to you here. That is, the reading was automatic, and, it turns out, calling up
the colors was not hard, but it's not something that
comes very, very easily. There is a lot more effort
in calling out the colors than in reading. When you present the two, the reading comes out automatically, and it preempts the response, so that's an example of
an automatic response. So, here's another characteristic of those two systems. You can come to the
Heathrow Airport in London, and you want to rent a car, and they give you the keys, and they tell you, please remember that we drive on the left here, and you do, mostly. I mean, if you're not very
tired, you can do it. That ability to adopt
a rule and follow it, that is a system two operation. We can reset ourselves. System one, which is automatic, is much harder to control, and indeed, when you get very busy and very preoccupied, you may find yourself driving on the wrong side, well, not in London, but on a highway, because you're not actually monitoring closely enough what is going on. Now, so system two is a very quick study. System two can adopt an
instruction and obey, but it is slow in execution. System one is a slow learner, but it's fast in execution. So, what we call
associations between ideas are hard to learn and hard to unlearn. Now, this is important
implications for skill, because skilled operations, the operations of you know what happens, with basketball players
and with chess players, is skilled performance migrates from system two to system one. It begins as a system two operation, and it eventually, when
you become skilled, you no longer have the scaffold
of remembering the rules. You just do it. Okay, now, another characteristic of
these two modes of thinking, and I've already mentioned it. Intuition is effortless, like perception. Reasoning, or system two
operations are effortful. What do we mean by
effortless and effortful? As psychologists, we
mean a very simple thing. Mostly, we mean that if
an operation is effortful, it will interfere with other
operations that are effortful. We have a limited ability to do two difficult things at once, and if we try to do two
difficult things at once, performance will suffer, which is not true when we're dealing with
system one operations. So, that is one of the
characteristics of system one, it is effortless, system two is effortful. And, there is an important
function of system two, which this example will show. So you can follow along. The bat and the ball together cost $1.10; the bat costs a dollar more than the ball. How much does the ball cost? Now, let me make a prediction. Every one of you, I would think, has thought of a number, and the number is 10 cents. That's been in everybody's mind. Now, that makes the problem interesting, because 10 cents is false. You know, if the ball were 10 cents, and the bat a dollar more, the bat would be $1.10, and $1.10 plus 10 cents would be $1.20, so 10 cents is not the answer. Now, what is interesting is when you put that question
to Princeton students, half of them say 10 cents. And they do even
when you give them time. (students laughing) And they don't when you
frighten them, that is, you know, when they're really worried
about making a mistake, they don't, but otherwise they do. You know, even 45 percent
of students at MIT make the mistake. So, what we learn about
people who make this mistake is that they haven't checked. One of the functions of system two is to monitor system one. That is, we don't say
everything that comes to mind. We say only a fraction
of what comes to mind, sometimes too much, but we do monitor ourselves. The monitoring, however, is very casual, and this example illustrates
how casual the monitoring is. It illustrates, you know, that that number comes to mind, and it looks plausible,
and it is plausible. It's about one dollar
less, and out it goes. That, it turns out, is very important, and if you keep people busy, if you, for example, load
their memory with stuff so that they have to
remember a seven digit number while doing something else, then I wouldn't say that
their behavior collapses, but their behavior changes
in interesting ways. So, for example, they will become much less politically correct. People who are holding a seven digit number in their head use words that are not quite so nice. They use lady, they use girl, whereas, you know, otherwise
they would use woman, and sort of be more attuned to what they're supposed to do. People who hold a seven
digit number in their head are more selfish than if they don't. So, system two does monitor system one, and when you interfere with
the ability of system two to monitor system one,
performance changes. There are shades of
Freudian psychology here, but this is the modern version
of Freudian psychology. Now, and finally, associative
versus rule-governed, that is, and I'll illustrate in a minute what I mean by associative, but I want to describe the mind as a machine for jumping to conclusions. When we get very fast answers to complicated questions,
we're jumping to conclusions, and that is a very important
function of the mind. Whether we do it skillfully or not is a separate question, whether we do it accurately or not; what's important
is that this is what we do. Let me illustrate that. Okay, let me tell you a few of the things that happened to you
within a second or two of my showing this word. And every one of the things
that I'm going to say has been confirmed in research, so it is actually a fact that
all these things happened to you. Now, you couldn't help reading the word; of
course you read the word, you have no control over that. Now, there were probably images that came to your mind, images and memories. I can predict that they were not pleasant. Now, your body reacted:
your pupils dilated, your heart quickened, you sweated a bit, you know, all of this very weak, but all of it has been confirmed. I mean, this is part of
the reaction to this word. More than that, we could
have measured your face, and it twisted a bit in
an expression of disgust. If we took a picture of
the faces in the audience within a second or two of this word, what you would have seen is disgust. There is more: you recoiled. There is evidence that people's bodies, that their posture, react and are different when a word that is shown
is pleasant or unpleasant. We approach pleasant words. We recoil from unpleasant words, and there is evidence
that this too happens. You became more alert and
more vigilant than usual. And then if we presented
words in a whisper and asked you to recognize them, there is a whole set of words
to which you are now, and probably
still are, unusually sensitive: words like smell,
stink, sick, nausea. All of these words, a
whole cluster of words, you are now prepared to recognize more than you were before. This is an incomplete list
of what happened to you, and it illustrates one
of the basic mechanisms of system one. That's the spreading of ideas
in a network of associations. And the impressive aspect of
all this complex of reactions that happen to you within a second or two is its coherence. And, this is not an accidental
grouping of reactions. It all makes sense, your
reaction to that single word. Now the connections between the elements of this complex of reactions
are not necessarily logical, but ideas that have been
correlated in experience facilitate each other. Now, this mechanism of coherence prepares you for things,
and it evaluates surprises, and I can't help but tell you a story of how this system works. Well, some years ago, we were spending a vacation
on a resort island on the Barrier Reef, off Australia. It's a small resort; there are
40 rooms in that resort. And, we went to dinner the first evening, and lo and behold, there
was a psychologist I know. He's at Stanford, and we were
both really quite surprised, you know, fancy meeting you here. And, that sounded like a coincidence. And then, two weeks
later we were in London, going to the theater, and you know we, theater was dark, somebody
sat to me, next to me, and I didn't see who it was until the lights came up
during the intermission, and it was the same person. And, the thing I want to point out is I was less surprised the
second time than the first, because you know what what
I computed immediately was all oh, oh that's Crosnik, Oh, I meet him all over the place. (students laughing)
I mean, the point is, we create those associations very, very quickly. We are ready to form expectations very quickly. This is part of the associative mechanism, and sometimes it works better than at other times. Now, this system guides interpretation, so let me show you how this works. You can read that silently
and you know what that is, and you can read that and you know what that is, and of course the B and the 13 are physically identical. It's the same object, but you were not aware of this. The context determined how you read them, and you were not aware that there was any ambiguity. You were not aware that you had suppressed the ambiguity, or that there was another way of seeing what you saw. The way the system works is that it suppresses ambiguity. Doubt and ambiguity are mostly a function of system two; system one doesn't have many doubts. It makes choices, it delivers to conscious experience the choices it has made, and we're not aware of
having made the choice. Many of you are familiar with this figure, which you can see in two ways. I guess all of you are familiar with it, but the important thing is that you don't see it in two ways at once. You see it in one way; a choice is made. There are two interpretations, and in this case they alternate. So, those are some
properties of the system. This associative machinery has incredible richness and subtlety. The feats of intuition that we see attest to the subtlety of our mental models. If I ask you who is more likely to play bridge and who is more likely to play poker, a Wall Street banker or an English professor, most of you know the answer. There is an answer in the culture: we would expect the Wall Street banker to play poker and the professor to play bridge, if that is what they do. There is consensus on answers of this type. It's all part of the mental model. It is probably not a question you have ever asked yourself before, but through the network of associations, which in this case is widely shared within a culture, we can come up with answers to an infinite number of such questions in the twinkle of an eye, literally in a blink, as Malcolm Gladwell would say. We're now prepared to ask
how skills are acquired. Skilled activities that can be performed automatically are acquired through an enormous amount of practice. To become a chess master, the estimate is that it takes 10,000 hours of practice, and about the same to become a very good violinist. It is estimated that a chess master has acquired somewhere between 50,000 and 100,000 discrete, meaningful configurations of pieces, and that by putting together these elements, the chess master is able to construct a representation of the situation. But practice is not enough. What is needed for the
acquisition of skill is appropriate feedback, feedback of success and failure, and the feedback must be immediate and unequivocal. If the feedback is very delayed or ambiguous, then learning is retarded or doesn't occur at all. Now, there are some kinds of learning where feedback is not essential, for example the learning of threat. A child does not have to be burned by fire to acquire a fear of fire or a fear of crossing the street. We can condition and teach emotions, and this is part of the learning of skill. And now, I think we may begin to see why stock pickers and CIA analysts do not have the opportunity
to develop skills, or at least skills that are much superior to those of lay people. The feedback that they get on their guesses, their impressions, and their judgments is neither immediate nor unequivocal; it is very much delayed. The systems that they're dealing with are extremely complicated, and in particular there is no immediate feedback. Because of that, there are no opportunities to develop the kinds of skills that firefighters, nurses, and chess players can develop. So, I've described the machinery of system one, and I've suggested why it can acquire some skills and cannot acquire others. Now, this is really wonderful machinery, but it's also incompatible with a basic requirement of rationality. Here I'm going back to the first of the conversations
I mentioned earlier, the conversations with economists. What we find is that in a system like this, which is associatively coherent, the way that you describe outcomes and problems makes a great deal of difference to how people respond to them. So, if you describe the statistics of medical interventions like surgery and radiation therapy, it makes a difference whether you say that during the first month after surgery there is 10 percent mortality, or that the survival rate after one month is 90 percent. The first description is more frightening than the second, because the word mortality is in one and the word survival is in the other. And indeed, experienced physicians will make different choices between surgery and radiation therapy depending on which of these formulations is adopted. We have no control over that. This is an operation that
displays associative coherence. You can present the same difference in different ways. In the early days of behavioral economics, it was still legal to have two prices at the gas station, a different price for cash and for credit, and you could describe that either as a cash discount or as a credit card surcharge. Obviously, one of these descriptions is much more favorable to credit cards than the other. A cash discount we can forgo; a surcharge we hate, and at that level it's an extremely powerful response, and there is really no way to combat it. But it is assumed in the theory of the rational agent that this does not occur. So, in that sense, in the argument that psychologists have had with economists, the theory of the rational agent is a non-starter. It really does not recognize the way that system one operates. So those are framing effects. Then there are other
kinds of effects. Okay, this is a history class quiz, and here it goes; I'll go quickly. Write down the last three digits of your home phone number. Add 400 to this number. Consider this total to represent a year in the common era. Do you believe that Attila the Hun was defeated in Europe before or after this year? Okay, and now, what is your best guess about the year of Attila the Hun's defeat in Europe? Now let me show you the results. Those are the estimates that people gave as a function of their telephone number. (students laughing) The people whose telephone numbers end with high digits end up with a high estimate of the year of the defeat of Attila the Hun. What is the mechanism here? Well, I've described the mechanism: it's associative coherence. When you have that number in mind and you're asked
is it too high or too low, just having that number in mind causes you to bring together associations that make the number more plausible. This is just part of the way our mind works. We are presented with information, and system one tries to make sense of it by associative coherence. And so, we end up with a different view of history depending on whether our telephone number ends with a high number or with a small number. Let me give you another example of this. In different countries in Europe, there are, I think, eight countries where the default option
for organ donation is that you donate your organs in case of death in an accident. There are six countries where the default option is not to donate. In all 14 countries, what you have to do, if you want something other than the default, is to check a box. What you see here is the proportion of people who donate their organs in the two groups of countries. It's an enormous effect. Now, this is a highly
consequential decision, and basically, when people see a default option, they take it. This, by the way, is part of a broader pattern: anything that is presented to us, we have a strong tendency to accept. We're enormously suggestible; that is part of the way the mind works, and we can be manipulated by setting default options to quite a remarkable degree. Let me just recap and tell you a bit about what I would have said if I were going to waste 10
more minutes of your time, but I won't actually go through it. It turns out that there is a very simple way that we go about jumping to conclusions, and that is by answering a question that is easier than the question we were asked. It's a mechanism that psychologists know as heuristic thinking. You are asked a complicated question; you answer a simple one. Very often that works very well. Sometimes it leads you completely astray, but that is the mechanism, and we are typically not aware that we're doing this. So, let me just show you one picture. The question here is: what are the sizes of the
figures on the screen? They're of equal size, but they don't look as if they are; one of them looks larger than the other two. What's happening here? Well, when we look at this picture, we see a three-dimensional scene, and within a three-dimensional scene, the most distant person is by far the tallest of the three. So we have an immediate answer to the question of the three-dimensional size of these three figures. That's a natural assessment. But that's not the question you were asked. The question you were asked was, what is the size on the screen? Yet there is an answer to an easier question, far easier, and it thrusts itself on you whether you want it or not; that's the one you see. And it turns out that this mechanism is at work all over the place
in cognitive operations, and it causes people to have very quick answers to questions that really baffle the experts. That's typically because people are answering not the question that the experts are trying to answer, but a different and somewhat easier question. So, what have I tried to do? I've tried to acquaint you with a way of thinking about intuition, which distinguishes two
systems in the mind, system one and system two. I've spoken of system one, the intuitive system. The principal idea that I've proposed is this notion of associative coherence, which explains a great deal of the workings of this mechanism. And the other issue that I've raised, not fully satisfactorily answered, is that if we want to understand which skills are acquired and which are not, we need to look very carefully at the interaction between the conditions in the environment, the feedback the environment provides, and the opportunities for learning in the associative network. Thank you.
(students applauding) (upbeat music)