Jumping to early conclusions is efficient when those conclusions are likely to be correct, when the cost of an occasional error is low, and when the shortcut saves time and effort. Jumping to early conclusions is risky when the situation is unfamiliar and the stakes are high. In those circumstances the risk of error is high and can only be prevented by a deliberate intervention of System 2.
Context matters in determining how each element is interpreted. The form may be ambiguous, yet we jump to a conclusion directly, without even being aware of the ambiguity that has been resolved or avoided. System 1 handles all of this automatically.
"Ann
approached the bank"
In our
heads, we imagine Anne walking quietly towards the bank. We imagine a large
building with safes, counters, etc. But again, it's a question of context.
If the previous sentence had been "They were floating gently down the river", we would have imagined a very different scene, because the word "bank" would no longer be associated with a building and money, but with the riverbank.
System 1 automatically generated a context, the one that seemed the most plausible and natural. It effectively placed a bet on the most likely context, based on our experience.
What feeds System 1's construction of this context is current experience and recent events. The choice of context is made without our even being aware of it. System 1 keeps no trace of the other possibilities, so we never have the impression of making a choice at all. It chooses without doubt; doubt and uncertainty belong to System 2.
A Bias to Believe and Confirm
For Daniel Gilbert, in "How Mental Systems Believe", when we hear or try to understand a new idea, we first attempt to believe it: we picture the idea in our head. Only then comes a verification process, in which we work out what the idea means and whether it is true or logical. That is when we decide whether to believe it definitively or to reject it.
This initial acceptance of the idea, this willingness to believe it, is the work of System 1. We try to construct a coherent picture of the thing in our mind, even if the idea in question is completely illogical. To take Gilbert's example: if I tell you that goldfish eat candy, you will automatically gather what you know about candy and goldfish, combine that knowledge, and deduce that it is impossible, so you will decide to reject the idea. It is this verification process, which Gilbert calls "unbelieving", that involves System 2.
Gilbert ran experiments to test this. He showed participants sentences that were plainly false, such as "Lizards love to play Sudoku", and asked them to say whether each was true or false. Of course, no one got it wrong. He then repeated the test while asking the subjects to memorize a sequence of digits at the same time. With their System 2 occupied, the participants had difficulty answering correctly.
The moral is that when System 2 is busy with other tasks, we will believe almost anything. System 1 is biased toward belief and tends to accept absolutely everything. System 2 is what distinguishes true from false, but it is sometimes busy or saturated, and it is lazy besides. That is why we tend to be more receptive to advertising when we are tired or depressed.
There is also what the author calls "confirmation bias". For example, if you ask "Is Sam friendly?", you will not get the same answers as if you had asked "Is Sam unfriendly?": in each case, people search their memory for evidence that confirms the premise of the question.
Exaggerated emotional coherence (halo effect)
If you like the president's politics, you will also like his voice, his appearance, and so on, and vice versa. We tend to like or dislike everything about someone we like or dislike, even traits we have never observed. This is the halo effect, and it colors our perception of people and of the world.
Take the author's example: Alan is described as intelligent, industrious, impulsive, critical, stubborn and envious, while Ben is described with the same traits in the reverse order. Who is the more likable person? Probably Alan. And yet they have the same traits. It is the first terms on the list that change the meaning of those that follow and make us tend to like Alan more than Ben.
To counter this halo effect, the author recommends decorrelating errors. James Surowiecki, for example, describes an experiment in which several subjects were placed in front of glass jars filled with coins and asked to estimate the number of coins. Individually, the estimates were very poor: some were far above the true count, others far below. But the average of all the estimates came very close to the actual number.
The moral of the experiment is that when you are looking for information, you should draw on as many sources as possible in order to reach accurate conclusions. To decorrelate errors, it is also important to keep the sources independent: sources that are in contact will influence one another. Witnesses to a crime, for instance, will tend to alter the testimony they would otherwise have given if they talk to each other before being interviewed, even though they witnessed the same events. Similarly, before each meeting, the author recommends that everyone write down their position on the subject on a piece of paper, to avoid being influenced by those who speak first.
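To see why averaging independent estimates works, here is a minimal Python sketch; the jar size, the number of subjects and the noise level are invented for illustration and do not come from the book.

import random

random.seed(0)
true_count = 850      # hypothetical number of coins in the jar
num_subjects = 100    # hypothetical number of independent estimators

# Each subject's guess is the true count plus independent, unbiased noise.
guesses = [true_count + random.gauss(0, 200) for _ in range(num_subjects)]

avg_individual_error = sum(abs(g - true_count) for g in guesses) / num_subjects
crowd_estimate = sum(guesses) / num_subjects

print(f"Average individual error: {avg_individual_error:.0f} coins")
print(f"Average of all guesses:   {crowd_estimate:.0f} (true count: {true_count})")

# Because the errors are independent and roughly unbiased, they largely cancel
# out in the average. If the subjects influenced each other, their errors would
# be correlated and would no longer cancel, which is why the sources must be
# kept isolated.

Running it, the average individual guess is off by well over a hundred coins, while the average of all the guesses lands within a few coins of the true count.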
System 1 deals only with the information it has and feels no need for additional information, even when that information is necessary to understand the situation. This is what the author calls WYSIATI: What You See Is All There Is. And WYSIATI creates cognitive biases.
Overconfidence: in a trial, for example. One part of the jury has access to the testimony of both parties; another part has access to the testimony of only one side. And this does not actually bother them at all: they do not feel they are missing part of the story and reach a decision without regret or hesitation. What matters is not that the information is plentiful or of good quality, but only that the story it tells is coherent. A coherent story is believed by our System 1, and our lazy System 2 does not feel the need to doubt it.
Framing effects: between "the chances of survival are 90%" and "the mortality rate is 10%", people are more reassured by the first phrasing, even though the two statements describe exactly the same odds.