Cognitive Biases

Cognitive biases are systematic errors in human judgement — patterns of deviation from rationality that recur predictably across people and contexts. They are not random noise; they are directional and repeatable, which means they can be anticipated, named, and partially corrected.

Primary source: kahneman-2011-thinking-fast-and-slow (Kahneman, 2011).


Origin: The Heuristics-and-Biases Programme

daniel-kahneman and amos-tversky launched this research programme at Hebrew University in the early 1970s. Their founding insight: people use heuristics (efficient cognitive shortcuts) that work well on average but produce systematic biases in specific conditions. Earlier models assumed roughly rational agents; Kahneman and Tversky documented the precise, reproducible ways rationality fails.


Taxonomy of Major Biases

Biases from the Representativeness Heuristic

Judging probability by resemblance to a prototype or narrative:

  Base-rate neglect: ignoring prior probabilities when a vivid case description is available (the Steve-the-librarian example: there are roughly 20× more male farmers than male librarians in the US, yet “meek and tidy” makes people say librarian).
  Conjunction fallacy: judging “Linda is a feminist bank teller” as more probable than “Linda is a bank teller”, because the conjunction is more representative.
  Stereotyping: assigning a probability to an individual based on group resemblance, ignoring individual variation.
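The arithmetic behind base-rate neglect can be made explicit with Bayes' rule. The sketch below uses the roughly 20:1 farmer-to-librarian base rate from the Steve example; the likelihoods for the “meek and tidy” description are illustrative assumptions, not figures from the source.

```python
# Base rate from the Steve example: ~20 male farmers per male librarian.
prior_librarian = 1 / 21
prior_farmer = 20 / 21

# Hypothetical likelihoods for the "meek and tidy" description
# (assumed for illustration: the description fits a librarian 3x better).
p_desc_given_librarian = 0.9
p_desc_given_farmer = 0.3

# Bayes' rule: P(librarian | description)
evidence = (p_desc_given_librarian * prior_librarian
            + p_desc_given_farmer * prior_farmer)
posterior_librarian = p_desc_given_librarian * prior_librarian / evidence

print(f"P(librarian | description) = {posterior_librarian:.2f}")  # ≈ 0.13
```

Even a description three times as likely to fit a librarian leaves farmer the far better guess, because the prior dominates; intuition discards the prior entirely.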

Biases from the availability-heuristic

Estimating frequency by ease of recall:

  Availability cascade: a story that spreads widely comes to feel common and probable, independent of actual base rates.
  Affect heuristic: emotional salience (fear, disgust) inflates the perceived frequency of vivid risks (plane crashes, shark attacks) relative to dull ones (car accidents, heart disease).

anchoring-bias

Initial numbers influence final estimates even when the anchor is arbitrary: a number from a rigged wheel of fortune shifts estimates of the percentage of African nations in the UN. Adjustment away from the anchor is typically insufficient.

Overconfidence Biases

  Illusion of validity: subjective confidence in predictions exceeds their empirical accuracy (stock pickers, interviewers).
  Planning fallacy: projects finish later and cost more than planned because inside-view planning ignores the reference class (the base rate of similar projects).
  Hindsight bias: after the fact, past events feel as if they were inevitable, which inflates confidence in future predictions.

Biases from prospect-theory

  loss-aversion: losses weigh roughly twice as much as gains psychologically; this makes people risk-averse when protecting gains and risk-seeking when trying to avoid locking in a sure loss.
  framing-effect: “200 lives saved” vs. “400 lives lost” describe the same outcome, yet the two framings produce opposite preferences.
  Sunk cost fallacy: prior (sunk) investments influence decisions about whether to continue, even though only future costs and benefits are relevant.
  Endowment effect: owning something makes it feel more valuable; people demand more to give an item up than they would pay to acquire it.
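The loss/gain asymmetry can be written down directly. The value function below uses the median parameter estimates Tversky and Kahneman reported in their 1992 cumulative prospect theory paper; treat the exact numbers as indicative, not as figures from this note.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex and
    steeper for losses. Parameters are the 1992 median estimates."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# A $100 loss hurts more than twice as much as a $100 gain pleases:
gain = prospect_value(100)    # ≈ 57.5
loss = prospect_value(-100)   # ≈ -129.5
print(loss / gain)            # ≈ -2.25
```

With equal curvature for gains and losses, the loss-aversion coefficient λ is exactly the ratio of felt loss to felt gain, matching the ~2× figure in the table.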

Narrative and Coherence Biases

  Narrative fallacy: people prefer coherent causal stories to accurate probabilistic accounts.
  WYSIATI (“What You See Is All There Is”): System 1 builds high-confidence conclusions from whatever evidence is at hand, ignoring what is absent.
  Halo effect: a positive impression in one dimension generalises to all dimensions (attractive people seem more competent).
  Regression neglect: inventing causal stories for what is merely statistical regression toward the mean.
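Regression neglect is easiest to see in simulation. The toy model below assumes each test score is stable skill plus independent luck (all numbers are illustrative): the top scorers on one test land closer to the mean on a retest, with no causal story required.

```python
import random

random.seed(0)
N = 100_000

# Illustrative model: score = stable skill + independent luck per attempt.
skill = [random.gauss(100, 10) for _ in range(N)]
test1 = [s + random.gauss(0, 10) for s in skill]
test2 = [s + random.gauss(0, 10) for s in skill]

# Select the top 5% on test 1 and look at their test 2 scores.
cutoff = sorted(test1)[int(0.95 * N)]
top = [i for i in range(N) if test1[i] >= cutoff]
mean1 = sum(test1[i] for i in top) / len(top)
mean2 = sum(test2[i] for i in top) / len(top)

print(f"top group, test 1: {mean1:.1f}")  # well above 100
print(f"top group, test 2: {mean2:.1f}")  # closer to 100: pure statistics
```

The drop from `mean1` to `mean2` is what invites a spurious causal story ("they got complacent", "praise made them worse") when the true explanation is that extreme scores contain extreme luck.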

Why Biases Persist

  1. Evolutionary origin: most heuristics are adaptive in natural environments; biases occur when applied to statistical or financial contexts the mind did not evolve for.
  2. Immediate vs. delayed feedback: System 1 learns from rapid feedback (social cues, physical danger); it does not self-correct in domains with slow or absent feedback (financial markets, policy effects).
  3. Emotional charge: System 1 tags events with emotional intensity, not probability; vivid events feel probable.

Correctives

  Base-rate neglect: force consideration of the reference class before making predictions.
  Overconfidence: pre-mortems; track-record analysis; reference-class forecasting.
  Anchoring: generate your own estimate before seeing any anchor number.
  Loss-aversion: reframe decisions as policies (repeated choices) rather than one-off gambles.
  Planning fallacy: use the outside view (how long did similar projects actually take?).
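The outside-view corrective can be sketched as a simple reference-class lookup: instead of trusting the bottom-up plan, report quantiles of what similar projects actually took. The durations and the inside-view estimate below are hypothetical.

```python
# Hypothetical reference class: actual durations (weeks) of similar past projects.
reference_class = [10, 12, 14, 15, 16, 18, 20, 24, 30, 40]
inside_view_estimate = 8  # the team's optimistic bottom-up plan (assumed)

def outside_view(reference, q):
    """q-th quantile of the reference class (lower nearest-rank method)."""
    ordered = sorted(reference)
    return ordered[int(q * (len(ordered) - 1))]

median = outside_view(reference_class, 0.5)
p80 = outside_view(reference_class, 0.8)
print(f"inside view:  {inside_view_estimate} weeks")
print(f"outside view: median {median} weeks, 80th percentile {p80} weeks")
```

The design choice is the point: the forecast comes from the distribution of outcomes for the class, and the inside-view estimate is only used afterwards, as one more data point to reconcile against it.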