Daniel Kahneman’s fantastic new book Thinking, Fast and Slow is a fascinating journey through an intellectual career spanning more than forty years. Kahneman, who won the Nobel prize for his work on rationality with Amos Tversky, presents a lifetime of research and findings on human rationality and its fallacies in a coherent, intriguing, and convincing way. It is a book I would wholeheartedly recommend to anyone, regardless of any interest in criminal justice. Kahneman and Tversky’s ideas on rationality, however, have special bearing on issues of criminal justice policy, and the book might therefore be particularly interesting to this blog’s readership.

To fully understand the novelty of Kahneman’s (and Tversky’s) Nobel-winning ideas, it is important to keep in mind that they were generated against the backdrop of very traditional ideas of human rationality in economics. Classic economic theory assumes a human subject who is fully rational and fully informed, and who operates within a framework of cost-benefit analysis. Kahneman and Tversky, students of human behavior rather than of economics, devoted their careers to questioning and refining this model of human cognition to accommodate flaws and fallacies in rationality, revolutionizing the field of economics and enriching it with empirical insights about the actual, often irrational, workings of human behavior. Which is how a psychologist ended up receiving a Nobel prize in economics.

Kahneman introduces his ideas to the public through a fresh perspective that serves as the leitmotif of the book. Our thinking, he argues, is characterized by two modes, or systems, if you will. System 1 is responsible for the quick-and-dirty judgments and conjectures that allow us to instantaneously make sense of the world. When more effort is needed, System 2 snaps into action and engages in the complex thinking required to solve problems or think outside our cognitive box. The problem is that System 2 is lazy. It does not come into play unless it absolutely must, and it takes an effort to engage. So, our default mode is to slack and let System 1 do our work for us. The result is that we form our opinions about the world through shortcuts, assumptions, stereotypes, overly causal interpretations, and anchors, all of which are flawed and lead us to make a myriad of mistakes.

Kahneman proceeds by mapping for us, chapter by chapter, a series of these fallacies. Among the heuristics and biases he mentions are the halo effect (forming an opinion of something based on one or two qualities and extrapolating), What-You-See-Is-All-There-Is (WYSIATI – relying on whatever information is available, no matter how flimsy and unreliable), anchoring (linking our assessments to whatever number is thrown out, no matter how improbable), substituting easier questions for difficult ones, ignoring base rates, ignoring regression to the mean, and creating overly causal narratives for things that could be accounted for by pure chance. He then walks us through the impact these fallacies have on professional decision making, and finally through his more recent work on happiness.
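To make one of these fallacies concrete, here is my own short illustration (not an example from the book) of regression to the mean, sketched in Python: performance is modeled as stable skill plus one-off luck, so the top scorers in one round tend to look closer to average in the next, with no causal story required.

```python
import random

# Illustrative sketch only: regression to the mean.
# "Performance" = stable skill + one-off luck, so extreme performers in one
# round tend to score closer to average the next time, even though nothing
# about them has changed.
random.seed(0)

skills = [random.gauss(0, 1) for _ in range(10_000)]
round_one = [s + random.gauss(0, 1) for s in skills]   # skill + luck
round_two = [s + random.gauss(0, 1) for s in skills]   # same skill, new luck

# Take the top 5% of round-one performers and compare their average scores.
cutoff = sorted(round_one)[int(0.95 * len(round_one))]
top = [i for i, score in enumerate(round_one) if score >= cutoff]

avg_one = sum(round_one[i] for i in top) / len(top)
avg_two = sum(round_two[i] for i in top) / len(top)
print(f"Top performers, round one average: {avg_one:.2f}")
print(f"Same people, round two average:    {avg_two:.2f}")  # noticeably lower
```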

The book is fascinating for anyone who is interested in understanding human behavior, but I found its implications for criminal justice policy particularly startling. The insights on flawed rationality can explain not only public punitivism and voter initiatives, but also the flawed behavior of professionals: judges, prosecutors, and defense attorneys. Here are some of the many examples of possible applications.

A recent Supreme Court decision grappled with the question of how to prevent injustices stemming from the prosecution’s failure to comply with the Brady requirement to disclose to the defense “any exculpatory evidence”. The Court assumed that monetary compensation for exonerees, who were wrongfully convicted without an opportunity to see evidence in their favor, is effective only when prosecutors acted out of malice. In a paper I presented at a Constitutional Law conference in Chicago, following Kahneman, Tversky, and a solid body of behavioral research, I suggest that many Brady violations may be attributable not to anyone’s fault, but rather to confirmation bias: prosecutors and defense attorneys simply read evidence differently, and prosecutors, given their professional environment and their pro-government bias and socialization, are less likely to view evidence with an eye toward its exonerative potential. I’m in the process of devising a study to examine the existence and extent of confirmation bias in prosecutorial and defense perception of evidence, as well as its causes.

Another big area where heuristics and biases matter is sentencing. Kahneman’s book is full of examples of decision making skewed by chance factors. Notably, he cites a series of studies comparing judicial decisions to those of computer algorithms, finding that the algorithms make fewer mistakes. He also shows that judges making parole decisions tended to be more generous in granting release immediately after eating, when their ability to access System 2, and their cognitive ease, were at their peak. This is, of course, greatly disturbing, and a factor to keep in mind when thinking of the strong judicial opposition to sentencing guidelines and any form of diminished discretion. Contrary to the bon ton in today’s analysis of the correctional crisis, it may well be that sentencing guidelines and the diminished discretion of judges were not a fatal decision reached by overzealous punitive right-wingers and misguided left-wingers, but rather a good decision, whose adverse effects stem not from the decrease in judicial discretion but from the corresponding increase in prosecutorial discretion.

Another important implication of all this concerns risk prediction and algorithms. The research Kahneman presents strongly supports favoring the quantitative tools used by various correctional systems, including CDCR, over the sort of clinical risk assessments popular in the early 20th century. The concern we have about giving machines the power to assess individuals’ risk based on stereotypes may be exaggerated, Kahneman’s work suggests. Humans may make more serious mistakes, and past predictors of recidivism or parole violations are more reliable than intuitive impressions of trust and sympathy.
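To give a sense of what such a quantitative tool might look like, here is a purely illustrative sketch; the feature names and weights are invented for the example and are not drawn from CDCR or any real instrument.

```python
# Hypothetical actuarial risk score: invented features and weights,
# shown only to illustrate the idea of a fixed formula over past predictors.
def actuarial_risk_score(prior_convictions: int,
                         prior_parole_violations: int,
                         age_at_first_offense: int) -> float:
    """Combine a few past-behavior predictors with fixed weights."""
    score = (
        0.5 * prior_convictions
        + 0.8 * prior_parole_violations
        + (1.0 if age_at_first_offense < 21 else 0.0)
    )
    # Cap at 1.0 so the output reads as a rough 0-to-1 risk level.
    return min(score / 10.0, 1.0)

# The same inputs always yield the same score: no hunger, fatigue,
# or sympathy enters the calculation, for better or worse.
print(actuarial_risk_score(prior_convictions=3,
                           prior_parole_violations=1,
                           age_at_first_offense=19))
```

The particular weights are beside the point; what matters is that a fixed formula applies the same predictors consistently, which is exactly where the evidence Kahneman presents says unaided human judgment falters.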

An area I find particularly compelling is the study of public punitivism, and prospect theory could have a field day with what we know of it. A decent argument can be made that much of what passes for public decision making in the field of voter initiatives is System 1 work. First, the public’s reliance on “redball crimes” – shocking instances of horrifying, sensationalized crimes that receive a lot of media attention – is a prime example of WYSIATI. Rather than engaging with statistics that reveal the full picture of crime, we rely on what is salient and reported rather than on what we know to be truer. Moreover, much of the punitive legislation against sex offenders might be an example of substituting difficult questions with easy ones. Rather than thinking about what sort of punishment sex offenders deserve, how many resources to invest in punishing them, or which measures would reduce recidivism, voters may be thinking about how much they dislike sex offenders. A System 1 mechanism of “translating scales” converts the extent of dislike and revulsion into a measure of punishment, and punitive voter initiatives are born and passed into law.

There could be many more examples of possible applications, and I’m happy to entertain some of these in the comments. I just want to add a final note on the delights of Kahneman’s book: What distinguishes this book from other popular behavioral science books, such as Dan Ariely’s Predictably Irrational or Malcolm Gladwell’s Blink, is not only its quality–Kahneman respects his readers, does not oversimplify, and happily shares the depth of his intellectual process, which places this book in a class of its own–but the moving, nostalgic tribute it pays to the working partnership and decades-long friendship between him and Tversky. As many friends who have collaborated on research projects know, the relationship between collaborators is unique and special; the curiosity and give-and-take of the work create a strong bond. The book is a love letter to Tversky and to the two researchers’ community of students and colleagues. One can almost walk side by side with Tversky and Kahneman, listening in on their conversations and debates, witnessing the generation of ideas sparked by their easy, friendly conversations, and feeling the parental warmth of their respect and enthusiasm for the success of their intellectual children and grandchildren: professors, postdocs, and graduate students. It is a pleasure to enjoy this additional dimension of the book, made more poignant by the heartbreak over Tversky’s untimely death at 59 in 1996, six years before the Nobel prize win. And it is a reminder of how important it is to appreciate one’s scientific community, or scientific family, and its contributions to one’s intellectual and emotional life.

—————-
Many thanks to Haim Aviram for our discussions about this post and to Robert Rubin for the recommendation.

2 Comments

  1. Hi Hadar,
    I read Gladwell's books and enjoyed them very much. I understand from your review that this one is a level above, so I'll be glad to read it.

  2. I have been reading many books over the last few years on how we make decisions and how emotions play a major role in them. All of them mentioned Daniel. So it was only natural for me to read the original, and wow! It is the book that really lays out our decision system scientifically. This is one of those books that will require re-reading.

