Saturday, July 17, 2021

Chapter review: Noiseless Rules; Objective Ignorance; The Valley of the Normal

Context
This is a review of chapters 10-12 of the 2021 book, Noise: The Flaw in Human Judgment, by Nobel laureate Daniel Kahneman et al. These chapters mostly elaborate on the concepts raised in chapter 9, Judgments and Models, which was reviewed here yesterday.

Noise is written for a general audience, but it relies on core concepts in statistics to make its points. The book’s main point is that humans are surprisingly noisy in their judgments. In the context of judgment or prediction science, noise refers to being randomly wrong. That kind of error is different from error due to bias, which is non-random and generally predictable. Humans are both biased and noisy, and we are poor at being self-aware about either. People who do not understand that cannot understand the human condition generally or politics in particular.

Some of the relevant statistics concepts are a bit subtle and/or counterintuitive, but the book explains them clearly using limited technical language. A modest understanding of those concepts is necessary to see why (i) most humans, especially demagogues, self-proclaimed political experts and ideologue blowhards, are usually surprisingly bad at making judgments, (ii) we are so confident in our judgments even when there is no objective basis for confidence, and (iii) no machine, software or human can ever be perfect in making judgments. Regarding point (i), humans are so bad that most barely do better than random guessing most of the time, including most experts dealing with their own area of expertise. One leading prediction science expert, Philip Tetlock, summed it up like this in his 2005 book, Expert Political Judgment:
“The average expert was roughly as accurate as a dart-throwing chimpanzee.”
That statement was based on about 20 years of research and thousands of judgments made by hundreds of experts in various fields.[1]


Chapter 10, Noiseless Rules
Algorithms: Simple and complex models of judgment and behavior are all algorithms. The rules that algorithms are built on are noiseless, so applying an algorithm to a decision yields analysis that is free of noise. Algorithms can be wrong or flawed to a varying extent, ranging from barely flawed to useless, but at least the output is free of noise. The quality of an algorithm depends on how good its input rules are. If the rules are reasonably good, the algorithm’s output is reasonably good, usually better than human experts’. If not, then not.

Algorithms do not have to be complicated. They can be based on just one or two rules, e.g., rank job candidates who score high on communication skill and/or motivation above the other candidates. One can run the numbers on that algorithm using the ancient fingers and toes technology for scoring people. It isn’t rocket science. It’s common sense. One just needs to understand the human condition regarding judgment and what algorithms are and can do.
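A minimal sketch of such a two-rule algorithm, in Python (the candidates, ratings, and equal weighting are hypothetical illustrations, not from the book):

```python
# Hypothetical two-rule hiring algorithm: rate each candidate 1-5 on
# communication skill and motivation, then rank by the simple sum.
candidates = {
    "Avery": {"communication": 4, "motivation": 5},
    "Blake": {"communication": 3, "motivation": 3},
    "Casey": {"communication": 5, "motivation": 2},
}

def score(attrs: dict) -> int:
    # Equal weights -- "fingers and toes" arithmetic.
    return attrs["communication"] + attrs["motivation"]

ranked = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
print(ranked)  # ['Avery', 'Casey', 'Blake'] -- same inputs, same ranking, every time
```

However crude the rule, its output is noiseless: identical inputs can never produce different rankings.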

The machines rise up and overthrow humans (The Terminator): No, that is not going to happen. Kahneman is not arguing to replace human judgment with algorithms.[2] One can base decisions on input from both humans and algorithms. Sometimes humans know things an algorithm can’t, e.g., golly, this job candidate scores really high on communications skill and motivation, but jeez, he is Larry Kudlow[1] and the job requires excellent economic judgment -- let’s not go there -- bad algorithm, bad, bad algorithm. 

Kahneman calls this the ‘broken leg’ scenario. The rule: if someone breaks their leg during the day, they are very unlikely to go to the movies later that same day, no matter what their moviegoing habits would otherwise predict. A model that was never told about the broken leg will get that case wrong, so knowing about the broken leg tells the human to override the algorithm.
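A hedged sketch of that override logic (the probabilities and function names are made up for illustration):

```python
# Toy model: predicts movie attendance from moviegoing habits alone.
def model_prob_attends(habitual_moviegoer: bool) -> float:
    return 0.8 if habitual_moviegoer else 0.1

# The human judge knows a decisive fact the model was never given.
def final_judgment(habitual_moviegoer: bool, broke_leg_today: bool) -> float:
    if broke_leg_today:   # the "broken leg": override the model
        return 0.01
    return model_prob_attends(habitual_moviegoer)

print(final_judgment(True, False))  # 0.8  -- defer to the model
print(final_judgment(True, True))   # 0.01 -- the broken leg wins
```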

Artificial intelligence (AI): Kahneman points out that while humans can come up with good rules for simple but effective prediction algorithms, AI does something that humans cannot. Specifically, AI working with tons of data can spot, describe and evaluate patterns in the data that humans simply cannot see. AI can spot broken legs that humans cannot see. Some of those patterns or broken legs are useful as rules in prediction algorithms. The data here is consistent. In terms of prediction ability there is a ranked order, from worst to best:
Most experts and chimpanzees < most simple algorithms < most modestly more complicated algorithms (improper linear models) < significantly more complicated algorithms (linear regression models) < most machine learning algorithms (AI models based on huge data sets that contain hidden broken legs)
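A hedged sketch of two rungs of that ladder, using synthetic data and made-up weights: an “improper” equal-weight linear model versus a regression fitted by least squares. Neither is from the book; the point is that the fitted model usually edges out equal weights in-sample, and both are perfectly noiseless.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                             # three standardized predictors
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(size=n)  # hypothetical outcome

# Improper linear model: weight every predictor equally, no fitting at all.
improper_pred = X.mean(axis=1)

# Proper linear regression: weights fitted by least squares.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
proper_pred = X @ beta

for name, pred in [("equal weights", improper_pred), ("fitted regression", proper_pred)]:
    r = np.corrcoef(pred, y)[0, 1]
    print(f"{name}: correlation with outcome = {r:.2f}")
# Rerunning on the same data reproduces identical predictions: zero noise.
```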

Chapter 11, Objective Ignorance
This chapter gets at some concepts that have been of intense personal interest ever since I read Tetlock’s 2005 book, Expert Political Judgment,[3] and his 2015 book, Superforecasting: The Art and Science of Prediction.[3] Specifically, what is the outer limit of human knowledge, how far into the future can human knowledge project, and what defines the limit? What defines the limit is lack of knowledge, in particular (i) unknown unknowns and (ii) self-inflicted ignorance, e.g., lack of the moral courage needed to honestly face reality. These factors limit human projection into the future to about 24 months or less, probably mostly to about 9-15 months. Predictions farther out in time tend to fade into random guessing or chimpanzee status.

The human element -- denial of ignorance: Objective ignorance is fostered by some unfortunate human traits. In particular, humans who have to make decisions and are successful tend to gain confidence in their ability to be right. They also tend to become expert at rationalizing failed predictions into successes or near successes. Kahneman comments:
“One review of intuition in managerial decision making defines it as ‘a judgment for a given course of action that comes to mind with an aura of conviction of rightness or plausibility, but without clearly articulated reasons or justifications -- essentially knowing but without knowing why.’ .... Confidence is no guarantee of accuracy, however, and many confident predictions turn out to be wrong. While both bias and noise contribute to prediction errors, the largest source of such errors is not the limit on how good predictive judgments are. It is the limit on how good they could be. This limit, which we call objective ignorance, is the focus of this chapter. .... In general, however, you can safely expect that people who engage in predictive tasks will underestimate their objective ignorance. Overconfidence is one of the best documented cognitive biases. .... wherever there is prediction, there is ignorance, and more of it than you think.”
Kahneman goes on to point out that people who believe in the predictability of things that are simply not predictable hold an attitude that amounts to a denial of ignorance. And, he asserts that “the denial of ignorance is all the more tempting when ignorance is vast. .... human judgment will not be replaced. That is why it must be improved.”

Jeez, that’s not very comforting, especially when it is manifest in stubborn, self-centered, ignorant political and business leaders. Does flawed human judgment constitute an existential threat to modern civilization, and maybe even to the human species itself? I came to the conclusion and belief long ago that it does. Tetlock helped me see that possibility with clarity. And Kahneman reinforces it again. 


Chapter 12, The Valley of the Normal
Humans are lulled into overconfidence by routine life with few surprises. It’s a great big valley of normalcy, at least in rich countries in times of relative peace and stability. Kahneman raises a huge red flag about the limits of what the social sciences actually know, regardless of what experts say or think they know. Social science research results rarely do better than allowing predictions of how two variables move in tandem (concordance) about 56% of the time at most (a correlation coefficient of about 0.20). Randomness is 50% concordance (a correlation coefficient of 0.00, i.e., no correlation or relationship beyond random chance between the measured variables). In other words, the social sciences understand much less about the world they study than physicists, who can see concordance of about 70% (a correlation coefficient of about 0.60) in their data.
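Those concordance figures follow from a standard statistical relationship: for two jointly normal variables with correlation r, the percent concordant is 50% + arcsin(r)/π. A minimal sketch (the bivariate-normal assumption is mine, not spelled out in the chapter) reproduces the numbers above:

```python
import math

# Percent concordant (PC): given two cases, how often the one higher on X is
# also higher on Y, assuming X and Y are bivariate normal with correlation r.
def percent_concordant(r: float) -> float:
    return 100 * (0.5 + math.asin(r) / math.pi)

for r in (0.00, 0.20, 0.60, 1.00):
    print(f"r = {r:.2f} -> PC = {percent_concordant(r):.0f}%")
# r = 0.00 -> PC = 50%   (random chance)
# r = 0.20 -> PC = 56%   (a typical ceiling for social science findings)
# r = 0.60 -> PC = 70%   (the physics figure cited above)
# r = 1.00 -> PC = 100%
```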

Objective ignorance causes us to understand less because social science research data usually doesn’t explain much about the hyper-complex, unpredictable real world. Once again, how the human mind deals with incomprehensible reality is front and center -- we routinely see causes in events for which there is no objective basis to know the cause:
“More broadly, our understanding of the world depends on our extraordinary ability to construct narratives that explain the events we observe. The search for causes is almost always successful because causes can be drawn from an unlimited reservoir of facts and beliefs about the world [we unconsciously apply hindsight to explain the inexplicable]. .... Genuine surprise occurs only when routine hindsight fails. This continuous interpretation of reality as it unfolds consists of the steady flow of hindsight in the valley of the normal. As we know from classic research on hindsight, even when subjective uncertainty does exist for a while, memories of it are largely erased when the uncertainty is resolved.”

Humans routinely apply causal thinking to the world and events we experience. We do this to limit the cognitive load needed to make sense of the routine and to spot abnormalities, especially threats. This is part of the mostly unconscious clinical thinking that humans rely on most of the time, as discussed in the review of chapter 9. By contrast, statistical and mechanical (algorithmic) thinking require discipline and a lot of conscious effort. Kahneman comments:
“Relying on causal thinking about a single case is a source of predictable errors. Taking the statistical view, which we will also call the outside view,[4] is a way to avoid these errors. .... The reliance on flawed explanations is perhaps inevitable, if the alternative is to give up on understanding the world. However, causal thinking and the illusion of understanding the past contribute to overconfident predictions of the future. .... the preference for causal thinking also contributes to the neglect of noise as a source of error, because noise is a fundamentally statistical notion.”
A final concept that Kahneman makes clear here is this: Correlation does not imply causation, but causation does imply correlation. The higher the correlation, the more we understand about what we observe. We often believe we understand events, but we generally cannot predict them, implying we do not really understand them.
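The asymmetry is easy to see in a toy simulation (all variables hypothetical): a hidden common cause makes two variables correlate strongly even though neither causes the other, which is exactly why correlation alone cannot certify understanding.

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(size=10_000)                   # hidden common cause
x = z + rng.normal(scale=0.5, size=10_000)    # driven by z, not by y
y = z + rng.normal(scale=0.5, size=10_000)    # driven by z, not by x

# Strong correlation (~0.8) despite no causal link between x and y.
print(f"corr(x, y) = {np.corrcoef(x, y)[0, 1]:.2f}")
```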


Footnotes: 
1. In his 2015 book, Superforecasting: The Art and Science of Prediction, Tetlock singled out two blowhard experts as having exceptionally awful judgment. One was Larry Kudlow, the ex-president’s Director of the National Economic Council, and the other was his short-lived national security advisor, Lt. General Michael Flynn. Flynn is now a twice self-confessed and convicted (but pardoned!) felon who spews lies and slanders on gullible people for a living. Kudlow moved briskly on to host a Fox Business financial program, where he proudly and confidently spews bad advice at the poor people who listen to him mindlessly bloviate.

That is some real world evidence that bad judgment includes not being able to pick competent people for really important jobs. That exemplifies just one of the non-trivial reasons the human species finds itself in the precarious situation it is in today, i.e., too often humans exercise bad judgment.

2. But really folks, given the data, sometimes humans should be replaced by algorithms. Some humans are real stinkers in terms of judgment ability, e.g., Larry Kudlow.

3. After reading Expert Political Judgment, and after what it taught finally sank in, I almost gave up on politics entirely. Humans aren’t just chimpanzees, they are stubborn and arrogant chimpanzees who simply cannot handle the reality of their own deep flaws. Those cognitive and social flaws are inherent and unavoidable products of human evolution. What kept me going was Tetlock’s second book, Superforecasting, which showed that some humans can learn to rise above their evolutionary heritage and actually translate knowledge into at least modestly better political outcomes.

4. Kahneman’s outside view seems to get at what Thomas Nagel called the view from nowhere when he tried to envision reality without reliance on the reality-distorting lens that the human mind and body constitute. What unfiltered reality actually looks like is a fascinating question.
