Friday, August 9, 2019

Chapter Review: Reason

“So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do.” Benjamin Franklin, polymath, diplomat, humorist, Founding Father, 1706-1790

“Those who are governed by reason desire nothing for themselves which they do not also desire for the rest of mankind.” Baruch Spinoza, Dutch philosopher, 1632-1677



This discussion reviews Reason, which is chapter 21 of Steven Pinker's 2018 book, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. Pinker is a professor of psychology at Harvard. In this chapter, he defends the use of reason as a necessary component of, among other things, any hope that politics can become more rational than it is now.

Pinker paints a picture of reason, defined roughly as evidence- and logic-based thinking, as (i) having been critical to the development of modern civilization, and (ii) in need of defense despite its obvious role in improving nearly all aspects of life for centuries. He goes on to address one area of life where reason has been significantly derailed, tribal politics, a “flaming exception” to progress, and how the problem might be ameliorated. He argues that the irrationality of tribal politics can be significantly defused, in part by greater reliance on modern teaching methods for debiasing and critical thinking, and in part by having modern media and social institutions deal with politics in a less politicized and thus more rational way.

Digression - correcting an error: Pinker contradicts an assertion made here on several occasions, namely that human predictions of future events fade into statistical insignificance at about 18-30 months out. That is wrong. Some people can predict events up to about five years into the future before their predictions fade into statistical insignificance. Accuracy appears to be highest for predictions up to one year out and declines thereafter.[1]

Attacking anti-rationalism: Pinker goes directly after people who argue that rationalism is “a pretext to exert power”, that “all statements are trapped in a web of self-reference and collapse into paradox”, or that “it is futile to even try to make the world a more rational place.” He points out that “all these positions have a fatal flaw: they refute themselves. They deny that there can be a reason for those very positions.” His logic is simple: all of those arguments depend on a rational reason to believe them, and as soon as “their defenders open their mouths to begin their defense, they have lost the argument, because in that very act they are tacitly committed to persuasion – to adducing reasons for what they are about to argue, which, they insist, ought to be accepted by their listeners according to standards of rationality that both accept.”

On this point, Pinker cites philosopher Thomas Nagel as arguing that “subjectivity and relativism regarding logic and reality are incoherent, because ‘one can’t criticize something with nothing’.” Pinker goes on to point out that when even unhinged conspiracy theorists and spewers of alternative facts are challenged with “Why should I believe you?” or “Prove it,” they defend their indefensible beliefs and falsehoods. They never respond with “That’s right, there’s no reason to believe me” or “Yes, I’m lying right now.” Everyone relies on reason, not as a matter of faith in reason, but as a matter of its unavoidable use.

We’re not always rational: Pinker next turns to an apparent reason to distrust reason, namely cognitive psychology, which is subject matter that informs and drives much of B&B’s ideology and advocacy. Among others, he cites Daniel Kahneman’s 2011 book, Thinking, Fast and Slow, with its now famous description of System 1, our fast, powerful and tireless unconscious mind, and System 2, our slow, weak and easily fatigued conscious mind. Throughout his book, Pinker raises concerns about biases such as the availability heuristic, stereotyping, non-Bayesian thinking, and motivated reasoning, which sometimes lead to (1) false perceptions of reality, facts and truth, and/or (2) flawed reasoning applied to our perceptions, true or false:
But as important as these discoveries are, it is a mistake to see them as refuting some Enlightenment tenet that humans are rational actors, or as licensing the fatalistic conclusion that we might as well give up on reasoned persuasion and fight demagoguery with demagoguery. To begin with, no Enlightenment thinker ever claimed that humans were consistently rational. . . . . . What they argued was that we ought to be rational, by learning to repress the fallacies and dogmas that so readily seduce us, and that we can be rational, collectively if not individually, by implementing institutions and adhering to norms that constrain our faculties, including free speech, logical analysis, and empirical testing.

Here, Pinker seems to come close to giving up on the individual, and sees at least a possibility of greater rationality in collective action and in social institutions dedicated to rationality and the norms that foster it. The norms he mentions are crucial. The rebuttal from the alternative-facts, bogus-logic populist and conservative crowd is obvious. It goes something like this: ‘I’ve got my free speech rights, you can’t touch them, and besides, I’m the truth teller and you are the evil liar and corrupter of all that is good, civilized and American.’ Pinker may well be right that rationality will require social institutions and collective actions that render this irrational social- and self-identity no longer worth defending. Social institutions will have to play a major role in that task.

Pinker points to an aspect of evolutionary biology we have to accept and deal with, namely the deep human craving for reasons and explanations of the world. But there’s a catch:
Since the world is the way it is regardless of what people believe about it, there is a strong selection pressure for an ability to develop explanations that are true. Reasoning thus has deep evolutionary roots. . . . . But reality is a mighty selection pressure, so a species that lives by ideas must have evolved with an ability to prefer correct ones. The challenge for us today is to design an information environment in which that ability prevails over the ones that lead us into folly.

He makes good points here. All the available research points to a powerful innate need to explain things, even when there isn’t enough information to do so. Unfortunately, that means we routinely form beliefs without enough evidence. If this explanation is basically true, it reveals the origin of many false beliefs and of the flawed logic that generates them. This phenomenon is rampant in politics.



Symbols of cultural allegiance – social identity is a tough nut to crack: Pinker turns to researcher Dan Kahan (Yale, legal scholar), who generated evidence that people who hold false beliefs often do so as a signal of cultural allegiance and of who they are. The falsity of the beliefs is often not important enough to abandon them, even when the holder knows they are false. This is a matter of a person displaying social identity and, in this case, liberal, conservative or libertarian tribal affiliation:

A given belief, depending on how it is framed and who endorses it, can become a touchstone . . . . . sacred value, or oath of allegiance to one of these tribes. As Kahan and his collaborators explain: “The principal reason people disagree about climate change science is not that it has been communicated to them in forms they cannot understand. Rather, it is that positions on climate change convey values – communal concern versus individual self-reliance; prudent self-abnegation versus the heroic pursuit of reward; humility versus ingenuity; harmony with nature versus mastery over it – that divide them along cultural lines.”

Pinker points out that in one sense, belief in obviously false ideas is rational in a person’s social context. People intuitively know that their opinions, e.g., on climate change, are not going to affect anything, but they also know that switching from climate science denial to acceptance (or vice versa) can make an enormous difference in their social standing with the tribe. Someone who flips on a sacred value can be seen as odd at best and, at worst, as a traitor to the tribe, to be ostracized. Pinker observes that “Given these payoffs, endorsing a belief that hasn’t passed muster with science and fact-checking isn’t so irrational after all – at least by the criterion of the immediate effects on the believer.”

It’s hard to argue with that logic. Kahan sees this mess as people playing a role in a giant “Tragedy of the Belief Commons: what’s rational for every individual to believe (based on [self-]esteem) can be irrational for the society as a whole to act upon (based on reality).” Researchers call this self-defense phenomenon “expressive rationality” or “identity-protective cognition.” From this observer’s point of view, this explains a great deal about why it is so hard, usually impossible, to change minds with facts that run counter to self-identity, self-esteem, and political beliefs, morals and values. For climate change deniers, it is a matter of those human factors, not science, and not science-based logic. Instead, it is a matter of self-preserving logic, with or without bad science to back it up. Of course, that ignores people and businesses in climate science denial mode for the money, and self-aggrandizers and blowhards in it for themselves in one way or another.

One quick note for completeness. Education, expertise and conscious reasoning do not guarantee rationality. They can lead to more ingenious rationalizations instead of a search for truth, a bias called motivated reasoning. Hence the Franklin quote above.



Predicting the future: Pinker discusses at length the research that is the empirical basis of this observer’s favorite source of hope for mankind, the work of Philip Tetlock (U. Pennsylvania, Annenberg Professor, https://www.sas.upenn.edu/tetlock/bio , book review 1 - https://disqus.com/home/discussion/channel-biopoliticsandbionews/book_review_superforecasting/ , book review 2 - https://disqus.com/home/discussion/channel-biopoliticsandbionews/book_review_expert_political_judgment_30/ ). Tetlock’s research is revolutionary. It revealed that people can learn to become more rational if they want to. Among other things, his identification of superforecasters, people with an outstanding intuitive ability to predict future events, (1) opened new fronts in warfare among nations, (2) intensified competition among businesses, and, at least for this observer, (3) revealed a major part of a pathway toward partially rationalizing politics. Pinker sets his discussion up and then leverages it like this:

Though examining data from history and social science is a better way of evaluating our ideas than arguing from the imagination, the acid test of empirical rationality is prediction. Science proceeds by testing the predictions of hypotheses . . . . Unfortunately the epistemological standards of common sense – we should credit the people and ideas that make correct predictions, and discount the ones that don’t – are rarely applied to the intelligentsia and the commentariat [the blithering class?]. Always-wrong prognosticators [blowhards?] like Paul Ehrlich continue to be canvassed by the press, . . . . . The consequences can be dire: many military and political debacles arose from misplaced confidence in the predictions of experts . . . . . A track record of prediction also ought to inform our appraisal of intellectual systems, including political ideologies. . . . . A rational society should seek the answers by consulting the world rather than assuming the omniscience of a bloc of opinionators who have coalesced around a creed.

Pinker notes that we continue to hear from blowhards with dismal prediction track records because (1) no one is keeping score, and (2) blowhards are expert at couching their predictions in vagueness and hard-to-pin-down generalities. And, on the rare occasions when a blowhard is shown to have been wrong and called out for it, they are superb at rationalizing their failure into insignificance, e.g., “I was almost right”, “I was wrong but for the right reasons”, “I would have been right but for that unexpected incident”, “I will be proven right next year”, “That wasn’t what I predicted, you got it wrong”, and so on. Tetlock’s first book, Expert Political Judgment: How Good Is It? How Can We Know?, dives deep into the amazing ability of experts to deflect their dismal track records into nothingness. Pinker is absolutely right to pound on the dismal failures that experts have been, and mostly still are. The people and nations who have learned from Tetlock and take his research seriously are building competitive advantages over those who ignore him.
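Keeping score, it turns out, is not hard in principle. Tetlock’s tournaments grade forecasters with Brier scores, roughly the mean squared error between the probabilities a forecaster states and what actually happens; lower is better. Here is a minimal sketch in Python, with the function name and sample numbers invented for illustration rather than taken from Pinker or Tetlock:

    def brier_score(forecasts, outcomes):
        # Mean squared error between stated probabilities (0 to 1) and
        # actual outcomes (1 if the event happened, 0 if it did not).
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # A blowhard who declares near-certainty every time...
    print(brier_score([0.95, 0.95, 0.95], [1, 0, 1]))  # 0.3025
    # ...loses to a hedged forecaster who talks in probabilities.
    print(brier_score([0.70, 0.40, 0.70], [1, 0, 1]))  # ~0.113

A forecaster who always guessed 50% would score 0.25 on any set of questions, so the hypothetical blowhard above is literally worse than dart throwing.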

Trump is incompetent at picking personnel, unless you like incompetence: It may be of some interest to readers who made it this far (thanks for that) that Tetlock’s books cite two well-known people as examples of stunning, above-and-beyond-the-normal-standard expert failure: (a) Larry Kudlow, now President Trump’s Director of the National Economic Council, and (b) Michael Flynn, former high-ranking US intelligence officer and, briefly, Trump’s National Security Advisor. Before they came to power under Trump, Tetlock had ripped Kudlow and Flynn to pieces as prime examples of America’s unfettered modern blowhardoisie. Trump certainly knows how to ‘pick the best people’, if by that he means the most incompetent: they are among the best at being the worst, especially Kudlow.

The superforecaster mindset: One final consideration deserves to be mentioned. Exactly who are these superforecasters compared to non-superforecasters? Pinker describes them like this, quoting Tetlock:
The forecasters who did the worst were the ones with Big Ideas – left-wing or right-wing, optimistic or pessimistic – which they held with an inspiring (but misguided) confidence: “As ideologically diverse as they were, they were united by the fact that their thinking was so ideological. They sought to squeeze complex problems into their preferred cause-effect templates and treated what did not fit as irrelevant distractions. . . . . As a result, they were unusually confident and likelier to declare things ‘impossible’ or ‘certain’. Committed to their conclusions, they were reluctant to change their minds even when their predictions clearly failed. They would tell us, ‘Just wait.’” Indeed, the very traits that put these experts in the public eye made them the worst at prediction. . . . .

Tetlock’s superforecasters were: “pragmatic experts who drew on many analytical tools, with the choice hinging on the particular problem they faced. . . . . When thinking, they often shifted mental gears, sprinkling their speech with transition markers such as ‘however’, ‘but’, ‘although’, and ‘on the other hand’. They talked about probabilities, not certainties. And while no one likes to say ‘I was wrong’, these experts more readily admitted it and changed their minds.” . . . . . They are humble about particular beliefs, treating them as ‘hypotheses to be tested, not treasures to be guarded’. . . . . They display what the psychologist Jonathan Baron calls “active open-mindedness”. . . . . Even more important than their temperament is their manner of reasoning. Superforecasters are Bayesian, tacitly using the rule . . . . . on how to update one’s degree of credence [confidence] in a proposition in light of new evidence. . . . .

Two other traits distinguish superforecasters from pundits and chimpanzees. The superforecasters believe in the wisdom of crowds, laying their hypotheses on the table for others to criticize or amend and pooling their estimates with those of others. And they have strong opinions on chance and contingency in human history as opposed to necessity and fate. . . . . with the most accurate superforecasters expressing the most vehement rejection of fate and acceptance of chance.
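The Bayesian updating the quote mentions can be made concrete. Bayes’ rule says the updated credence in a hypothesis H after seeing evidence E is P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not-H)P(not-H)]. A minimal sketch in Python, with the scenario and all the numbers invented purely for illustration:

    def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
        # Posterior probability of a hypothesis after seeing evidence,
        # per Bayes' rule.
        p_evidence = (p_evidence_if_true * prior
                      + p_evidence_if_false * (1.0 - prior))
        return p_evidence_if_true * prior / p_evidence

    # Hypothetical: a forecaster starts at 30% that a regime falls this
    # year, then sees mass defections, judged four times likelier if the
    # regime is collapsing (60%) than if it is stable (15%).
    print(bayes_update(0.30, 0.60, 0.15))  # ~0.63

Credence rises from 30% to about 63% – a real shift, but well short of certainty, which is exactly the measured, probabilistic style Tetlock describes.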

This review will be followed by another discussion that focuses mostly on how Pinker sees a way forward toward more rational politics.

Footnote:
1. A personal misconception dispelled and a point about ideology: Pinker contradicts a belief that this reviewer held and expressed several times here at B&B, namely that researcher Philip Tetlock’s data showed the human ability to predict future events fading into statistical insignificance at about 18-30 months out. Pinker flatly contradicts that, asserting that the best minds can foresee future events with statistical significance out to about five years, after which accuracy fades to chance:
Once again, there was plenty of dart throwing [referring to the essentially random guessing that characterizes the spew from nearly all experts, talking heads, politicians, partisans, special interests, pundits, and ideological blowhards], but in both tournaments [rigidly controlled tests of people’s ability to predict the future] the couple [Tetlock and his colleague] could pick out “superforecasters” who performed not just better than chimps and pundits, but better than professional intelligence officers with access to classified information, better than prediction markets, and not too far from the theoretical maximum. How can we explain this apparent clairvoyance? (For a year out, that is – accuracy declines with distance into the future, and it declines to the level of chance around five years out.) (emphasis added)

B&B orig: 2/8/19
