
DP Etiquette

First rule: Don't be a jackass.

Other rules: Do not attack or insult people you disagree with. Engage with facts, logic and beliefs. Out of respect for others, provide sources for the facts and truths you rely on if asked. If emotion is getting out of hand, get it back in hand. To avoid dehumanizing people, don't call people or whole groups disrespectful names, e.g., stupid, dumb or liar. Insulting people is counterproductive to rational discussion; insults make people angry and defensive. All points of view are welcome: right, center, left and elsewhere. Just disagree, but don't be belligerent or reject inconvenient facts, truths or defensible reasoning.

Saturday, March 16, 2019

Does Absolute Free Speech Mean Fairness, Objectivity and Impartiality?

  “But it cannot be the duty, because it is not the right, of the state to protect the public against false doctrine. The very purpose of the First Amendment is to foreclose public authority from assuming a guardianship of the public mind through regulating the press, speech, and religion. In this field, every person must be his own watchman for truth, because the forefathers did not trust any government to separate the true from the false for us.” U.S. Supreme Court in Thomas v. Collins, 323 U.S. 516, 545 (1945)

 
Moderator message at the former Political Rhetoric Busters Disqus channel and its reincarnation as a WordPress blog

 Some people advocate absolute free speech or something close to it. Some may even want to remove limits on speech that incites imminent violence, is defamatory, is child porn, and/or is false advertising. Is it the case that allowing all speech, except what can now be punished or proscribed, is tantamount to being fair, objective and impartial? If so, that means that dark free speech[1] is fair, objective and impartial.

 On the other hand, facts, truths and logic are often bitterly contested. For example, people who deny that global warming is real or caused mostly by human activities dispute the science, the data and its interpretation. They usually also attack the scientists as liars, incompetent, ignorant of basic science, and/or enemies of the state. The two sides rely on different, incompatible sets of facts and logic. Minds do not change.

 The Supreme Court made it clear that because judges have no idea of how to separate honest from dishonest speech, the Constitution protects dark free speech as much as honest free speech.

 History and the cognitive and social sciences make it clear that dark speech is often more persuasive than honest speech. Evolution hard-wired human brains to respond more strongly to threats and the negative emotions threat elicits. In practice, this means that dark speech can easily be made stronger than honest speech, e.g., by lying, exaggerating and so forth. For example, President Trump’s claim that there is an emergency along the Mexico border is considered by most people to be a false alarm.[2] Nonetheless, that alarm is persuasive to many people, especially when people crossing the border are falsely portrayed as murdering, raping, pedophile narco-terrorists.

  Ban the speaker: The political right often criticizes the left as intolerant of opposing speech. They point to instances where speakers on college campuses are disinvited to speak. The left responds that the speakers are socially damaging in various ways, e.g., they are liars, or they foment unwarranted fear, hate, intolerance, etc.

 In view of his history of fomenting hate and racism, Australia canceled a visa for Milo Yiannopoulos to visit there. The Guardian reports: “Immigration minister David Coleman said on Saturday that comments about Islam made by Yiannopoulos in the wake of the Christchurch [New Zealand] massacre were ‘appalling and foment hatred and division’ and he would not be allowed in the country.”

 The shooter in the Christchurch, New Zealand mass murder was explicit in his ‘manifesto’ that he was murdering to divide people about guns. He used social media to spread his message of racist rage and hate while he slaughtered innocents and streamed it online in real time.

 Given history and human biology, is it fair, objective and impartial to let people use dark free speech against the public? Or, because the courts have held there is no way to tell truth from lies, is it the case that (1) allowing dark speech free rein is fair, objective and/or impartial, and (2) that’s the best that inherently flawed humans can do in view of their cognitive limitations?

Footnotes:
1. Dark free speech: lies, deceit, unwarranted emotional manipulation such as fomenting unwarranted fear, hate, anger, intolerance, bigotry or racism, unwarranted opacity to hide relevant facts or truths.

 2. “Numerous polls suggest Trump’s decision was popular among his Republican base. But his decision to use executive authority to fund a wall along the southern border is opposed by a clear majority of the public. That is reflected in six polls taken from early January to early March. By roughly a 2-to-1 margin, Americans oppose Trump’s decision to use emergency powers to build a border wall. That’s a wider margin than the Senate resolution to overturn Trump’s declaration of a national emergency, which passed 59 to 41.”

A pragmatic ideology

Original Biopolitics and Bionews post: September 3, 2016

 Current cognitive and social science of politics strongly suggests that humans generally have a very limited capacity to see unbiased reality or facts and to apply unbiased common sense to the reality they think they see. The situation is complicated and multi-faceted. Evolution produced a human mental capacity that was at least sufficient for early modern humans to survive. Existing human civilization has been built with about the same mental firepower those ancestors had. What evolution conferred is a mind that operates using (i) a high-bandwidth unconscious mind, or set of mental processes, that can process about 11 million bits of information per second, and (ii) a very low-bandwidth conscious mind that can process at most about 45-50 bits per second.



 Although our conscious mind believes it is aware of a great deal and is in control of decision-making and behavior, that perception of reality is more illusion than real. Our unconscious thinking exerts much more control over decision-making and behavior than we are aware of. Our conscious mind plays into the illusion. Unconscious innate biases, personal morals, social identity and political ideology all inject distortions into our perceptions of reality or facts and our application of common sense. Conscious reason acts primarily to rationalize or defend unconscious beliefs and rationales, even when they are wrong. False unconscious beliefs include a widespread fundamental misunderstanding of democracy. Our political thinking and behaviors are usually based on major disconnects with reality. Our unconscious mind is usually moralistic, self-righteous and intolerant. That creates a human social situation where “our righteous minds guarantee that our cooperative groups will always be cursed by moralistic strife.” Based on that description of the human condition, it's reasonable to believe that mostly irrational human politics cannot be made demonstrably more rational. That may or may not be true.

Some evidence suggests that at least some people can operate with significantly less bias in perceiving reality and in conscious reasoning. They are measurably more rational than average. The discovery of superforecasters among average people, and of their mental traits, suggests that politics might be partially rationalizable for at least some people, if not for societies or nations as a whole. Given research observations on how superforecasters improve over time, i.e., predict, get feedback, revise, and then repeat, there is reason to believe that evidence-based politics could be a route to better policy. Although the effort is in its infancy, there is some real-world evidence that cognitive science-based political policy can be simple but very successful. The trick is figuring out how to deal with personal morals, self-interest and other unconscious distortion sources that impede politics based on less biased reality and common sense.



 If it’s possible to rationalize mainstream politics at all, accepting the reality of human cognition and behavior is necessary. There’s no point in denying reality and trying to propose solutions based on a false reality. Given that, one needs to accept that (i) politics is fundamentally a matter of personal morals, ideology, and self- or group identity, and (ii) current political, economic, religious and/or philosophical moral sets or ideologies, e.g., liberalism, conservatism, capitalism, socialism, libertarianism, anarchy, etc., are fundamental to what makes people tick in terms of politics. One can argue that since existing ideological or moral frameworks have failed to rationalize politics beyond what it is now, and probably always has been, a new moral or ideological framework is necessary (although maybe not sufficient).

Since morals are personal and vary significantly among people, there’s no reason to believe that a set of morals or ideological principles cannot be conceived that could temper, or significantly substitute for, existing morals such as the care-harm moral foundation that tends to drive liberal perceptions and beliefs, or the loyalty-betrayal and other foundations that tend to drive conservatives.

  How can one rationalize politics?: Why swim upstream if there’s a potential solution to be had by swimming downstream with the cognitive current? Morals, or variants thereof, that essentially everyone already claims to adhere to (even though science says that’s just not the case) seem like a good place to start. Most people (> 97%?) of all political ideologies claim that they (i) work with unbiased facts and (ii) apply unbiased common sense. And most people believe that their politics and beliefs best serve the public interest (the general welfare or common good). Few or no people say they rely on personally biased facts and common sense, or that that’s the best way to do politics, although social science argues that that’s exactly how politics works for most people.



  Three pragmatic morals: If that’s the case, then a set of three already widely accepted morals or political principles might operate to rationalize politics to some extent without being rejected out of hand. They are (i) fidelity to less biased facts and (ii) fidelity to less biased common sense, both of which (iii) are applied in service to the public interest.

  Service to the public interest: Service to the public interest means governance based on identifying a rational, optimum balance between serving public, individual and commercial interests based on an objective, fact- and logic-based analysis of competing policy choices, while (1) being reasonably transparent and responsive to public opinion, (2) protecting and growing the American economy, (3) fostering individual economic and personal growth opportunity, (4) defending personal freedoms and the American standard of living, (5) protecting national security and the environment, (6) increasing transparency, competition and efficiency in commerce when possible, and (7) fostering global peace, stability and prosperity whenever reasonably possible, all of which is constrained by (i) honest, reality-based fiscal sustainability that limits the scope and size of government and regulation to no more than what is needed and (ii) genuine respect for the U.S. constitution and the rule of law with a particular concern for limiting unwarranted legal complexity and ambiguity to limit opportunities to subvert the constitution and the law.

  As explained here, that conception of the public interest is broad. It reflects the reality that politics is a competition for influence and money among competing interests and ideologies, all of whom essentially always claim they want what’s best for the public interest. A broad conception encompasses concepts that fully engage all competing interests, morals and ideologies, e.g., (i) national security defense (a conservative moral or concern), (ii) concern for fostering peace and environmental protection (liberal) and (iii) defense of personal freedom (libertarian). Although broad, that public service conception is meaningfully constrained by the first two pragmatic morals, less biased fact and less biased common sense. For regular “subjective” or non-pragmatic politics, neither of those are powerful constraints on most people’s perceptions of reality or facts or their conscious thinking about politics. That’s not intended as a criticism of people’s approach to or thinking about politics. It’s intended to be a non-judgmental statement of fact based on research evidence: For politics, “. . . . cherished ideas and judgments we bring to politics are stereotypes and simplifications with little room for adjustment as the facts change. . . . . the real environment is altogether too big, too complex, and too fleeting for direct acquaintance. We are not [intellectually] equipped to deal with so much subtlety, so much variety, so many permutations and combinations. Although we have to act in that environment, we have to reconstruct it on a simpler model before we can manage it.” https://uploads.disquscdn.com/images/72344a6b7c17faaffe1763b324dbcbda3aa2425aa88a63f6779a42e00a4bd011.png

 In essence, a broad conception of service to the public interest treats that interest as bigger than special interests and bigger than everything shown on this map of morals-based politics. https://uploads.disquscdn.com/images/714643f01c9a0338188c3c7a72078ff702c1f6fee2fa198c04e1f82b5e4503cd.png In other words, the public interest is bigger than special interests and personal morals or ideologies.

  Criticisms: Many or most liberals, conservatives, libertarians and others will instantly jump all over this “political ideology” as nonsense. For example, how could such a broad conception of serving the public interest make one iota of difference in how political debate occurs now? That’s a good, reasonable question, the answer to which is already given in the discussion, i.e., fidelity to less biased fact and less biased common sense. The assumption is that in the long run, politics better grounded in reality and reason would make a difference for the better.

Many people who see a threat to their own beliefs and ideologies will reject this as nonsense. They already believe (know) that they employ unbiased fact and logic in politics, although the scientific evidence strongly argues that’s not true. Plenty of other criticisms can be raised. Some libertarians and/or conservatives might claim that this subverts personal freedoms and that the concept pays only lip service to defending them. In other words, this ideology seems at best meaningless or at worst a Trojan horse of some sort, e.g., a smoke screen for socialism, fascism and/or tyranny. From a pragmatic POV, it’s easy to see, understand and anticipate that reaction from people trapped in their standard subjective political ideologies, e.g., liberals, conservatives, libertarians, socialists, etc.

What this conception does is force everyone and every ideology to (i) defend their policy choices on the basis of a less distorted world view and less biased common sense, and (ii) pay more than self-deluded and/or cynical lip service to serving the public interest. Everyone has to win arguments on less spun merits. For standard ideologues, that makes this brand of “pragmatic politics” an absolute nonstarter. It’s dead on arrival. That’s why politics based on these three political principles may be, or actually is, a new ideology. Who in their right mind would ever conceive of such a wacky thing? This won’t work for liberals, conservatives, libertarians, socialists or believers in any other existing ideology or set of morals I am aware of. To accept this set of political morals, one has to move away from existing mindsets and accept this proposal for what it is, i.e., advocacy of a cold, harsh competition in a brutal marketplace of less spun ideas and arguments based on less spun facts and realities. Some thought has gone into this. Here are responses to a list of criticisms of this three morals-based political ideology.



  Questions: Does proposing a three morals-based pragmatic political ideology make any sense? Is it too utopian to be a reasonable means to partially rationalize politics? Could it ever appeal to more than just a few people? What has been overlooked in the morals or in the articulation of the public interest? What are the fatal flaws in the underlying reality and/or rationale? Does the existence of superforecasters provide a template for social change, or are those people intellectual freaks or flukes who cannot guide widespread social change?

 Is it pointless to even discuss such an approach to politics because people will never allow it, or, as David Hume argued in the 18th century, are incapable of subjugating standard personal moral foundations (their passions) to facts and logic? Would it matter if many people, say 4-5% of adult Americans, did adopt this pragmatic mindset, e.g., by forming a vocal group or tribe that young people could identify with and adopt from the start, rather than ever having to switch from an existing standard ideology?

Book notes: Superforecasting - The Art And Science Of Prediction

Original Biopolitics and Bionews post: September 1, 2016

  Philip E. Tetlock - Superforecasting: The Art and Science of Prediction, Crown Books, 2015

Notes: System 1 refers to our powerful unconscious thinking and the biases and morals that shape it (Jonathan Haidt's "elephant")
System 2 refers to our weak conscious thinking, "reason" or "common sense", including the unconscious biases that are embedded in it (Haidt's "rider")
  Foxes: People having a relatively open-minded mindset (described in Tetlock's first book, Expert Political Judgment: How Good Is It? How Can We Know?)
  Hedgehogs: People with a more closed mindset

  Book notes Chapter 1: An optimistic skeptic
p. 3: regarding expert opinions, there is usually no accurate measurement of how good they are, there are “just endless opinions - and opinions on opinions. And that is business as usual.”; the media routinely delivers, or corporations routinely pay for, opinions that may be accurate, worthless or in between and everyone makes decisions on that basis
 p. 5: talking-head talent is skill in telling a compelling story, which is sufficient for success; their track record is largely irrelevant to that success - most of them are about as good as random guessing; predictions are time-sensitive - 1-year predictions tend to beat guessing more than 5- or 10-year projections
 p. 8-10: there are limits on what is predictable → in nonlinear systems, e.g., weather patterns, a small initial condition change can lead to huge effects (chaos theory); we cannot see very far into the future (maybe 18-36 months?)
 p. 13-14: predictability and unpredictability coexist; a false dichotomy is saying the weather is unpredictable - it is usually relatively predictable 1-3 days out, but at days 4-7 accuracy usually declines to near-random; weather forecasters are slowly getting better because they are in an endless forecast-measure-revise loop ("perpetual beta" mode); prediction consumers, e.g., governments, businesses and regular people, don’t demand evidence of accuracy, so it isn’t available, and that means no revision, which means no improvement
 p. 15: Bill Gates’s observation: surprisingly often a clear goal isn’t specified, so it is impossible to drive progress toward the goal; that is true in forecasting; some forecasts are meant to (1) entertain, (2) advance a political agenda, or (3) reassure the audience that their beliefs are correct and the future will unfold as expected (this kind is popular with political partisans)
 p. 16: the lack of rigor in forecasting is a huge social opportunity; to seize it (i) set the goal of accuracy and (ii) measure success and failure
 p. 18: the Good Judgment Project found two things, (1) foresight is real and some people have it and (2) it isn’t strictly a talent from birth - (i) it boils down to how people think, gather information and update beliefs and (ii) it can be learned and improved
 p. 21: from a 1954 book - analysis of 20 studies showed that algorithms based on objective indicators were better predictors than well-informed experts; more than 200 later studies have confirmed that and the conclusion is simple - if you have a well-validated statistical algorithm, use it
 p. 22: machines may never be able to beat talented humans, so dismissing human judgment as just subjective goes too far; maybe the best that can be done will come from human-machine teams, e.g., Garry Kasparov and Deep Blue together against a machine or a human
 p. 23: David Ferrucci, chief engineer of IBM’s Watson, is optimistic: “‘I think it’s going to get stranger and stranger’ for people to listen to the advice of experts whose views are informed only by their subjective judgment.”; Tetlock: “. . . . we will need to blend computer-based forecasting and subjective judgment in the future. So it’s time to get serious about both.”

  Chapter 2: Illusions of knowledge
p. 25: regarding a medical diagnosis error: “We have all been too quick to make up our minds and too slow to change them. And if we don’t examine how we make these mistakes, we will keep making them. This stagnation can go on for years. Or a lifetime. It can even last centuries, as the long and wretched history of medicine illustrates.”

 p. 30: “It was the absence of doubt - and scientific rigor - that made medicine unscientific and caused it to stagnate for so long.”; it was an illusion of knowledge - if the patient died, he was too sick to be saved, but if he got better, the treatment worked - there was no controlled data to support those beliefs; for decades, physicians resisted the idea of randomized, controlled trials as proposed in 1921 because they (falsely) believed their subjective judgments revealed the truth

 p. 35: on Daniel Kahneman’s (Nobel laureate) fast System 1: “A defining feature of intuitive judgment is its insensitivity to the quality of the evidence on which the judgment is based. It has to be that way. System 1 can only do its job of delivering strong conclusions at lightning speed if it never pauses to wonder whether the evidence at hand is flawed or inadequate, or if there is better evidence elsewhere.” - context - instantly running away from a Paleolithic shadow that might be a lion; Kahneman calls these tacit assumptions or biases WYSIATI (what-you-see-is-all-there-is); System 1 judgments take less than 1 sec. - there’s no time to think about things; regarding coherence: “. . . . we are creative confabulators hardwired to invent stories that impose coherence on the world.”

 p. 38-39: confirmation bias: (i) seeking evidence to support the 1st plausible explanation, (ii) rarely seeking contradictory evidence and (iii) being a motivated skeptic in the face of contrary evidence and finding even weak reasons to denigrate it or reject it entirely, e.g., a doctor’s belief that a quack medical treatment works for all but the incurable

 p. 40: attribute substitution, availability heuristic or bait and switch: one question may be difficult or unanswerable without more info, so the unconscious System 1 substitutes another, easier, question and the easy question’s answer is taken as the hard question’s answer, even when it is wrong; CLIMATE CHANGE EXAMPLE: people who cannot figure out climate change on their own substitute what most climate scientists believe for their own belief - it can be wrong (Me: it can also be right -- how does the non-expert assess technology beyond one's capacity to evaluate it?)

 p. 41-42: “The instant we wake up and look past the tip of our nose, sights and sounds flow into the brain and System 1 is engaged. This system is subjective, unique to each of us.”; cognition is a matter of blending inputs from System 1 and 2 - in some people, System 1 has more dominance than in others; it is a false dichotomy to see it as System 1 or System 2 operating alone; pattern recognition: System 1 alone can make very good or bad snap judgments and the person may not know why - bad snap judgment or false positive = seeing the Virgin Mary in burnt toast (therefore, slowing down to double check intuitions can help)

 p. 44: tip of the nose perspective is why doctors did not doubt their own beliefs for thousands of years (ME: and that kept medical science mostly in the dark ages until after the end of WWII) https://uploads.disquscdn.com/images/1de3df9f1d005c58643da4b5045c1e668e4a824d04b824d3b6f83eb68d6119e2.jpg

  Chapter 3: Keeping Score
p. 48: it is not unusual that a forecast that may seem dead right or wrong really cannot be “conclusively judged right or wrong”; the details needed to score a forecast may be absent, e.g., time frames, geographic locations, reference points, definitions of success or failure, definitions of terms, a specified probability of events (e.g., a 68% chance of X), or many comparison forecasts to assess the predictability of what is being forecasted;
 p. 53: “. . . . vague verbiage is more the rule than the exception.”
 p. 55: security experts were asked what the term “serious possibility” meant in a 1951 National Intelligence Estimate → one analyst said it meant 80 to 20 (4 times more likely than not), another said it meant 20 to 80 and others said it was in between those two extremes → ambiguous language is ~useless, maybe more harmful than helpful
 p. 50-52: national security experts had views split along liberal and conservative lines about the Soviet Union and future relations; they were all wrong and Gorbachev came to power and de-escalated nuclear and war tensions; after the fact, all the experts claimed they could see it coming all along; “But the train of history hit a curve, and as Karl Marx once quipped, the intellectuals fall off.”; the experts were smart and well-informed, but they were just misled by System 1’s subjectivity (tip of the nose perspective)
 p. 58-59: the U.S. intelligence community resisted putting definitions and specified probabilities in their forecasts until finally, 10 years after the WMD fiasco with Saddam Hussein, the case for precision was so overwhelming that they changed; “But hopelessly vague language is still so common, particularly in the media, that we rarely notice how vacuous it is. It just slips by.”
p. 60-62: calibration: perfect calibration = X% chance of an event when past forecasts have always been “there is a X% chance” of the event, e.g., rainfall; calibration requires many forecasts for the assessment and is thus impractical for rare events, e.g., presidential elections; underconfidence = prediction is X% chance, but reality is a larger X+Y% chance; overconfidence = prediction is X% chance, but reality is a smaller X-Y% chance
 p. 62-66: the two facets of good judgment are captured by calibration and resolution; resolution: high resolution occurs when low-probability (< ~20%) or high-probability (> ~80%) events are accurately predicted; accurately predicting rare events gets more weight than accurately predicting more common events; a low Brier score is best: 0.0 is perfect, 0.5 is random guessing, and 2.0 means making all-or-nothing predictions that are wrong 100% of the time; however, a score of 0.2 in one circumstance, e.g., weather prediction in Phoenix, AZ, looks bad, while a score of 0.2 in Springfield, MO is great because the weather there is far less predictable than in Phoenix; apples-to-apples comparisons are necessary, but that kind of data usually doesn’t exist (Me: society is dismally data-poor)
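To make the 0.0 / 0.5 / 2.0 anchors above concrete, here is a minimal Python sketch of the two-outcome Brier score as the book describes it. The function name and the forecast numbers are mine, invented for illustration, not data from the book.

```python
def brier_score(forecasts, outcomes):
    """Two-outcome Brier score: 0.0 perfect, 0.5 random guessing, 2.0 worst.

    forecasts: probabilities that 'the event happens'; outcomes: 1 or 0.
    """
    total = 0.0
    for p, o in zip(forecasts, outcomes):
        # Squared error on both outcomes: (p - o)^2 + ((1-p) - (1-o))^2
        total += (p - o) ** 2 + ((1 - p) - (1 - o)) ** 2
    return total / len(forecasts)

# Always guessing 50% lands exactly on the random-guessing anchor.
print(brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0]))  # → 0.5
# Confident and mostly right scores far lower (better), about 0.05.
print(brier_score([0.9, 0.1, 0.8, 0.2], [1, 0, 1, 0]))
```

Note that being certain and always wrong (e.g., forecasting 100% when the event never happens) yields the maximum score of 2.0, matching the book's worst-case anchor.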
 p. 68: In Expert Political Judgment (Tetlock's first book), the bottom line was that some experts were marginally better than random guessing - the common characteristic was how they thought, not their ideology, Ph.D. or not, or access to classified information; the typical expert was about as good as random guessing and their thinking was ideological; “They sought to squeeze complex problems into the preferred cause-effect templates and treated what did not fit as irrelevant distractions. Allergic to wishy-washy answers, they kept pushing their analyses to the limit (and then some), using terms like “furthermore” and “moreover” when piling up reasons why they were right and others were wrong. As a result, they were confident enough to declare things “impossible” or “certain.” Committed to their conclusions, they were reluctant to change their minds even when their predictions clearly failed. They would tell us, ‘Just wait.’”
 p. 69: “The other group consisted of more pragmatic experts who drew on many analytical tools, with the choice of tool hinging on the particular problem they faced. . . . . They talked about possibilities and probabilities, not certainties.”
 p. 69: “The fox knows many things but the hedgehog knows one big thing. . . . . Foxes beat hedgehogs on both calibration and resolution. Foxes had real foresight. Hedgehogs didn’t. . . . . How did hedgehogs manage to do slightly worse than random guessing?”; hedgehog example is CNBC’s Larry Kudlow and his supply side economics Big Idea in the face of the 2007 recession
 p. 70-72: on Kudlow: “Think of that Big Idea as a pair of glasses that the hedgehog never takes off. . . . And, they aren’t ordinary glasses. They are green-tinted glasses . . . . Everywhere you look, you see green, whether it’s there or not. . . . . So the hedgehog’s one Big Idea doesn’t improve his foresight. It distorts it.”; more information helps increase hedgehog confidence, not accuracy; “Not that being wrong hurt Kudlow’s career. In January 2009, with the American economy in a crisis worse than any since the Great Depression, Kudlow’s new show, The Kudlow Report, premiered on CNBC. That too is consistent with the Expert Political Judgment data, which revealed an inverse correlation between fame and accuracy: the more famous an expert was, the less accurate he was.”; “As anyone who has done media training knows, the first rule is keep it simple, stupid. . . . . People tend to find uncertainty disturbing and “maybe” underscores uncertainty with a bright red crayon. . . . . The simplicity and confidence of the hedgehog impairs foresight, but it calms nerves - which is good for the careers of hedgehogs. . . . Foxes don’t fare so well in the media. . . . This aggregation of many perspectives is bad TV.”
 p. 73: an individual who makes a one-off accurate guess is different from people who guess accurately consistently; consistency is based on aggregation, which is the recognition that useful info is widely dispersed and each bit needs a separate weighting for importance and relevance
 p. 74: on information aggregation: “Aggregating the judgments of people who know nothing produces a lot of nothing.” (Hm - what about Disqus channels that demand that all voices and POVs be heard, informed or not?); the bigger the collective pool of accurate information, the better the prediction or assessment; Foxes tend to aggregate, Hedgehogs don’t
 p. 76-77: aggregation: looking at a problem from one perspective, e.g., pure logic, can lead to an incorrect answer; multiple perspectives are needed; using both logic and psycho-logic (psychology or human cognition) helps; some people are lazy and don’t think, some apply logic to some degree and then stop, while others pursue logic to its final conclusion → aggregate all of those inputs to arrive at the best answer; “Foxes aggregate perspectives.”
 p. 77-78: on human cognition - we don’t aggregate perspectives naturally: “The tip-of-your-nose perspective insists that it sees reality objectively and correctly, so there is no need to consult other perspectives.”
 p. 79-80: on perspective aggregation: “Stepping outside ourselves and really getting a different view of reality is a struggle. But Foxes are likelier to give it a try.”; people’s temperaments fall along a spectrum from the rare pure Foxes to the rare pure Hedgehogs; “And our thinking habits are not immutable. Sometimes they evolve without our awareness of the change. But we can also, with effort, choose to shift gears from one mode to another.”
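The aggregation idea running through these pages (pp. 73-77) can be sketched numerically: pooling many independent, partially informed judgments tends to land closer to the truth than a typical individual judgment does. A minimal illustration - every number here is invented for the demonstration, not taken from the book:

```python
import random
import statistics

random.seed(42)

truth = 0.7  # the (unknown) true probability of some event

# 50 forecasters each see a noisy, partial view of the truth.
estimates = [min(1.0, max(0.0, random.gauss(truth, 0.15))) for _ in range(50)]

# Aggregation: pool every judgment instead of trusting any single one.
aggregate = statistics.mean(estimates)

# The pooled estimate is typically much closer to the truth
# than the average individual estimate is.
typical_individual_error = statistics.mean(abs(e - truth) for e in estimates)
aggregate_error = abs(aggregate - truth)
```

This is the simplest possible aggregation (an unweighted mean); the notes above stress that Foxes go further and weight each bit of information separately for importance and relevance.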

  Chapter 4: Superforecasters
 p. 84-85: the U.S. intelligence community (IC) is, like every huge bureaucracy (about 100,000 people, about a $50 billion budget), very change-resistant - it saw and acknowledged its colossal failure to predict the Iranian revolution, but did little or nothing to address its dismal capacity to predict situations and future events; the WMD-Saddam Hussein disaster 22 years later finally inflicted a big enough shock to get the IC to seriously introspect; p. 88 (book review comment): my Intelligence Advanced Research Projects Agency work isn’t as exotic as the Defense Advanced Research Projects Agency’s, but it can be just as important
 p. 89: humans “will never be able to forecast turning points in the lives of individuals or nations several years into the future - and heroic searches for superforecasters won’t change that.”; the approach: “Quit pretending you know things you don’t and start running experiments.” (ME: the argument for evidence-based politics)
 p. 90-93: the shocker: although the detailed results are classified (it’s gov’t-funded IARPA research), Good Judgment Project (GJP: https://www.gjopen.com/) volunteers who passed screening and used simple algorithms, but had no access to classified information, beat government intelligence analysts who did have such access; one contestant (a retired computer programmer) had a Brier score of 0.22, 5th best among 2,800 GJP participants, and then in a later competition among the best forecasters his score improved to 0.14 (lower Brier scores are better), tops among the initial group of 2,800 → he beat the commodities futures markets by 40% and the “wisdom of the crowd” control group by 60% (ME: hire this person and get rich)
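The Brier score mentioned here is the book's accuracy metric: the mean squared difference between probability forecasts and what actually happened. (Tetlock uses the original multi-category formulation, which ranges 0-2; the common binary simplification below ranges 0-1.) Either way, lower is better, which is why a move from 0.22 to 0.14 is an improvement:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts (0..1) and
    binary outcomes (1 = happened, 0 = didn't). Lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident forecaster who is always right scores a perfect 0.0 ...
assert brier_score([1.0, 0.0, 1.0], [1, 0, 1]) == 0.0
# ... hedging everything at 50% scores 0.25 ...
assert brier_score([0.5, 0.5], [1, 0]) == 0.25
# ... and confident wrongness is punished hardest.
assert abs(brier_score([0.9], [0]) - 0.81) < 1e-9
```

The squared penalty is what makes the score proper: it rewards honest probabilities rather than hedged or bluffed ones.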
 p. 94-95: the best forecasters got things right at 300 days out more than regular forecasters looking out 100 days and that improved over the 4-year GJP experiment: “. . . . these superforecasters are amateurs forecasting global events in their spare time with whatever information they can dig up. Yet they somehow managed to set the performance bar high enough that even the professionals have struggled to get over it, let alone clear it with enough room to justify their offices, salaries and pensions.”
 p. 96: on IARPA’s willingness to critically self-assess after the WMD disaster in Iraq: “And yet, IARPA did just that: it put the intelligence community’s mission ahead of the people inside the intelligence community - at least ahead of those insiders who didn’t want to rock the bureaucratic boat.”
 p. 97-98: “But it’s easy to misinterpret randomness. We don’t have an intuitive feel for it. Randomness is invisible from the tip-of-your-nose perspective. We can see it only if we step outside of ourselves.”; people can be easily tricked into believing that they can predict entirely random outcomes, e.g., guessing coin tosses; “. . . . delusions of this sort are routine. Watch business news on television, where talking heads are often introduced with a reference to one of their forecasting references . . . . And yet many people take these hollow claims seriously.” (bloviation & blither sells)
p. 99: “Most things in life involve skill and luck, in varying proportions.”
 p. 99-101: regression to the mean cannot be overlooked and is a necessary tool for testing the role of luck in performance → regression is slow for activities dominated by skill, e.g., forecasting, and fast for activities dominated by chance/randomness, e.g., coin tossing
 p. 102-103: a key question: how did superforecasters hold up across the years? → in years 2 and 3, superforecasters did the opposite of regressing to the mean - they got better; sometimes causal connections are nonlinear and thus not predictable, and some of that had to be present among the variables the forecasters were facing → there should be some regression unless an offsetting process is increasing forecasters’ performance; there is some regression - about 30% of superforecasters (roughly 1 in 3) fall out of the top 2% each year, but 70% stay in - the individual year-to-year correlation is about 0.65, which is pretty high → Q: Why are these people so good?
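The skill-vs-luck point on pp. 99-103 can be simulated: when results are pure luck, this year's leaders regress to the pack next year (year-to-year correlation near zero); when skill dominates, rankings persist. A toy sketch, with all parameters invented for illustration:

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def year_to_year_correlation(skill_weight, n=2000):
    """Each player's yearly result blends fixed talent with fresh luck;
    skill_weight=0.0 is pure chance, skill_weight=1.0 is pure skill."""
    random.seed(0)
    talent = [random.random() for _ in range(n)]
    def season():
        return [skill_weight * t + (1 - skill_weight) * random.random()
                for t in talent]
    return pearson(season(), season())
```

Pure luck (`skill_weight=0.0`) yields a correlation near 0 - fast regression to the mean, as with coin tossing - while pure skill yields 1.0, i.e., no regression at all. The supers' observed ~0.65 sits well toward the skill end of that spectrum.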

  Chapter 5: Supersmart? p. 114: Fermi-izing questions - breaking a question into relevant parts - allows better guesses, e.g., how many piano tuners are there in Chicago? → guess the total population, the total # of pianos, the time to tune one piano, and the hours/year a tuner works → that technique usually increases accuracy a lot, even when none of the numbers are known; Fermi-izing tends to defuse the unconscious System 1’s tendency to bait & switch the question; EXAMPLE: would testing of Arafat’s body 6 years after his death reveal the presence of polonium (Po), which is allegedly what killed him? → Q1: can you even detect Po 6 years later? Q2: if Po is still detectable, how could it have gotten there, e.g., Israel, or Palestinian enemies before or after his death? → for this question the outside view - what % of exhumed bodies are found to be poisoned - is hard to (i) identify and (ii) find the answer to, but identifying it is most important; i.e., poisoning is not certain (< 100%, say 80%), but there has to be more than trivial evidence, otherwise authorities would not allow the body to be exhumed (> 20%) → use 50%, the halfway point of the 20-80% range, as the outside view, then adjust the probability up or down based on research and the inside or intuitive System 1 view → that’s using a blend of unconscious intuition plus conscious reason → personal political ideology has little or nothing to do with it; p. 118: superforecasters look at questions first from Kahneman’s “outside view”, i.e., the statistical or historical base rate or norm (the anchor), and then second use the inside view to adjust probabilities up or down → System 1 generally goes straight to the comfortable but often wrong inside view and ignores the outside view; will a Vietnam-China border clash start in the next year? - the first (outside) view asks how many clashes there have been over time, e.g., once every 5 years, and then the second view of current Vietnam-China politics is merged in to adjust the baseline probability up or down; p. 120: the outside view has to come first; “And it’s astonishingly easy to settle on a bad anchor.”; good anchors are easier to find from the outside view than from the inside; p. 123-124: some superforecasters kept explaining in the Good Judgment Project online forum how they approached problems and what their thinking was, and kept asking for criticisms, i.e., they were looking for other perspectives; simply asking if a judgment is wrong tends to lead to improvement in the first judgment; “The sophisticated forecaster knows about confirmation bias and will seek out evidence that cuts both ways.”; p. 126: “A brilliant puzzle solver may have the raw material for forecasting, but if he also doesn’t have an appetite for questioning basic, emotionally-charged beliefs he will often be at a disadvantage relative to a less intelligent person who has a greater capacity for self-critical thinking.”; p. 127: “For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded.”
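The Fermi-izing technique described above - decompose an unknowable quantity into separately guessable parts - can be made concrete with the book's piano-tuner question. Every input below is a rough illustrative guess, not data; the point is the decomposition:

```python
# Fermi-izing the classic question: how many piano tuners work in Chicago?
# Each number is a rough, separately guessable input (illustrative guesses).
population = 2_700_000          # ballpark Chicago population
people_per_household = 2.5
piano_owning_share = 1 / 20     # guess: 1 in 20 households has a piano
tunings_per_piano_per_year = 1
hours_per_tuning = 2            # including travel time
tuner_hours_per_year = 40 * 50  # a full-time working year

pianos = population / people_per_household * piano_owning_share
demand_hours = pianos * tunings_per_piano_per_year * hours_per_tuning
tuners = demand_hours / tuner_hours_per_year  # ~54 full-time tuners
```

None of the inputs is known precisely, yet the multiplied-out answer tends to land within the right order of magnitude, because errors in the individual guesses partially cancel.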

  Chapter 6: Superquants? p. 128-129: most superforecasters are good at math, but mostly they rely on subjective judgment; one super said: “It’s all, you know, balancing, finding relevant information and deciding how relevant is this really?”; it’s not math skill that counts most - it’s nuanced subjective judgment; p. 138-140: we crave certainty, and that’s why Hedgehogs and their confident yes-or-no answers on TV are far more popular and comforting than Foxes with their discomforting “on the one hand . . . but on the other” style; people equate confidence with competence; “This sort of thinking goes a long way to explaining why so many people have a poor grasp of probability. . . . The deeply counterintuitive nature of statistics explains why even very sophisticated people often make elementary mistakes.”; a forecast of a 70% chance of X happening means that there is a 30% chance it won’t - that fact is lost on most people → most people translate an 80% chance of X to mean X will happen, and that just isn’t so; only when probabilities are closer to even, maybe about 65:35 to 35:65 (p. 144), does the translation for most people become “maybe” X will happen, which is the intuitively uncomfortable acknowledgment of the uncertainty associated with most everything; p. 143: superforecasters tend to be probabilistic thinkers, e.g., Treasury secretary Robert Rubin; epistemic uncertainty describes something unknown but theoretically knowable, while aleatory uncertainty is both unknown and unknowable; p. 145-146: superforecasters who used more granularity, e.g., a 20, 21 or 22% chance of X, tended to be more accurate than those who used 5% increments, and they in turn tended to be more accurate than those who used 10% increments, e.g., 20%, 30% or 40%; when estimates were rounded to the nearest 5% or 10%, the granular best superforecasters fell into line with all the rest, i.e., there was real precision in those 1% increment predictions; p. 148-149: “Science doesn’t tackle “why” questions about the purpose of life. It sticks to “how” questions that focus on causation and probabilities.”; “Thus, probabilistic thinking and divine-order thinking are in tension. Like oil and water, chance and fate do not mix. And to the extent we allow our thoughts to move in the direction of fate, we undermine our ability to think probabilistically. Most people tend to prefer fate.”; p. 150: the sheer improbability of something that does happen, e.g., you meet and marry your spouse, is often attributed to fate or God’s will, not the understanding that sooner or later most people marry someone; the following psycho-logic is “incoherent”, i.e., not logic: (1) the chance of meeting the love of my life was tiny, (2) it happened anyway, (3) therefore it was meant to be, and (4) therefore the probability it would happen was 100%; p. 152: scored for the tendency to accept or reject fate and accept probabilities instead, average Americans are mixed or about 50:50, undergrads are somewhat more biased toward probabilities, and superforecasters are the most grounded in probabilities while rejecting fate as an explanation; the more inclined a forecaster is to believe things are destined or fated, the less accurate their forecasts were, while probability-oriented forecasters tended to have the highest accuracy → the correlation was significant
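The granularity finding on pp. 145-146 - that rounding a careful forecaster's 1%-increment estimates to the nearest 5 or 10 points measurably hurts accuracy - can be demonstrated with a simulated, well-calibrated forecaster. The data here are synthetic, purely for illustration:

```python
import random

random.seed(1)

# A well-calibrated simulated forecaster: each event's true chance of
# occurring equals the stated forecast (synthetic data, illustration only).
forecasts = [random.random() for _ in range(20_000)]
outcomes = [1 if random.random() < p else 0 for p in forecasts]

def brier(probs):
    """Mean squared error against the realized outcomes; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(outcomes)

granular = brier(forecasts)                       # full 1% granularity
coarse = brier([round(p, 1) for p in forecasts])  # rounded to nearest 10%

# Rounding away real precision worsens (raises) the Brier score,
# mirroring what Tetlock found when he coarsened the supers' forecasts.
```

The effect per forecast is small, which is exactly why only genuinely calibrated forecasters lose anything by rounding: if the extra decimal places were noise, rounding them off would cost nothing.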

  Chapter 7: Supernewsjunkies? p. 154-155: based on the news flowing in, superforecasters tended to update their predictions, and that tended to improve accuracy; but it isn’t just a matter of following the news and changing output from sufficient new input - their initial forecasts were 50% more accurate than regular forecasters’; p. 160: belief perseverance = people “rationalizing like crazy to avoid acknowledging new information that upsets their settled beliefs.” → extreme obstinacy, e.g., the fact that something someone predicted didn’t happen is taken as evidence that it will happen; p. 161-163: on underreacting to new information: “Social psychologists have long known that getting people to publicly commit to a belief is a great way to freeze it in place, making it resistant to change. The stronger the commitment, the greater the resistance.”; perceptions are a matter of our “identity”; “. . . . people’s views on gun control often correlate with their views on climate change, even though the two issues have no logical connection to each other. Psycho-logic trumps logic.”; “. . . . superforecasters may have a surprising advantage: they’re not experts or professionals, so they have little ego invested in each forecast.”; consider “career CIA analysts or acclaimed pundits with their reputations on the line.” (my observation: once again, ego rears its ugly head and the output is garbage - check your ego at the door); p. 164: on overreacting to new information: dilution effect = irrelevant or noise information can and often does change perceptions of probability, and that leads to mistakes; frequent forecast updates based on small “units of doubt” (small increments) seem to minimize both overreacting and underreacting; balancing new information against the information that drove the original or earlier updates captures the value of all the information; p. 170: Bayes’ theorem: new/updated belief/forecast = prior belief × diagnostic value of the new information; most superforecasters intuitively understand Bayes’ theorem but can’t write the equation down, nor do they actually use it; instead they use the concept and weigh updates based on the value of new information
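Bayes' theorem as these notes summarize it - new belief = prior belief × diagnostic value of the new evidence - looks like this in probability form. The numbers are an invented example:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(hypothesis | evidence) via Bayes' theorem: the prior
    is reweighted by how diagnostic the new evidence is."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Start at 30%; observe evidence 4x likelier if the hypothesis is true.
big_step = bayes_update(0.30, 0.80, 0.20)    # moves to ~63%
# Barely diagnostic evidence barely moves the estimate - the small
# "units of doubt" updating style the notes describe.
small_step = bayes_update(0.30, 0.55, 0.50)  # moves to ~32%
```

Evidence that is equally likely either way (`p_evidence_if_true == p_evidence_if_false`) has no diagnostic value and leaves the prior unchanged, which is the formal version of not overreacting to noise.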

  Chapter 8: Perpetual Beta p. 174-175: two basic mindsets - the growth mindset holds that you can learn and grow through hard work; the fixed mindset holds that you have what you were born with, and that innate talents can be revealed but not created or developed, e.g., fixed mindsetters say things like “I’m bad at math”, and it becomes a self-fulfilling prophecy; fixed-mindset children given harder puzzles give up and lose interest, while growth-mindset kids love the challenge because, for them, learning is a priority; p. 178: consistently inconsistent - John Maynard Keynes engaged in an endless cycle of try, fail, analyze, adjust, try again; he retired wealthy from his investing, despite massive losses from the Great Depression and other personal blunders; skills improve with practice; p. 181-183: prompt feedback on forecasts is necessary for improvement, but it’s usually lacking - experience alone doesn’t compensate - experienced police gain confidence that they are good at spotting liars, but it isn’t true because they don’t improve with time; most forecasters get little or no feedback because (1) their language is ambiguous and their forecasts are thus not precise enough to evaluate - self-delusion is a real concern - and (2) there’s usually a long time lag between a forecast and the time needed to get feedback on success or failure - with time a person forgets the details of their own forecasts, and hindsight bias distorts memory, which makes it worse; vague language is elastic and people read into it what they want; hindsight bias = knowing the outcome of an event distorts our perception of what we thought we knew before the outcome; experts succumb to it all the time, e.g., predictions of the loss of the communist power monopoly in the Soviet Union before it disintegrated in 1991 vs. after it happened → expert recall of their own estimates was 31% higher than the original estimates (= hindsight bias); p. 190: “Superforecasters are perpetual beta.” - they have the growth mindset; p. 191-192: list of superforecaster tendencies - Philosophic outlook: cautious (things are uncertain), humble (reality is infinitely complex), nondeterministic (what happens isn’t meant to be and doesn’t have to happen); Ability & thinking style: actively open-minded (beliefs are hypotheses to be tested, not treasures to be protected), intelligent, knowledgeable & having a need for cognition (conscious thinking) - intellectually curious, liking puzzles and challenges; Forecasting methods: pragmatic (not wedded to any idea or agenda), analytical (can step back from the tip-of-nose view and consider other views), dragonfly-eyed (value diverse views and synthesize them into their own), probabilistic (judge using many grades or degrees of maybe or chance), thoughtful updaters (change their minds when facts change), good intuitive psychologists (aware of the value of checking personal thinking for cognitive and emotional biases); Work ethic: growth mindset (believe it’s possible to improve) and grit (determined to keep at it however long it takes); superforecaster traits vary in importance: perpetual beta mode matters most → the degree to which supers value updating and self-improvement (growth mindset) is a predictor 3 times more powerful than the next best predictor, intelligence

  Chapter 9: Superteams p. 201: success can breed mental habits that undermine the very habits that produced the success in the first place; on the other hand, properly functioning teams can foster dragonfly-eyed perspectives and thinking, which can improve forecasting; p. 208-209: givers on teams are not chumps - they tend to make the whole team perform better; it is complex, and it will take time to work out the psychology of groups - replicating this won’t be easy in the real world; “diversity trumps ability” may be true due to the different perspectives a team can generate, or maybe it’s a false dichotomy and a shrewd mix of ability and diversity is the key to optimum performance

  Chapter 10: The Leader’s Dilemma p. 229-230: Tetlock uses the German Wehrmacht as an example of how leadership and good judgment can be effectively combined, even though it served an evil end → the points being that (i) even evil can operate intelligently and creatively, so don’t underestimate your opponent, and (ii) seeing something as evil and wanting to learn from it presents no logical contradiction, only a psychological tension that superforecasters overcome because they will learn from anyone or anything that has information or lessons of value

  Chapter 11: Are They Really So Super? p. 232-233: in a 2014 interview Gen. Michael Flynn, head of the DIA (DoD’s equivalent of the CIA; 17,000 employees), said “I think we’re in a period of prolonged societal conflict that is pretty unprecedented,” but googling the phrase “global conflict trends” says otherwise; Flynn, like Peggy Noonan with her partisan reading of political events, suffered from the mother of all cognitive illusions, WYSIATI (what-you-see-is-all-there-is) → every day for three hours, Flynn saw nothing but reports of conflicts and bad news; what is important is that Flynn, a highly accomplished and intelligent operative, fell for the most obvious illusion there is → even when we know something is a System 1 cognitive illusion, we sometimes cannot shut it off and see unbiased reality, e.g., the Müller-Lyer optical illusion (two equal lines, one with arrow ends pointing out and one with ends pointing in - the in-pointing arrow line always looks longer, even when you know it isn’t) p. 234-237: “. . . . 
 dedicated people can inoculate themselves to some degree against certain cognitive illusions.”; scope insensitivity is a major illusion of particular importance to forecasters - it is another bait & switch bias where a hard question is unconsciously substituted with a simpler one, e.g., the average amount groups of people would be willing to pay to avoid 2,000, 20,000 or 200,000 birds drowning in oil ponds was the same for each group, $80 in increased taxes → the problem’s scope recedes into the background so far that it becomes irrelevant; the scope insensitivity bias or illusion (Tetlock seems to use the terms interchangeably) is directly relevant to geopolitical problems; surprisingly, superforecasters were less influenced by scope insensitivity than average forecasters - their scope sensitivity wasn’t perfect, but it was good (better than Kahneman guessed it would be); Tetlock’s guess → superforecasters were skilled and persistent in making System 2 corrections of System 1 judgments, e.g., by stepping into the outside view, which dampens System 1 bias, and/or they ingrained the technique to the point that it became “second nature” for System 1
 p. 237-238: CRITICISM: how long can superforecasters defy psychological gravity?; maybe a long time - one developed software designed to correct System 1 bias in favor of the like-minded, which helped lighten the heavy cognitive load of forecasting; Nassim Taleb’s Black Swan criticism of all of this is that (i) rare events, and only rare events, change the course of history and (ii) there just aren’t enough occurrences to judge calibration, because so few events are both rare and impactful on history; maybe superforecasters can spot a Black Swan and maybe they can’t - the Good Judgment Project (GJP) wasn’t designed to ask that question
 p. 240-241, 244: REBUTTAL OF CRITICISM: history flows from both Black Swan events and incremental changes; if only Black Swans counted, the GJP would be useful only for short-term projections, with limited impact on the flow of events over long time frames; and, if time frames are drawn out to encompass a Black Swan, e.g., the one-day storming of the Bastille on July 14, 1789 vs. that day plus the ensuing 10 years of the French revolution, then such events are not so unpredictable - what’s the definition of a Black Swan?; other than the obvious, e.g., there will be conflicts, predictions 10 years out are impossible because the system is nonlinear; p. 245: “Knowing what we don’t know is better than thinking we know what we don’t.”; “Kahneman and other pioneers of modern psychology have revealed that our minds crave certainty and when they don’t find it, they impose it.”; referring to experts’ revisionist response to the unpredicted rise of Gorbachev: “In forecasting, hindsight bias is the cardinal sin.” - hindsight bias not only makes past surprises seem less surprising, it also fosters the belief that the future is more predictable than it is

  Chapter 12: What’s Next? p. 251: “On the one hand, the hindsight-tainted analyses that dominate commentary after major events are a dead end. . . . . On the other hand, our expectations of the future are derived from our mental models of how the world works, and every event is an opportunity to learn and improve those models.”; the problem is that “effective learning from experience can’t happen without clear feedback, and you can’t have clear feedback unless your forecasts are unambiguous and scoreable.” p. 252: “Vague expressions about indefinite futures are not helpful. Fuzzy thinking can never be proven wrong. . . . . Forecast, measure, revise: it is the surest path to seeing better.” - if people see that, serious change will begin; “Consumers of forecasting will stop being gulled by pundits with good stories and start asking pundits how their past predictions fared - and reject answers that consist of nothing but anecdotes and credentials. And forecasters will realize . . . . that these higher expectations will ultimately benefit them, because it is only with the clear feedback that comes with rigorous testing that they can improve their foresight.” p. 252-253: “It could be huge - an “evidence-based forecasting” revolution similar to the “evidence-based medicine” revolution, with consequences every bit as significant.”
 p. 253: IS IMPROVEMENT EVEN POSSIBLE?: nothing is certain: “Or nothing may change. . . . . things may go either way.”; whether the future will be the “stagnant status quo” or change “will be decided by the people whom political scientists call the attentive public”; “I’m modestly optimistic.” (Question: is this a faint glimmer of hope that politics can be partially rationalized on the scale of individuals, groups, societies, nations and/or the whole human species?) p. 254-256: one can argue that the only goal of forecasting is accuracy, but in practice there are multiple goals - in politics the key question is: Who does what to whom? - people lie because self and tribe matter, and in the mind of a partisan (Dick Morris predicting a Romney landslide victory just before Romney lost is the example Tetlock uses - maybe Morris lied about lying) lying to defend self or tribe is justified, because partisans want to be the ones doing the whatever to the whom; “If forecasting can be co-opted to advance their interests, it will be.” - but on the other hand, the medical community resisted efforts to make medicine scientific, yet over time persistence and effort paid off - entrenched interests simply have to be overcome (another faint glimmer of hope?)
 p. 257: Tetlock's focus: “Evidence-based policy is a movement modeled on evidence-based medicine, with the goal of subjecting government policies to rigorous analysis so that legislators will actually know - not merely think they know - whether policies do what they are supposed to do.”; “. . . . there is plenty of evidence that rigorous analysis has made a real difference in government policy.”; analogies exist in philanthropy (Gates Foundation) and sports - evidence is used to feed success and curtail failure p. 262-263: “What matters is the big question, but the big question can’t be scored.”, so ask a bunch of relevant small questions - it’s like pointillism painting - each dot means little but thousands of dots create a picture; clusters of little questions will be tested to see if that technique can shed light on big questions p. 264-265: elements of good judgment include foresight and moral judgment, which can’t be run through an algorithm; asking the right questions may not be the province of superforecasters - Hedgehogs often seem to come up with the right questions - the two mindsets needed for excellence may be different
 p. 266: the Holy Grail of my research: “. . . . using forecasting tournaments to depolarize unnecessarily polarized policy debates and make us collectively smarter.” (Tetlock sees a path forward, but doesn’t aggressively generalize it to all of politics, including the press-media → this is a clear step toward “rational” politics) p. 269: adversarial but constructive collaboration requires good faith; “Sadly, in noisy public arenas, strident voices dominate debates, and they have zero interest in adversarial collaboration. . . . But there are less voluble and more reasonable voices. . . . . let them design clear tests of their beliefs. . . . . When the results run against their beliefs, some will try to rationalize away the facts, but they will pay a reputational price. . . . . All we have to do is get serious about keeping score.”

 GJP-related websites: www.goodjudgement.com https://www.gjopen.com/ 
http://edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii

Book Review: Superforecasting: The Art and Science of Prediction

Original Biopolitics and Bionews post: September 2, 2016

 In Superforecasting: The Art & Science of Prediction, social scientist Philip E. Tetlock and journalist Dan Gardner (Crown Publishers, September 2015) observe that at its heart, political policy is usually about predicting the future. The exercise boils down to finding and implementing policies that will do best for the public interest (general welfare or common good), regardless of how one defines the concept. What most accurately describes the essence of intelligent, objective, public service-oriented politics? Is it primarily an honest competition among the dominant ideologies of our times, defense of one’s social identity, a self-interested quest for money, influence or power, or some combination thereof? Does it boil down to understanding the biological functioning of the human mind and how it sees and thinks about the world? Is it something else entirely? 

 Subject to caveats, Superforecasting comes down on the side of getting brain biology or cognition right. Everything else is subordinate. Superforecasting describes Tetlock's research into asking what factors, if any, can be identified that contribute to a person’s ability to predict the future. Tetlock asks how well intellectually engaged but otherwise non-professional people can do. The performance of volunteers is compared against experts, including professional national security analysts with access to classified information.

  The conscious-unconscious balance: What Tetlock and his team found was that the interplay between dominant, unconscious, fact- and common sense-distorting intuitive human cognition (“System 1” or the “elephant” as described before) and our far less powerful but conscious, rational thinking (“System 2” or the “rider”) was a key factor in how well people predicted future events. The imbalance of power or bandwidth between conscious and unconscious thinking is estimated to be at least 100,000-fold in favor of the unconscious. The trick to optimal performance appears to be found in people who are able to strike a balance between the two modes of thinking, with the conscious mind constantly self-analyzing to reduce the fact distortions and logic biases or flaws that the unconscious mind constantly generates. Tetlock observes that a “defining feature of intuitive judgment is its insensitivity to the quality of the evidence on which the judgment is based. It has to be that way. System 1 can only do its job of delivering strong conclusions at lightning speed if it never pauses to wonder whether the evidence at hand is flawed or inadequate, or if there is better evidence elsewhere. . . . . we are creative confabulators hardwired to invent stories that impose coherence on the world.” Coherence can arise even when there is insufficient information to support it. In essence, the human mind evolved an ‘allergy’ to ambiguity, contradictions and concepts that threaten personal morals, identity and/or self-interest. To deal with those uncomfortable things, we rapidly and unconsciously rationalize them away by denying or distorting them.

It turns out that, with some training and the right mindset, a few people - “superforecasters” - routinely trounce professional experts at predicting future events. In Tetlock’s 4-year study, the “Good Judgment Project,” funded by the U.S. government’s Intelligence Advanced Research Projects Agency (IARPA), about 2,800 volunteers made over a million predictions on topics that ranged from potential conflicts between countries to currency and commodity (e.g., oil) price fluctuations. The predictions had to be precise enough to be analyzed and scored. About 1% of the 2,800 volunteers turned out to be superforecasters who beat national security analysts by about 30% at the end of the first year. One even beat commodities futures markets by 40%.

The superforecaster volunteers did whatever they could to get information, but they nonetheless beat professional analysts who were backed by computers and programmers, spies, spy satellites, drones, informants, databases, newspapers, books and whatever else that professionals with security clearances have access to. As Tetlock put it, “. . . . these superforecasters are amateurs forecasting global events in their spare time with whatever information they can dig up. Yet they somehow managed to set the performance bar high enough that even the professionals have struggled to get over it, let alone clear it with enough room to justify their offices, salaries and pensions.”

  What makes superforecasters so good?: The top 1-2% of volunteers were analyzed for personal traits. In general, superforecasters tended to be people who were open-minded about collecting information, their world view and opposing opinions. They were also able to step outside of themselves and look at problems from an “outside view.” To do that they searched out and integrated other opinions into their own thinking. Those traits go counter to the standard human tendency to seek out information that confirms what we already know or want to believe. That bias is called confirmation bias.

The open-minded trait also tended to reduce unconscious System 1 distortion of problems and potential outcomes by other unconscious cognitive biases such as the powerful but subtle “what you see is all there is” bias, hindsight bias and scope insensitivity, i.e., not giving proper weight to the scope of a problem. Superforecasters tended to break complex questions down into component parts so that relevant factors could be considered separately. That tends to reduce unconscious bias-induced fact and logic distortions. In general, superforecaster susceptibility to unconscious biases was lower than for other volunteers in the GJP. That appeared to be due mostly to their capacity to use conscious (System 2) thinking to recognize and then reduce unconscious (System 1) biases.

Analysis revealed that superforecasters tended to share 15 traits including (i) cautiousness based on an innate knowledge that little or nothing was certain, (ii) being reflective, i.e., introspective and self-critical, (iii) being comfortable with numbers and probabilities, (iv) being pragmatic and not wedded to any particular agenda or ideology, and, most importantly, (v) intelligence, and (vi) being comfortable with (a) updating personal beliefs or opinions and (b) belief in self-improvement (having a growth mindset). Tetlock refers to that mindset as being in “perpetual beta” mode.

 Unlike political ideologues, superforecasters tended to be pragmatic, i.e., they generally did not try to “squeeze complex problems into the preferred cause-effect templates [or treat] what did not fit as irrelevant distractions.” Compare that with politicians who promise to govern as proud progressives or patriotic conservatives and the voters who respond to those appeals. What the best forecasters knew about a topic and their political ideology was less important than how they thought about problems, gathered information and then updated thinking and changed their minds based on new information.

The best engaged in an endless process of information and perspective gathering, weighing information relevance and questioning and updating their own judgments when it made sense, i.e., they were in “perpetual beta” mode. Doing that required effort and discipline. Political ideological rigidity such as conservatism or liberalism was generally detrimental. Regarding common superforecaster traits, Tetlock observed that “a brilliant puzzle solver may have the raw material for forecasting, but if he also doesn’t have an appetite for questioning basic, emotionally-charged beliefs he will often be at a disadvantage relative to a less intelligent person who has a greater capacity for self-critical thinking.”

Superforecasters have a real capacity for self-critical thinking. Political, economic and religious ideology is mostly beside the point. Instead, they are actively open-minded, e.g., “beliefs are hypotheses to be tested, not treasures to be protected.” Tetlock asserts that politicians and partisan pundits opining on all sorts of things routinely fall prey to (i) not checking their assumptions against reality, (ii) making predictions that can’t be measured for success or failure, and/or (iii) knowingly lying to advance their agendas. Politicians, partisan pundits and experts are usually wrong because of their blinding ideological rigidity and/or self- or group-interest and the intellectual dishonesty that accompanies those mindsets. Given the nature of political rhetoric that dominates the two-party system and the biology of human cognition, it is reasonable to argue that most of what is said or written about politics is more spin (meaningless rhetoric or lies-deceit) than not.

  Questions: Is Tetlock’s finding of superforecasters real, and if so, does that point to a meaningful (teachable) human potential to at least partially rationalize politics for individuals, groups, societies or nations? Why or why not? After the WMD disaster in Iraq, US intelligence agencies funded Tetlock’s research and then adopted his techniques to assess intelligence analysts: Will “better” intelligence assessments reduce the frequency or magnitude of mistakes that rigid liberal or conservative politicians tend to make? Would it be any different if the politicians calling the shots are centrists, moderates, socialists, libertarians, anarchists or cognitive science-based pragmatists focused on problem solving with little or no regard to political ideology or economic theory?

The moral palette of political ideology

Original Biopolitics and Bionews post: August 30, 2016

 In his book, The Righteous Mind: Why Good People Are Divided By Politics And Religion, Jonathan Haidt (pronounced 'height') described Moral Foundations Theory. The theory is an anthropology-based hypothesis that Haidt and another psychologist, Craig Joseph, developed to explain differences in moral reasoning and beliefs between liberals, conservatives and others. The theory posits that there's more to morality than just harm and fairness. It posits that six moral concepts or foundations shape our beliefs, reason and behaviors in politics and other areas of life. The foundations and their associated intuitions-emotions are (1) harm-care (compassion or lack thereof), (2) fairness-unfairness (anger, gratitude, guilt), (3) loyalty-betrayal (group pride, rage at traitors), (4) authority-subversion (respect, fear), (5) sanctity-degradation (disgust) and (6) liberty-oppression (resentment or hatred at domination).

 The six foundations presumably evolved as response triggers to threats or adaptive challenges our ancestors faced. Modern triggers can differ from what our ancestors faced, e.g., loyalty to a nation or sports team can trigger the loyalty-betrayal moral in some or most people in different ways. Haidt analogizes moral foundations to taste receptors: “. . . . morality is like cuisine: it’s a cultural construction, influenced by accidents of environment and history, but it’s not so flexible that anything goes. . . . . Cuisines vary, but they all must please tongues equipped with the same five taste receptors. Moral matrices vary, but they all must please righteous minds equipped with the same six social receptors.”

 Large surveys led to the observation that, moving across the political spectrum from very liberal to moderate to very conservative, the importance of the care and fairness morals decreased in most people, while the importance of the loyalty, authority and sanctity morals increased. The harm-care and fairness-unfairness morals significantly shape liberal thinking and belief, while the loyalty-betrayal, authority-subversion and sanctity-degradation morals significantly shape conservative minds. Haidt observes that the moral palettes of liberals and conservatives differ enough that you can usually tell one from the other by asking what qualities they would want in their dog, or other questions intended to elicit a response from a specific moral foundation.* This kind of morals-based thinking and preference appears to significantly shape belief related to issues in politics.

 * For example, how much would you need to be paid to stick a tiny, harmless sterile hypodermic needle into (i) your own arm, and (ii) the arm of a child you don't know? For people to whom it matters, that question pair triggers the harm-care moral response, and the answers generally correlate with the influence of the harm-care moral on a person’s politics and beliefs.

  Libertarians & the cerebral style: In one large survey study, Haidt examined the moral foundations that libertarians displayed. Haidt's group reported this: “Libertarians are an increasingly prominent ideological group in U.S. politics . . . . Compared to self-identified liberals and conservatives, libertarians showed 1) stronger endorsement of individual liberty as their foremost guiding principle, and weaker endorsement of all other moral principles; 2) a relatively cerebral as opposed to emotional cognitive style; and 3) lower interdependence and social relatedness. As predicted by intuitionist theories concerning the origins of moral reasoning, libertarian values showed convergent relationships with libertarian emotional dispositions and social preferences.” Iyer R, Koleva S, Graham J, Ditto P, Haidt J (2012) Understanding Libertarian Morality: The Psychological Dispositions of Self-Identified Libertarians. PLoS ONE 7(8):e42366. doi:10.1371/journal.pone.0042366

 Morals-based politics is another avenue toward understanding innate, intractable differences between adherents of differing ideologies. What is interesting and important about this study are the observations that (i) libertarians are an increasingly prominent group, and (ii) they show “a relatively cerebral as opposed to emotional cognitive style.” Both are evidence that groups of Americans can and do adopt a new political ideology and can apply conscious reason, i.e., “Haidt’s rider” (conscious or cerebral reasoning), to their politics to a measurably higher degree than other groups that operate under a more “emotional cognitive style,” i.e., cognition more dominated by unconscious intuition (Haidt's elephant).

  Questions: How convincing is the argument that libertarians use a relatively cerebral (conscious reason) style compared to liberals and/or conservatives who are asserted to employ a more “emotional cognitive style” (unconscious intuition) in thinking about politics? Would a more cerebral style necessarily be better? Is the moral foundations theory persuasive or is it still only an academic hypothesis with little real world relevance?

Book review: The Righteous Mind

March 16, 2019

The Righteous Mind: Why Good People are Divided by Politics and Religion, Jonathan Haidt, Pantheon Books, 2012. Dr. Haidt is a social psychologist and Professor of Ethical Leadership at NYU’s Stern School of Business. He wrote The Righteous Mind to “at least do what we can to understand why we are so easily divided into hostile groups, each one certain of its righteousness.” He explains: “My goal in this book is to drain some of the heat, anger, and divisiveness out of these topics and replace them with awe, wonder, and curiosity.”

In view of America’s increasing political polarization, Haidt clearly has his work cut out for him. To find answers, Haidt focuses on the inherent moralistic, self-righteous nature of human cognition and thinking about politics and religion. Through the ages, there have been three basic conceptions of the roles of reason (~ conscious logic) and passion (unconscious intuition, emotion) in human thinking and behavior. Plato (~428-348 BC) argued that reason dominated in intellectual elites called “philosophers,” but that average people were mostly controlled by their passions. David Hume (1711-1776) argued that reason or conscious thinking was nothing more than a slave to human passions. Thomas Jefferson (1743-1826) argued that reason and passions were about equal in their influence.

According to Haidt, the debate is over: “Hume was right. The mind is divided into parts, like a rider (controlled processes) on an elephant (automatic processes). The rider evolved to serve the elephant. . . . . intuitions come first, strategic reasoning second. Therefore, if you want to change someone’s mind about a moral or political issue, talk to the elephant first.”

Our intuitive (unconscious) morals and judgments tend to be more subjective, personal and emotional than objective and rational (conscious). Haidt points out that we are designed by evolution to be “narrowly moralistic and intolerant.” That leads to self-righteousness and the associated hostility and distrust of other points of view that the trait generates. Regarding the divisiveness of politics, Haidt asserts that “our righteous minds guarantee that our cooperative groups will always be cursed by moralistic strife.”

Our unconscious “moral intuitions (i.e., judgments) arise automatically and almost instantaneously, long before moral reasoning has a chance to get started, and those first intuitions tend to drive our later reasoning.” Initial intuitions driving later reasoning exemplifies some of our many unconscious cognitive biases, e.g., ideologically-based motivated reasoning, which distorts both facts we become aware of and the common sense we apply to the reality we think we see.

The book’s central metaphor “is that the mind is divided, like a rider on an elephant, and the rider’s job is to serve the elephant. The rider is our conscious reasoning—the stream of words and images of which we are fully aware. The elephant is the other 99 percent of mental processes—the ones that occur outside of awareness but that actually govern most of our behavior.”

Haidt observes that there are two different sets of morals and rhetorical styles that tend to characterize liberals and conservatives: “Republicans understand moral psychology. Democrats don’t. Republicans have long understood that the elephant is in charge of political behavior, not the rider, and they know how elephants work. Their slogans, political commercials and speeches go straight for the gut . . . . Republicans don’t just aim to cause fear, as some Democrats charge. They trigger the full range of intuitions described by Moral Foundations Theory.”

The problem: On reading The Righteous Mind, the depth and breadth of the problem for politics become uncomfortably clear for anyone hoping to ever find a way to rationalize politics. Haidt sums it up nicely: “Western philosophy has been worshiping reason and distrusting the passions for thousands of years. . . . I’ll refer to this worshipful attitude throughout this book as the rationalist delusion. I call it a delusion because when a group of people make something sacred, the members of the cult lose the ability to think clearly about it. Morality binds and blinds. The true believers produce pious fantasies that don’t match reality, and at some point somebody comes along to knock the idol off its pedestal. . . . . We do moral reasoning not to reconstruct why we ourselves came to a judgment; we reason to find the best possible reasons why somebody else ought to join us in our judgment. . . . . The rider is skilled at fabricating post hoc explanations for whatever the elephant has just done, and it is good at finding reasons to justify whatever the elephant wants to do next. . . . . We make our first judgments rapidly, and we are dreadful at seeking out evidence that might disconfirm those initial judgments.”

In other words, conscious reason (the rider) serves unconscious intuition and that’s the powerful but intolerant and moralistic beast that Haidt calls the elephant.

Two additional observations merit mention. First, Haidt points out that “traits can be innate without being hardwired or universal. The brain is like a book, the first draft of which is written by the genes during fetal development. No chapters are complete at birth . . . . But not a single chapter . . . . consists of blank pages on which a society can inscribe any conceivable set of words. . . . Nature provides a first draft, which experience then revises. . . . . ‘Built-in’ does not mean unmalleable; it means organized in advance of experience.”

Second, Haidt asserts that Hume “went too far” by arguing that reason is merely a “slave” of the passions. He argues that although intuition dominates, it is “neither dumb nor despotic” and it “can be shaped by reasoning.” He likens the situation to that of a lawyer (the rider) and a client (the elephant). Sometimes the lawyer can talk the client out of doing something dumb, sometimes not. The elephant may be a big, powerful beast, but it’s not stupid and it can learn. Haidt’s assertion that we “will always be cursed by moralistic strife” is his personal moral judgment that our intuitive, righteous nature is a curse, not a blessing or a source of wisdom. In this regard, his instinct is closer to Plato’s moral judgment about how things ought to be than to Hume’s or Jefferson’s. Or, at least that’s how I read it.

Questions: Does Haidt’s portrayal of the interplay between unconscious intuition and morals and conscious reason or common sense seem reasonable? Are human societies doomed (or blessed), for better or worse, to forever rely on the moralistic, unconscious processes that have characterized politics since humans invented it thousands of years ago? Does Haidt’s vision of human cognition reasonably accord with the vision that Norretranders portrayed in his book, The User Illusion?

Is it possible that Jefferson was closer to the mark than Hume, and if not, could that become possible in a society that largely operates under a set of morals or political principles explicitly designed to tip the balance of power from the elephant to the rider? Can anyone ever rise to the level of one of Plato’s enlightened philosophers, and if so, is that a good thing or not?

Original Biopolitics and Bionews post: August 29, 2016; DP posts: 3/16/19, 4/9/20