
Saturday, March 16, 2019

Book notes: Superforecasting - The Art And Science Of Prediction

Original Biopolitics and Bionews post: September 1, 2016

  Philip E. Tetlock - Superforecasting: The Art and Science of Prediction, Crown Publishers, 2015

Notes: System 1 refers to our powerful unconscious thinking and the biases and morals that shape it (Jonathan Haidt's "elephant")
System 2 refers to our weak conscious thinking, "reason" or "common sense", including the unconscious biases that are embedded in it (Haidt's "rider")
  Foxes: People having a relatively open-minded mindset (described in Tetlock's first book, Expert Political Judgment: How Good Is It? How Can We Know?)
  Hedgehogs: People with a more closed mindset

  Book notes Chapter 1: An optimistic skeptic
p. 3: regarding expert opinions, there is usually no accurate measurement of how good they are, there are “just endless opinions - and opinions on opinions. And that is business as usual.”; the media routinely delivers, or corporations routinely pay for, opinions that may be accurate, worthless or in between and everyone makes decisions on that basis
 p. 5: talking head talent is skill in telling a compelling story, which is sufficient for success; their track record is largely irrelevant to that success - most of them are about as good as random guessing; predictions are time-sensitive - 1-year predictions tend to beat guessing more than 5- or 10-year projections
 p. 8-10: there are limits on what is predictable → in nonlinear systems, e.g., weather patterns, a small initial condition change can lead to huge effects (chaos theory); we cannot see very far into the future (maybe 18-36 months?)
 p. 13-14: predictability and unpredictability coexist; a false dichotomy is saying the weather is unpredictable - it is usually relatively predictable 1-3 days out, but at days 4-7 accuracy usually declines to near-random; weather forecasters are slowly getting better because they are in an endless forecast-measure-revise loop ("perpetual beta" mode); prediction consumers, e.g., governments, businesses and regular people, don’t demand evidence of accuracy, so it isn’t available, and that means no revision, which means no improvement
 p. 15: Bill Gates' observation: surprisingly often a clear goal isn't specified, so it is impossible to drive progress toward the goal; that is true in forecasting; some forecasts are meant to (1) entertain, (2) advance a political agenda, or (3) reassure the audience that their beliefs are correct and the future will unfold as expected (this kind is popular with political partisans)
 p. 16: the lack of rigor in forecasting is a huge social opportunity; to seize it (i) set the goal of accuracy and (ii) measure success and failure
 p. 18: the Good Judgment Project found two things, (1) foresight is real and some people have it and (2) it isn’t strictly a talent from birth - (i) it boils down to how people think, gather information and update beliefs and (ii) it can be learned and improved
 p. 21: from a 1954 book - analysis of 20 studies showed that algorithms based on objective indicators were better predictors than well-informed experts; more than 200 later studies have confirmed that and the conclusion is simple - if you have a well-validated statistical algorithm, use it
 p. 22: machines may never be able to beat talented humans at this, so dismissing human judgment as just subjective goes too far; maybe the best that can be done will come from human-machine teams, e.g., Garry Kasparov and Deep Blue working together against a machine or a human alone
 p. 23: quoting David Ferrucci, chief engineer of IBM's Watson, who is optimistic: “‘I think it’s going to get stranger and stranger’ for people to listen to the advice of experts whose views are informed only by their subjective judgment.”; Tetlock: “. . . . we will need to blend computer-based forecasting and subjective judgment in the future. So it’s time to get serious about both.”

  Chapter 2: Illusions of knowledge
p. 25: regarding a medical diagnosis error: “We have all been too quick to make up our minds and too slow to change them. And if we don’t examine how we make these mistakes, we will keep making them. This stagnation can go on for years. Or a lifetime. It can even last centuries, as the long and wretched history of medicine illustrates.”

 p. 30: “It was the absence of doubt - and scientific rigor - that made medicine unscientific and caused it to stagnate for so long.”; it was an illusion of knowledge - if the patient died, he was too sick to be saved, but if he got better, the treatment worked - there was no controlled data to support those beliefs; for decades, physicians resisted the idea of randomized, controlled trials as proposed in 1921 because they (falsely) believed their subjective judgments revealed the truth

 p. 35: on Daniel Kahneman’s (Nobel laureate) fast System 1: “A defining feature of intuitive judgment is its insensitivity to the quality of the evidence on which the judgment is based. It has to be that way. System 1 can only do its job of delivering strong conclusions at lightning speed if it never pauses to wonder whether the evidence at hand is flawed or inadequate, or if there is better evidence elsewhere.” - context - instantly running away from a Paleolithic shadow that might be a lion; Kahneman calls these tacit assumptions or biases WYSIATI (what-you-see-is-all-there-is); System 1 judgments take less than 1 sec. - there’s no time to think about things; regarding coherence: “. . . . we are creative confabulators hardwired to invent stories that impose coherence on the world.”

 p. 38-39: confirmation bias: (i) seeking evidence to support the 1st plausible explanation, (ii) rarely seeking contradictory evidence and (iii) being a motivated skeptic in the face of contrary evidence and finding even weak reasons to denigrate it or reject it entirely, e.g., a doctor’s belief that a quack medical treatment works for all but the incurable

 p. 40: attribute substitution, availability heuristic or bait and switch: one question may be difficult or unanswerable without more info, so the unconscious System 1 substitutes another, easier, question and treats the easy question’s answer as if it were the hard question’s answer, even when it is wrong; CLIMATE CHANGE EXAMPLE: people who cannot figure out climate change on their own substitute what most climate scientists believe for their own belief - it can be wrong (Me: it can also be right -- how does the non-expert assess technology beyond one's capacity to evaluate it?)

 p. 41-42: “The instant we wake up and look past the tip of our nose, sights and sounds flow into the brain and System 1 is engaged. This system is subjective, unique to each of us.”; cognition is a matter of blending inputs from System 1 and 2 - in some people, System 1 has more dominance than in others; it is a false dichotomy to see it as System 1 or System 2 operating alone; pattern recognition: System 1 alone can make very good or bad snap judgments and the person may not know why - bad snap judgment or false positive = seeing the Virgin Mary in burnt toast (therefore, slowing down to double check intuitions can help)

 p. 44: tip of the nose perspective is why doctors did not doubt their own beliefs for thousands of years (ME: and that kept medical science mostly in the dark ages until after the end of WWII)

  Chapter 3: Keeping Score
p. 48: it is not unusual that a forecast that may seem dead right or wrong really cannot be “conclusively judged right or wrong”; details of a forecast may be absent and the forecast can’t be scored, e.g., no time frames, geographic locations, reference points, definition of success or failure, definition of terms, a specified probability of events (e.g., 68% chance of X) or lack thereof or many comparison forecasts to assess the predictability of what is being forecasted;
 p. 53: “. . . . vague verbiage is more the rule than the exception.”
 p. 55: security experts were asked what the term “serious possibility” meant in a 1951 National Intelligence Estimate → one analyst said it meant 80 to 20 (4 times more likely than not), another said it meant 20 to 80 and others said it was in between those two extremes → ambiguous language is ~useless, maybe more harmful than helpful
 p. 50-52: national security experts had views split along liberal and conservative lines about the Soviet Union and future relations; they were all wrong and Gorbachev came to power and de-escalated nuclear and war tensions; after the fact, all the experts claimed they could see it coming all along; “But the train of history hit a curve, and as Karl Marx once quipped, the intellectuals fall off.”; the experts were smart and well-informed, but they were just misled by System 1’s subjectivity (tip of the nose perspective)
 p.58-59: the U.S. intelligence community resisted putting definitions and specified probabilities in their forecasts until finally, 10 years after the WMD fiasco with Saddam Hussein, the case for precision was so overwhelming that they changed; “But hopelessly vague language is still so common, particularly in the media, that we rarely notice how vacuous it is. It just slips by.”
p. 60-62: calibration: perfect calibration = events you give an X% chance to actually happen X% of the time, e.g., rainfall; calibration requires many forecasts for the assessment and is thus impractical for rare events, e.g., presidential elections; underconfidence = the prediction is an X% chance, but events of that kind happen more often than X%; overconfidence = the prediction is an X% chance, but events of that kind happen less often than X%
 p. 62-66: the two facets of good judgment are captured by calibration and resolution; resolution: high resolution means confidently assigning very low (< ~20%) or very high (> ~80%) probabilities and being right; accurately predicting rare events gets more weight than accurately predicting more common events; a low Brier score is best: 0.0 is perfect, 0.5 is random guessing and 2.0 means getting every yes-or-no prediction maximally wrong; however, a score of 0.2 in one circumstance, e.g., weather prediction in Phoenix, AZ, looks bad, while a score of 0.2 in Springfield, MO is great because the weather there is far less predictable than in Phoenix; apples-to-apples comparisons are necessary, but that kind of data usually doesn’t exist (Me: society is dismally data-poor)
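
 A minimal sketch of the Brier scoring just described, in Python. It uses the two-sided 0-to-2 form that matches the benchmarks in these notes (0.0 perfect, 0.5 random guessing, 2.0 always maximally wrong); the example forecasts are made-up toy numbers, not data from the book.

```python
# Minimal sketch (not from the book) of the Brier score in its two-sided 0-to-2 form.
# Lower is better.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities assigned to 'the event happens';
    outcomes: 1 if it happened, 0 if it did not."""
    total = 0.0
    for p, o in zip(forecasts, outcomes):
        total += (p - o) ** 2 + ((1 - p) - (1 - o)) ** 2
    return total / len(forecasts)

# Toy checks against the benchmarks above:
print(brier_score([1.0, 1.0], [1, 1]))  # 0.0 - certain and always right
print(brier_score([0.5, 0.5], [1, 0]))  # 0.5 - 50/50 guessing every time
print(brier_score([1.0, 0.0], [0, 1]))  # 2.0 - certain and always wrong
```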
 p. 68: In Expert Political Judgment (Tetlock's first book), the bottom line was that some experts were marginally better than random guessing - the common characteristic was how they thought, not their ideology, Ph.D. or not, or access to classified information; the typical expert was about as good as random guessing and their thinking was ideological; “They sought to squeeze complex problems into the preferred cause-effect templates and treated what did not fit as irrelevant distractions. Allergic to wishy-washy answers, they kept pushing their analyses to the limit (and then some), using terms like “furthermore” and “moreover” when piling up reasons why they were right and others were wrong. As a result, they were confident to declare things “impossible” or “certain.” Committed to their conclusions, they were reluctant to change their minds even when their predictions clearly failed. They would tell us, ‘Just wait.’”
 p. 69: “The other group consisted of more pragmatic experts who drew on many analytical tools, with the choice of tool hinging on the particular problem they faced. . . . . They talked about possibilities and probabilities, not certainties.”
 p. 69: “The fox knows many things but the hedgehog knows one big thing. . . . . Foxes beat hedgehogs on both calibration and resolution. Foxes had real foresight. Hedgehogs didn’t. . . . . How did hedgehogs manage to do slightly worse than random guessing?”; hedgehog example is CNBC’s Larry Kudlow and his supply side economics Big Idea in the face of the 2007 recession
 p. 70-72: on Kudlow: “Think of that Big Idea as a pair of glasses that the hedgehog never takes off. . . . And, they aren’t ordinary glasses. They are green-tinted glasses . . . . Everywhere you look, you see green, whether it’s there or not. . . . . So the hedgehog’s one Big Idea doesn’t improve his foresight. It distorts it.”; more information helps increase hedgehog confidence, not accuracy; “Not that being wrong hurt Kudlow’s career. In January 2009, with the American economy in a crisis worse than any since the Great Depression, Kudlow’s new show, The Kudlow Report, premiered on CNBC. That too is consistent with the Expert Political Judgment data, which revealed an inverse correlation between fame and accuracy: the more famous an expert was, the less accurate he was.”; “As anyone who has done media training knows, the first rule is keep it simple, stupid. . . . . People tend to find uncertainty disturbing and “maybe” underscores uncertainty with a bright red crayon. . . . . The simplicity and confidence of the hedgehog impairs foresight, but it calms nerves - which is good for the careers of hedgehogs. . . . Foxes don’t fare so well in the media. . . . This aggregation of many perspectives is bad TV.”
 p. 73: an individual who makes a one-off accurate guess is different from people who guess accurately consistently; consistency is based on aggregation, i.e., the recognition that useful info is widely dispersed and each bit needs a separate weighting for importance and relevance
 p. 74: on information aggregation: “Aggregating the judgments of people who know nothing produces a lot of nothing.” (Hm - what about Disqus channels that demand that all voices and POVs be heard, informed or not?); the bigger the collective pool of accurate information, the better the prediction or assessment; Foxes tend to aggregate, Hedgehogs don’t
 p. 76-77: aggregation: looking at a problem from only one perspective, e.g., pure logic, can lead to an incorrect answer; multiple perspectives are needed; using both logic and psycho-logic (psychology or human cognition) helps; some people are lazy and don’t think, some apply logic to some degree and then stop, while others pursue logic to its final conclusion → aggregate all of those inputs to arrive at the best answer; “Foxes aggregate perspectives.”
 p. 77-78: on human cognition - we don’t aggregate perspectives naturally: “The tip-of-your-nose perspective insists that it sees reality objectively and correctly, so there is no need to consult other perspectives.”
 p. 79-80: on perspective aggregation: “Stepping outside ourselves and really getting a different view of reality is a struggle. But Foxes are likelier to give it a try.”; people’s temperaments fall along a spectrum from the rare pure Foxes to the rare pure Hedgehogs; “And our thinking habits are not immutable. Sometimes they evolve without our awareness of the change. But we can also, with effort, choose to shift gears from one mode to another.”

  Chapter 4: Superforecasters
p. 84-85: the U.S. intelligence community (IC) is, like every huge bureaucracy (about 100,000 people, about $50 billion budget), very change-resistant - it saw and acknowledged its colossal failure to predict the Iranian revolution, but did little or nothing to address its dismal capacity to predict situations and future events; the WMD-Saddam Hussein disaster 22 years later finally inflicted a big enough shock to get the IC to seriously introspect
 p. 88 (book review comment): my Intelligence Advanced Research Projects Agency work isn’t as exotic as the Defense Advanced Research Projects Agency’s, but it can be just as important
 p. 89: humans “will never be able to forecast turning points in the lives of individuals or nations several years into the future - and heroic searches for superforecasters won’t change that.”; the approach: “Quit pretending you know things you don’t and start running experiments.” (ME: the argument for evidence-based politics)
 p. 90-93: the shocker: although the detailed result is classified (it’s gov’t-funded IARPA research), Good Judgment Project (GJP: https://www.gjopen.com/) volunteers who passed screening and used simple algorithms, but had no access to classified information, beat government intelligence analysts who did have access to classified information; one contestant (a retired computer programmer) had a Brier score of 0.22, 5th best among the 2,800 GJP participants, and then, in a later competition among the best forecasters, his score improved to 0.14, the best among the initial group of 2,800 → he beat the commodities futures markets by 40% and the “wisdom of the crowd” control group by 60% (ME: hire this person and get rich)
 p. 94-95: the best forecasters were more accurate looking 300 days out than regular forecasters were looking 100 days out, and that edge improved over the 4-year GJP experiment: “. . . . these superforecasters are amateurs forecasting global events in their spare time with whatever information they can dig up. Yet they somehow managed to set the performance bar high enough that even the professionals have struggled to get over it, let alone clear it with enough room to justify their offices, salaries and pensions.”
 p. 96: on IARPA’s willingness to critically self-assess after the WMD disaster in Iraq: “And yet, IARPA did just that: it put the intelligence community’s mission ahead of the people inside the intelligence community - at least ahead of those insiders who didn’t want to rock the bureaucratic boat.”
 p. 97-98: “But it’s easy to misinterpret randomness. We don’t have an intuitive feel for it. Randomness is invisible from the tip-of-your-nose perspective. We can see it only if we step outside of ourselves.”; people can be easily tricked into believing that they can predict entirely random outcomes, e.g., guessing coin tosses; “. . . . delusions of this sort are routine. Watch business news on television, where talking heads are often introduced with a reference to one of their forecasting references . . . . And yet many people take these hollow claims seriously.” (bloviation & blither sells)
p. 99: “Most things in life involve skill and luck, in varying proportions.”
 p. 99-101: regression to the mean cannot be overlooked and is a necessary tool for testing the role of luck in performance → regression is slow for activities dominated by skill, e.g., forecasting, and fast for activities dominated by chance/ randomness, e.g., coin tossing
 p. 102-103: a key question is how superforecasters held up across the years → in years 2 and 3, superforecasters were the opposite of regressing to the mean -- they got better; sometimes causal connections are nonlinear and thus not predictable, and some of that had to be present among the variables the forecasters were facing → there should be some regression unless an offsetting process is increasing forecasters’ performance; there is some regression - about 30% of superforecasters (roughly 1 in 3) fall out of the top 2% each year, but 70% stay in, and the individual year-to-year correlation is about 0.65, which is pretty high → Q: Why are these people so good?
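
 A small, illustrative Python simulation of the regression-to-the-mean point above (not from the book): when yearly performance is a mix of stable skill and fresh luck, the skill share sets the year-to-year correlation and thus how fast scores regress. The 0.6/0.4 skill-luck weights and the population size are arbitrary assumptions; they happen to land near the ~0.65 correlation mentioned above.

```python
# Illustrative simulation (not from the book): year-to-year correlation when yearly
# performance mixes stable skill with fresh luck. All weights are arbitrary assumptions.
import random

random.seed(1)
N = 10_000
skill = [random.gauss(0, 1) for _ in range(N)]

def season(skill_weight):
    # one "year" of performance = weighted skill + independent luck
    return [skill_weight * s + (1 - skill_weight) * random.gauss(0, 1) for s in skill]

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Skill-dominated (forecasting-like): high year-to-year correlation, slow regression.
print(round(corr(season(0.6), season(0.6)), 2))
# Pure luck (coin-toss-like): correlation near zero, immediate full regression.
print(round(corr(season(0.0), season(0.0)), 2))
```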

  Chapter 5: Supersmart?
p. 114: Fermi-izing questions, i.e., breaking a question into relevant parts, allows better guesses, e.g., how many piano tuners are there in Chicago → guess the total population, the total # of pianos, the time to tune one piano and the hours/year a tuner works → that technique usually increases accuracy a lot, even when none of the numbers is known; Fermi-izing tends to defuse the unconscious System 1’s tendency to bait & switch the question (a minimal sketch appears after these chapter notes); EXAMPLE: would testing Arafat’s body 6 years after his death reveal the presence of polonium (Po), which is allegedly what killed him? → Q1: can you even detect Po 6 years later? Q2: if Po is still detectable, how could it have happened, e.g., Israel, or Palestinian enemies before or after his death → for this question the outside view, i.e., what % of exhumed bodies are found to be poisoned, is hard to (i) identify and (ii) find the answer to, but identifying it is most important; it’s not certain (< 100%, say 80%), but there has to be more than trivial evidence, otherwise authorities would not allow his body to be exhumed (> 20%) → use the 20-80% halfway point of 50% as the outside view, then adjust the probability up or down based on research and the inside or intuitive System 1 view → that’s using a blend of unconscious intuition plus conscious reason → personal political ideology has little or nothing to do with it
 p. 118: superforecasters look at questions first from Kahneman’s “outside view”, i.e., the statistical or historical base rate or norm (the anchor), and second use the inside view to adjust probabilities up or down → System 1 generally goes straight to the comfortable but often wrong inside view and ignores the outside view; will a Vietnam-China border clash start in the next year? → the first (outside) view asks how many clashes there have been over time, e.g., one every 5 years, and then merges in the second (inside) view of current Vietnam-China politics to adjust the baseline probability up or down
 p. 120: the outside view has to come first; “And it’s astonishingly easy to settle on a bad anchor.”; good anchors are easier to find from the outside view than from the inside
 p. 123-124: some superforecasters kept explaining in the Good Judgment Project online forum how they approached problems and what their thinking was, and kept asking for criticisms, i.e., they were looking for other perspectives; simply asking whether a judgment is wrong tends to improve the first judgment; “The sophisticated forecaster knows about confirmation bias and will seek out evidence that cuts both ways.”
 p. 126: “A brilliant puzzle solver may have the raw material for forecasting, but if he also doesn’t have an appetite for questioning basic, emotionally-charged beliefs he will often be at a disadvantage relative to a less intelligent person who has a greater capacity for self-critical thinking.”
 p. 127: “For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded.”
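
 A minimal Python sketch of the Fermi-izing idea from the p. 114 note, using the classic piano-tuners decomposition. Every number is an illustrative guess, not data; the point is the decomposition into parts one can roughly estimate, not the particular answer.

```python
# Fermi-izing the piano-tuner question: break one unanswerable question into parts
# you can roughly guess. Every number below is an illustrative guess.

chicago_population = 2_700_000          # rough guess
people_per_household = 2.5
households = chicago_population / people_per_household

share_of_households_with_piano = 0.05   # guess: ~1 in 20
pianos = households * share_of_households_with_piano

tunings_per_piano_per_year = 1
hours_per_tuning = 2                    # including travel
tuner_hours_per_year = 40 * 50          # a full-time tuner

demand_hours = pianos * tunings_per_piano_per_year * hours_per_tuning
tuners_needed = demand_hours / tuner_hours_per_year
print(round(tuners_needed))             # on the order of dozens of tuners
```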

  Chapter 6: Superquants?
p. 128-129: most superforecasters are good at math, but mostly they rely on subjective judgment; one super said this: “It’s all, you know, balancing, finding relevant information and deciding how relevant is this really?”; it’s not math skill that counts most - it’s nuanced subjective judgment
 p. 138-140: we crave certainty, and that’s why Hedgehogs and their confident yes or no answers on TV are far more popular and comforting than Foxes with their discomforting “on the one hand . . . but on the other” style; people equate confidence with competence; “This sort of thinking goes a long way to explaining why so many people have a poor grasp of probability. . . . The deeply counterintuitive nature of statistics explains why even very sophisticated people often make elementary mistakes.”; a forecast of a 70% chance of X happening means there is a 30% chance it won’t - that fact is lost on most people → most people translate an 80% chance of X to mean X will happen, and that just isn’t so; only when probabilities are closer to even, maybe about 65:35 to 35:65 (p. 144), does the translation for most people become “maybe” X will happen, which is the intuitively uncomfortable translation of the uncertainty associated with most everything
 p. 143: superforecasters tend to be probabilistic thinkers, e.g., Treasury secretary Robert Rubin; epistemic uncertainty describes something unknown but theoretically knowable, while aleatory uncertainty is both unknown and unknowable
 p. 145-146: superforecasters who used more granularity, e.g., a 20, 21 or 22% chance of X, tended to be more accurate than those who used 5% increments, and they in turn tended to be more accurate than those who used 10% increments, e.g., 20%, 30% or 40%; when estimates were rounded to the nearest 5% or 10%, the granular best superforecasters fell into line with all the rest, i.e., there was real precision in those more granular 1% increment predictions (see the rounding sketch after these chapter notes)
 p. 148-149: “Science doesn’t tackle “why” questions about the purpose of life. It sticks to “how” questions that focus on causation and probabilities.”; “Thus, probabilistic thinking and divine-order thinking are in tension. Like oil and water, chance and fate do not mix. And to the extent we allow our thoughts to move in the direction of fate, we undermine our ability to think probabilistically. Most people tend to prefer fate.”
 p. 150: the sheer improbability of something that does happen, e.g., meeting and marrying your spouse, is often attributed to fate or God’s will, not to the understanding that sooner or later most people marry someone at some point in their lives; the following psycho-logic is “incoherent”, i.e., not logic: (1) the chance of meeting the love of my life was tiny, (2) it happened anyway, (3) therefore it was meant to be and (4) therefore the probability it would happen was 100%
 p. 152: scoring for the tendency to accept or reject fate and accept probabilities instead, average Americans are mixed or about 50:50, undergrads are somewhat more biased toward probabilities, and superforecasters are the most grounded in probabilities while rejecting fate as an explanation; the more inclined a forecaster was to believe things are destined or fate, the less accurate their forecasts were, while probability-oriented forecasters tended to have the highest accuracy → the correlation was significant
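
 A small sketch (not the book's data or method, just an illustration) of the rounding test described in the p. 145-146 note: take granular forecasts, round them to the nearest 5% and 10%, and recompute the Brier score to see how much information the extra precision carried. The forecasts and outcomes below are hypothetical.

```python
# Sketch of the rounding test from the p. 145-146 note (illustrative only).

def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 + ((1 - p) - (1 - o)) ** 2
               for p, o in zip(forecasts, outcomes)) / len(forecasts)

def rounded(forecasts, increment):
    # round each forecast to the nearest multiple of `increment`
    return [round(p / increment) * increment for p in forecasts]

# Hypothetical granular forecasts and outcomes:
forecasts = [0.08, 0.18, 0.72, 0.92]
outcomes = [0, 0, 1, 1]

print(round(brier_score(forecasts, outcomes), 3))                 # 1% granularity
print(round(brier_score(rounded(forecasts, 0.05), outcomes), 3))  # nearest 5%
print(round(brier_score(rounded(forecasts, 0.10), outcomes), 3))  # nearest 10%
# With these made-up numbers the rounded forecasts score worse (higher); Tetlock's
# point is that for the best forecasters the lost precision was real information.
```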

  Chapter 7: Supernewsjunkies?
p. 154-155: based on news flowing in, superforecasters tended to update their predictions, and that tended to improve accuracy; it isn’t just a matter of following the news and changing output from sufficient new input - their initial forecasts were 50% more accurate than regular forecasters’
 p. 160: belief perseverance = people “rationalizing like crazy to avoid acknowledging new information that upsets their settled beliefs.” → extreme obstinacy, e.g., the fact that something someone predicted didn’t happen is taken as evidence that it will happen
 p. 161-163: on underreacting to new information: “Social psychologists have long known that getting people to publicly commit to a belief is a great way to freeze it in place, making it resistant to change. The stronger the commitment, the greater the resistance.”; perceptions are a matter of our “identity”; “. . . . people’s views on gun control often correlate with their views on climate change, even though the two issues have no logical connection to each other. Psycho-logic trumps logic.”; “. . . . superforecasters may have a surprising advantage: they’re not experts or professionals, so they have little ego invested in each forecast.”; consider “career CIA analysts or acclaimed pundits with their reputations on the line.” (my observation: once again, ego rears its ugly head and the output is garbage - check your ego at the door)
 p. 164: on overreacting to new information: dilution effect = irrelevant or noise information can and often does change perceptions of probability, and that leads to mistakes; frequent forecast updates based on small “units of doubt” (small increments) seem to minimize both overreacting and underreacting; balancing new information against the information that drove the original or earlier updates captures the value of all the information
 p. 170: Bayes’ theorem: new/updated belief = prior belief × the diagnostic value of the new information (in odds form: posterior odds = prior odds × likelihood ratio); most superforecasters intuitively understand Bayes’ theorem but can’t write the equation down, nor do they actually use it - instead they use the concept and weigh updates based on the value of new information
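
 A minimal Python sketch of the Bayes-style updating described in the p. 170 note, in odds form (posterior odds = prior odds × likelihood ratio). The probabilities are illustrative; the point is the proportionate, incremental update rather than a lurch to certainty.

```python
# Minimal sketch of the odds form of Bayes' theorem:
# posterior odds = prior odds x likelihood ratio (the "diagnostic value" of the
# new information). All numbers are illustrative.

def bayes_update(prior_prob, p_evidence_if_true, p_evidence_if_false):
    prior_odds = prior_prob / (1 - prior_prob)
    likelihood_ratio = p_evidence_if_true / p_evidence_if_false
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)  # convert back to a probability

# Start at 30%; the new evidence is twice as likely if the hypothesis is true.
print(round(bayes_update(0.30, 0.60, 0.30), 3))  # ~0.462 - a measured update, not "certain"
```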

  Chapter 8: Perpetual Beta
p. 174-175: two basic mindsets - the growth mindset holds that you can learn and grow through hard work; the fixed mindset holds that you have what you were born with and that innate talents can be revealed but not created or developed, e.g., fixed mindsetters say things like “I’m bad at math”, and it becomes a self-fulfilling prophecy; fixed mindset children given harder puzzles give up and lose interest, while growth mindset kids love the challenge because for them learning is a priority
 p. 178: consistently inconsistent - John Maynard Keynes engaged in an endless cycle of try, fail, analyze, adjust, try again; he retired wealthy from his investing, despite massive losses in the Great Depression and other personal blunders; skills improve with practice
 p. 181-183: prompt feedback on forecasts is necessary for improvement, but it’s usually lacking - experience alone doesn’t compensate - experienced police gain confidence that they are good at spotting liars, but it isn’t true because they don’t improve with time; most forecasters get little or no feedback because (1) their language is ambiguous and their forecasts are thus not precise enough to evaluate - self-delusion is a real concern - and (2) there’s usually a long time lag between a forecast and the feedback on success or failure - with time a person forgets the details of their own forecasts, and hindsight bias distorts memory, which makes it worse; vague language is elastic and people read into it what they want; hindsight bias = knowing the outcome of an event distorts our perception of what we thought we knew before the outcome; experts succumb to it all the time, e.g., predictions of a loss of the communist power monopoly in the Soviet Union before it disintegrated in 1991 vs. after it happened → expert recall was 31% higher than their original estimates (= hindsight bias)
 p. 190: “Superforecasters are perpetual beta.” - they have the growth mindset
 p. 191-192: list of superforecaster tendencies -
 Philosophic outlook: Cautious - things are uncertain; Humble - reality is infinitely complex; Nondeterministic - what happens isn’t meant to be and doesn’t have to happen
 Ability & thinking style: Actively open-minded - beliefs are hypotheses to be tested, not treasures to be protected; Intelligent, knowledgeable & have a need for cognition (conscious thinking) - intellectually curious, like puzzles and challenges
 Forecasting methods: Pragmatic - not wedded to any idea or agenda; Analytical - can step back from the tip-of-nose view and consider other views; Dragonfly-eyed - value diverse views and synthesize them into their own; Probabilistic - judge using many grades or degrees of maybe or chance; Thoughtful updaters - change their minds when the facts change; Good intuitive psychologists - aware of the value of checking personal thinking for cognitive and emotional biases
 Work ethic: Have a growth mindset - believe it’s possible to improve; Have grit - determined to keep at it however long it takes
 Superforecaster traits vary in importance: perpetual beta mode is important → the degree to which supers value updating and self-improvement (growth mindset) is a predictor 3 times more powerful than the next best predictor, intelligence

  Chapter 9: Superteams
p. 201: success can lead to mental habits that undermine the mental habits that led to success in the first place; on the other hand, properly functioning teams can foster dragonfly-eyed perspectives and thinking, which can improve forecasting
 p. 208-209: givers on teams are not chumps - they tend to make the whole team perform better; it is complex and it will take time to work out the psychology of groups - replicating this won’t be easy in the real world; “diversity trumps ability” may be true due to the different perspectives a team can generate or, maybe it’s a false dichotomy and a shrewd mix of ability and diversity is the key to optimum performance

  Chapter 10: The Leader’s Dilemma
p. 229-230: Tetlock uses the German Wehrmacht as an example of how leadership and judgment can be effectively combined, even though it served an evil end → the points being that (i) even evil can operate intelligently and creatively, so don’t underestimate your opponent, and (ii) seeing something as evil and wanting to learn from it presents no logical contradiction, only a psychological tension that superforecasters overcome because they will learn from anyone or anything that has information or lessons of value

  Chapter 11: Are They Really So Super?
p. 232-233: in a 2014 interview, Gen. Michael Flynn, head of the DIA (DoD’s equivalent of the CIA; 17,000 employees), said “I think we’re in a period of prolonged societal conflict that is pretty unprecedented.” - but googling the phrase “global conflict trends” says otherwise; Flynn, like Peggy Noonan and her partisan reading of political events, suffered from the mother of all cognitive illusions, WYSIATI (what-you-see-is-all-there-is) → every day for three hours, Flynn saw nothing but reports of conflicts and bad news; what is important is that Flynn, a highly accomplished and intelligent operative, fell for the most obvious illusion there is → even when we know something is a System 1 cognitive illusion, we sometimes cannot shut it off and see unbiased reality, e.g., the Müller-Lyer optical illusion (two equal lines, one with arrow ends pointing out and one with ends pointing in - the in-pointing arrow line always looks longer, even when you know it isn’t)
 p. 234-237: “. . . . dedicated people can inoculate themselves to some degree against certain cognitive illusions.”; scope insensitivity is a major illusion of particular importance to forecasters - it is another bait & switch bias or illusion where a hard question is unconsciously substituted with a simpler question, e.g., the average amount groups of people would be willing to pay to avoid 2,000, 20,000 or 200,000 birds drowning in oil ponds was the same for each group, $80 in increased taxes → the problem’s scope recedes into the background so much that it becomes irrelevant; the scope insensitivity bias or illusion (Tetlock seems to use the terms interchangeably) is directly relevant to geopolitical problems; surprisingly, superforecasters were less influenced by scope insensitivity than average forecasters - their scope sensitivity wasn’t perfect, but it was good (better than Kahneman guessed it would be); Tetlock’s guess → superforecasters were skilled and persistent in making System 2 corrections of System 1 judgments, e.g., by stepping into the outside view, which dampens System 1 bias and/or ingrains the technique to the point that it is “second nature” for System 1
 p. 237-238: CRITICISM: how long can superforecasters defy psychological gravity? Maybe a long time - one developed software designed to correct System 1 bias in favor of the like-minded, and that helped lighten the heavy cognitive load of forecasting; Nassim Taleb’s Black Swan criticism of all of this is that (i) rare events, and only rare events, change the course of history and (ii) there just aren’t enough occurrences to judge calibration because so few events are both rare and impactful on history; maybe superforecasters can spot a Black Swan and maybe they can’t - the Good Judgment Project (GJP) wasn’t designed to ask that question
 p. 240-241, 244: REBUTTAL OF CRITICISM: history flows from both Black Swan events and from incremental changes; if only Black Swans counted, the GJP would be useful only for short-term projections and would have limited impact on the flow of events over long time frames; and, if time frames are drawn out to encompass a Black Swan, e.g., the one-day storming of the Bastille on July 14, 1789 vs. that day plus the ensuing 10 years of the French revolution, then such events are not so unpredictable - what’s the definition of a Black Swan?; other than the obvious, e.g., there will be conflicts, predictions 10 years out are impossible because the system is nonlinear
 p. 245: “Knowing what we don’t know is better than thinking we know what we don’t.”; “Kahneman and other pioneers of modern psychology have revealed that our minds crave certainty and when they don’t find it, they impose it.”; referring to experts’ revisionist response to the unpredicted rise of Gorbachev: “In forecasting, hindsight bias is the cardinal sin.” - hindsight bias not only makes past surprises seem less surprising, it also fosters belief that the future is more predictable than it is

  Chapter 12: What’s Next?
p. 251: “On the one hand, the hindsight-tainted analyses that dominate commentary after major events are a dead end. . . . . On the other hand, our expectations of the future are derived from our mental models of how the world works, and every event is an opportunity to learn and improve those models.”; the problem is that “effective learning from experience can’t happen without clear feedback, and you can’t have clear feedback unless your forecasts are unambiguous and scoreable.”
 p. 252: “Vague expressions about indefinite futures are not helpful. Fuzzy thinking can never be proven wrong. . . . . Forecast, measure, revise: it is the surest path to seeing better.” - if people see that, serious change will begin; “Consumers of forecasting will stop being gulled by pundits with good stories and start asking pundits how their past predictions fared - and reject answers that consist of nothing but anecdotes and credentials. And forecasters will realize . . . . that these higher expectations will ultimately benefit them, because it is only with the clear feedback that comes with rigorous testing that they can improve their foresight.”
 p. 252-253: “It could be huge - an “evidence-based forecasting” revolution similar to the “evidence-based medicine” revolution, with consequences every bit as significant.”
 p. 253: IS IMPROVEMENT EVEN POSSIBLE?: nothing is certain: “Or nothing may change. . . . . things may go either way.”; whether the future will be the “stagnant status quo” or change “will be decided by the people whom political scientists call the ‘attentive public.’ I’m modestly optimistic.” (Question: is this a faint glimmer of hope that politics can be partially rationalized on the scale of individuals, groups, societies, nations and/or the whole human species?)
 p. 254-256: one can argue that the only goal of forecasts is to be accurate, but in practice there are multiple goals - in politics the key question is: Who does what to whom? - people lie because self and tribe matter, and in the mind of a partisan (Dick Morris predicting a Romney landslide victory just before Romney lost is the example Tetlock used - maybe he lied about lying), lying to defend self or tribe is justified because partisans want to be the ones doing whatever to the whom; “If forecasting can be co-opted to advance their interests, it will be.” - but on the other hand, the medical community resisted efforts to make medicine scientific and over time persistence and effort paid off - entrenched interests simply have to be overcome (another faint glimmer of hope?)
 p. 257: Tetlock's focus: “Evidence-based policy is a movement modeled on evidence-based medicine, with the goal of subjecting government policies to rigorous analysis so that legislators will actually know - not merely think they know - whether policies do what they are supposed to do.”; “. . . . there is plenty of evidence that rigorous analysis has made a real difference in government policy.”; analogies exist in philanthropy (Gates Foundation) and sports - evidence is used to feed success and curtail failure
 p. 262-263: “What matters is the big question, but the big question can’t be scored.”, so ask a bunch of relevant small questions - it’s like pointillism painting: each dot means little but thousands of dots create a picture; clusters of little questions will be tested to see if that technique can shed light on big questions
 p. 264-265: elements of good judgment include foresight and moral judgment, which can’t be run through an algorithm; asking the right questions may not be the province of superforecasters - Hedgehogs often seem to come up with the right questions - the two mindsets needed for excellence may be different
 p. 266: the Holy Grail of my research: “. . . . using forecasting tournaments to depolarize unnecessarily polarized policy debates and make us collectively smarter.” (Tetlock sees a path forward, but doesn’t aggressively generalize it to all of politics, including the press-media → this is a clear step toward “rational” politics)
 p. 269: adversarial but constructive collaboration requires good faith; “Sadly, in noisy public arenas, strident voices dominate debates, and they have zero interest in adversarial collaboration. . . . But there are less voluble and more reasonable voices. . . . . let them design clear tests of their beliefs. . . . . When the results run against their beliefs, some will try to rationalize away the facts, but they will pay a reputational price. . . . . All we have to do is get serious about keeping score.”

 GJP-related websites: www.goodjudgement.com https://www.gjopen.com/ 
http://edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii

Book Review: Superforecasting: The Art and Science of Prediction

Original Biopolitics and Bionews post: September 2, 2016

 In Superforecasting: The Art & Science of Prediction, social scientist Philip E. Tetlock and journalist Dan Gardner (Crown Publishers, September 2015) observe that at its heart, political policy is usually about predicting the future. The exercise boils down to finding and implementing policies that will do best for the public interest (general welfare or common good), regardless of how one defines the concept. What most accurately describes the essence of intelligent, objective, public service-oriented politics? Is it primarily an honest competition among the dominant ideologies of our times, defense of one’s social identity, a self-interested quest for money, influence or power, or some combination thereof? Does it boil down to understanding the biological functioning of the human mind and how it sees and thinks about the world? Is it something else entirely? 

 Subject to caveats, Superforecasting comes down on the side of getting brain biology or cognition right. Everything else is subordinate. Superforecasting describes Tetlock's research into asking what factors, if any, can be identified that contribute to a person’s ability to predict the future. Tetlock asks how well intellectually engaged but otherwise non-professional people can do. The performance of volunteers is compared against experts, including professional national security analysts with access to classified information.

  The conscious-unconscious balance: What Tetlock and his team found was that the interplay between our dominant, unconscious, fact- and common sense-distorting intuitive cognition (“System 1” or the “elephant” as described before) and our far less powerful but conscious, rational thinking (“System 2” or the “rider”) was a key factor in how well people predicted future events. The imbalance of power or bandwidth between conscious and unconscious thinking is estimated to be at least 100,000-fold in favor of the unconscious. The trick to optimal performance appears to be found in people who are able to strike a balance between the two modes of thinking, with the conscious mind constantly self-analyzing to reduce the fact distortions and logic biases or flaws that the unconscious mind constantly generates. Tetlock observes that a “defining feature of intuitive judgment is its insensitivity to the quality of the evidence on which the judgment is based. It has to be that way. System 1 can only do its job of delivering strong conclusions at lightning speed if it never pauses to wonder whether the evidence at hand is flawed or inadequate, or if there is better evidence elsewhere. . . . . we are creative confabulators hardwired to invent stories that impose coherence on the world.” Coherence can arise even when there's insufficient information. In essence, the human mind evolved an ‘allergy’ to ambiguity, contradictions and concepts that are threatening to personal morals, identity and/or self-interest. To deal with that, we rapidly and unconsciously rationalize those uncomfortable things by denying or distorting them.

It turns out that, with some training and the right mindset, a few people, “superforecasters”, routinely trounce professional experts at predicting future events. In Tetlock’s 4-year “Good Judgment Project”, funded by the Intelligence Advanced Research Projects Agency (IARPA), about 2,800 volunteers made over a million predictions on topics that ranged from potential conflicts between countries to currency and commodity (e.g., oil) price fluctuations. The predictions had to be precise enough to be analyzed and scored. About 1% of the 2,800 volunteers turned out to be superforecasters who beat national security analysts by about 30% at the end of the first year. One even beat commodities futures markets by 40%.

The superforecaster volunteers did whatever they could to get information, but they nonetheless beat professional analysts who were backed by computers and programmers, spies, spy satellites, drones, informants, databases, newspapers, books and whatever else that professionals with security clearances have access to. As Tetlock put it, “. . . . these superforecasters are amateurs forecasting global events in their spare time with whatever information they can dig up. Yet they somehow managed to set the performance bar high enough that even the professionals have struggled to get over it, let alone clear it with enough room to justify their offices, salaries and pensions.”

  What makes superforecasters so good?: The top 1-2% of volunteers were analyzed for personal traits. In general, superforecasters tended to be people who were open-minded about collecting information, their world view and opposing opinions. They were also able to step outside of themselves and look at problems from an “outside view.” To do that they searched out and integrated other opinions into their own thinking. Those traits go counter to the standard human tendency to seek out information that confirms what we already know or want to believe. That bias is called confirmation bias.

The open-minded trait also tended to reduce unconscious System 1 distortion of problems and potential outcomes by other unconscious cognitive biases such as the powerful but subtle “what you see is all there is” bias, hindsight bias and scope insensitivity, i.e., not giving proper weight to the scope of a problem. Superforecasters tended to break complex questions down into component parts so that relevant factors could be considered separately. That tends to reduce unconscious bias-induced fact and logic distortions. In general, superforecaster susceptibility to unconscious biases was lower than for other volunteers in the GJP. That appeared to be due mostly to their capacity to use conscious (System 2) thinking to recognize and then reduce unconscious (System 1) biases.

Analysis revealed that superforecasters tended to share 15 traits including (i) cautiousness based on an innate knowledge that little or nothing was certain, (ii) being reflective, i.e., introspective and self-critical, (iii) being comfortable with numbers and probabilities, (iv) being pragmatic and not wedded to any particular agenda or ideology, and, most importantly, (v) intelligence, and (vi) being comfortable with (a) updating personal beliefs or opinions and (b) belief in self-improvement (having a growth mindset). Tetlock refers to that mindset as being in “perpetual beta” mode.

 Unlike political ideologues, superforecasters tended to be pragmatic, i.e., they generally did not try to “squeeze complex problems into the preferred cause-effect templates [or treat] what did not fit as irrelevant distractions.” Compare that with politicians who promise to govern as proud progressives or patriotic conservatives and the voters who respond to those appeals. What the best forecasters knew about a topic and their political ideology was less important than how they thought about problems, gathered information and then updated thinking and changed their minds based on new information.

The best engaged in an endless process of information and perspective gathering, weighing information relevance and questioning and updating their own judgments when it made sense, i.e., they were in “perpetual beta” mode. Doing that required effort and discipline. Political ideological rigidity such as conservatism or liberalism was generally detrimental. Regarding common superforecaster traits, Tetlock observed that “a brilliant puzzle solver may have the raw material for forecasting, but if he also doesn’t have an appetite for questioning basic, emotionally-charged beliefs he will often be at a disadvantage relative to a less intelligent person who has a greater capacity for self-critical thinking.”

Superforecasters have a real capacity for self-critical thinking. Political, economic and religious ideology is mostly beside the point. Instead, they are actively open-minded, e.g., “beliefs are hypotheses to be tested, not treasures to be protected.” Tetlock asserts that politicians and partisan pundits opining on all sorts of things routinely fall prey to (i) not checking their assumptions against reality, (ii) making predictions that can’t be measured for success or failure, and/or (iii) knowingly lying to advance their agendas. Politicians, partisan pundits and experts are usually wrong because of their blinding ideological rigidity and/or self- or group-interest and the intellectual dishonesty that accompanies those mind sets. Given the nature of political rhetoric that dominates the two-party system and the biology of human cognition, it is reasonable to argue that most of what is said or written about politics is more spin (meaningless rhetoric or lies-deceit) than not.

  Questions: Is Tetlock’s finding of superforecasters real, and if so, does that point to a meaningful (teachable) human potential to at least partially rationalize politics for individuals, groups, societies or nations? Why or why not? After the WMD disaster in Iraq, US intelligence agencies funded Tetlock’s research and then adopted his techniques to assess intelligence analysts: Will “better” intelligence assessments reduce the frequency or magnitude of mistakes that rigid liberal or conservative politicians tend to make? Would it be any different if the politicians calling the shots are centrists, moderates, socialists, libertarians, anarchists or cognitive science-based pragmatists focused on problem solving with little or no regard to political ideology or economic theory?

The moral palette of political ideology

Original Biopolitics and Bionews post: August 30, 2016

 In his book, The Righteous Mind: Why Good People Are Divided By Politics And Religion, Jonathan Haidt (pronounced 'height') described Moral Foundations Theory. The theory is an anthropology-based hypothesis that Haidt and another psychologist, Craig Joseph, developed to explain differences in moral reasoning and beliefs between liberals, conservatives and others. The theory posits that there's more to morality than just harm and fairness: six moral concepts or foundations shape our beliefs, reasoning and behaviors in politics and other areas of life. The foundations and their associated intuitions-emotions are (1) harm-care (compassion or lack thereof), (2) fairness-unfairness (anger, gratitude, guilt), (3) loyalty-betrayal (group pride, rage at traitors), (4) authority-subversion (respect, fear), (5) sanctity-degradation (disgust) and (6) liberty-oppression (resentment or hatred of domination).

 The six foundations presumably evolved as response triggers to threats or adaptive challenges our ancestors faced. Modern triggers can differ from what our ancestors faced, e.g., loyalty to a nation or sports team can trigger the loyalty-betrayal moral in some or most people in different ways. Haidt analogizes moral foundations to taste receptors: “. . . . morality is like cuisine: it’s a cultural construction, influenced by accidents of environment and history, but it’s not so flexible that anything goes. . . . . Cuisines vary, but they all must please tongues equipped with the same five taste receptors. Moral matrices vary, but they all must please righteous minds equipped with the same six social receptors.”

 Large surveys led to the observation that, moving across the spectrum from politically very liberal to moderate to very conservative, the importance of the care and fairness morals decreases in most people, while the importance of the loyalty, authority and sanctity morals increases. The harm-care and fairness-unfairness morals significantly shape liberal thinking and belief, while the loyalty-betrayal, authority-subversion and sanctity-degradation morals significantly shape conservative minds. Haidt observes that the moral palettes of liberals and conservatives are such that you can usually tell one from the other by asking what qualities they would want in their dog, or other questions intended to elicit a response from a specific moral foundation.* This kind of morals-based thinking and preference appears to significantly shape thinking and belief related to issues in politics.

 * For example, how much would you need to be paid to stick a tiny, harmless sterile hypodermic needle into (i) your own arm, and (ii) the arm of a child you don't know. For people to whom it matters, that question pair triggers the harm-care moral response and the answers generally correlate with the influence of the harm-care moral on a person’s politics and beliefs. 

  Libertarians & the cerebral style: In one large survey study, Haidt examined the moral foundations that libertarians displayed. Haidt's group reported this: “Libertarians are an increasingly prominent ideological group in U.S. politics . . . . Compared to self-identified liberals and conservatives, libertarians showed 1) stronger endorsement of individual liberty as their foremost guiding principle, and weaker endorsement of all other moral principles; 2) a relatively cerebral as opposed to emotional cognitive style; and 3) lower interdependence and social relatedness. As predicted by intuitionist theories concerning the origins of moral reasoning, libertarian values showed convergent relationships with libertarian emotional dispositions and social preferences.” Iyer R, Koleva S, Graham J, Ditto P, Haidt J (2012) Understanding Libertarian Morality: The Psychological Dispositions of Self-Identified Libertarians. PLoS ONE 7(8):e42366. doi:10.1371/journal.pone.0042366

 Morals-based politics is another avenue to begin to understand innate, intractable differences between adherents of differing ideologies. What is interesting and important about this study are the observations that (i) libertarians are an increasingly prominent group, and (ii) they display “a relatively cerebral as opposed to emotional cognitive style.” Both are evidence that groups of Americans can and do adopt a new political ideology and can apply conscious reason, i.e., “Haidt’s rider” (conscious or cerebral reasoning), to their politics to a measurably higher degree relative to other groups that operate under a more “emotional cognitive style” or cognition more dominated by unconscious intuition (Haidt's elephant).

  Questions: How convincing is the argument that libertarians use a relatively cerebral (conscious reason) style compared to liberals and/or conservatives who are asserted to employ a more “emotional cognitive style” (unconscious intuition) in thinking about politics? Would a more cerebral style necessarily be better? Is the moral foundations theory persuasive or is it still only an academic hypothesis with little real world relevance?

Book review: The Righteous Mind

March 16, 2019

The Righteous Mind: Why Good People are Divided by Politics and Religion, Jonathan Haidt, Pantheon Books, 2012

Dr. Haidt is a social psychologist and Professor of Ethical Leadership at NYU’s Stern School of Business. He wrote The Righteous Mind to “at least do what we can to understand why we are so easily divided into hostile groups, each one certain of its righteousness.” He explains: “My goal in this book is to drain some of the heat, anger, and divisiveness out of these topics and replace them with awe, wonder, and curiosity.”

In view of America's increasing political polarization, Haidt clearly has his work cut out for him. To find answers, he focuses on the inherently moralistic, self-righteous nature of human cognition and of thinking about politics and religion. Through the ages, there have been three basic conceptions of the roles of reason (~ conscious logic) and passion (unconscious intuition, emotion) in human thinking and behavior. Plato (~428-348 BC) argued that reason dominated in intellectual elites called "philosophers," but that average people were mostly controlled by their passions. David Hume (1711-1776) argued that reason or conscious thinking was nothing more than a slave to human passions. Thomas Jefferson (1743-1826) argued that reason and the passions were about equal in their influence.

According to Haidt, the debate is over: “Hume was right. The mind is divided into parts, like a rider (controlled processes) on an elephant (automatic processes). The rider evolved to serve the elephant. . . . . intuitions come first, strategic reasoning second. Therefore, if you want to change someone’s mind about a moral or political issue, talk to the elephant first.”

Our intuitive (unconscious) morals and judgments tend to be more subjective, personal and emotional than objective and rational (conscious). Haidt points out that we are designed by evolution to be "narrowly moralistic and intolerant." That leads to self-righteousness and the hostility toward and distrust of other points of view that the trait generates. Regarding the divisiveness of politics, Haidt asserts that "our righteous minds guarantee that our cooperative groups will always be cursed by moralistic strife."

Our unconscious “moral intuitions (i.e., judgments) arise automatically and almost instantaneously, long before moral reasoning has a chance to get started, and those first intuitions tend to drive our later reasoning.” Initial intuitions driving later reasoning exemplifies some of our many unconscious cognitive biases, e.g., ideologically-based motivated reasoning, which distorts both facts we become aware of and the common sense we apply to the reality we think we see.

The book’s central metaphor “is that the mind is divided, like a rider on an elephant, and the rider’s job is to serve the elephant. The rider is our conscious reasoning—the stream of words and images of which we are fully aware. The elephant is the other 99 percent of mental processes—the ones that occur outside of awareness but that actually govern most of our behavior.”

Haidt observes that there are two different sets of morals and rhetorical styles that tend to characterize liberals and conservatives: “Republicans understand moral psychology. Democrats don’t. Republicans have long understood that the elephant is in charge of political behavior, not the rider, and they know how elephants work. Their slogans, political commercials and speeches go straight for the gut . . . . Republicans don’t just aim to cause fear, as some Democrats charge. They trigger the full range of intuitions described by Moral Foundations Theory.”

The problem: On reading The Righteous Mind, the depth and breadth of the problem for politics become uncomfortably clear to anyone hoping to ever find a way to rationalize politics. Haidt sums it up nicely: "Western philosophy has been worshiping reason and distrusting the passions for thousands of years. . . . I'll refer to this worshipful attitude throughout this book as the rationalist delusion. I call it a delusion because when a group of people make something sacred, the members of the cult lose the ability to think clearly about it. Morality binds and blinds. The true believers produce pious fantasies that don't match reality, and at some point somebody comes along to knock the idol off its pedestal. . . . . We do moral reasoning not to reconstruct why we ourselves came to a judgment; we reason to find the best possible reasons why somebody else ought to join us in our judgment. . . . . The rider is skilled at fabricating post hoc explanations for whatever the elephant has just done, and it is good at finding reasons to justify whatever the elephant wants to do next. . . . . We make our first judgments rapidly, and we are dreadful at seeking out evidence that might disconfirm those initial judgments."

In other words, conscious reason (the rider) serves unconscious intuition and that’s the powerful but intolerant and moralistic beast that Haidt calls the elephant.

Two additional observations merit mention. First, Haidt points out that “traits can be innate without being hardwired or universal. The brain is like a book, the first draft of which is written by the genes during fetal development. No chapters are complete at birth . . . . But not a single chapter . . . . consists of blank pages on which a society can inscribe any conceivable set of words. . . . Nature provides a first draft, which experience then revises. . . . . ‘Built-in’ does not mean unmalleable; it means organized in advance of experience.”

Second, Haidt asserts that Hume "went too far" by arguing that reason is merely a "slave" of the passions. He argues that although intuition dominates, it is "neither dumb nor despotic" and it "can be shaped by reasoning." He likens the situation to that of a lawyer (the rider) and a client (the elephant). Sometimes the lawyer can talk the client out of doing something dumb, sometimes not. The elephant may be a big, powerful beast, but it's not stupid and it can learn. Haidt's assertion that we "will always be cursed by moralistic strife" is his personal moral judgment that our intuitive, righteous nature is a curse, not a blessing or a source of wisdom. In this regard, his instinct is closer to Plato's moral judgment about how things ought to be than to Hume's or Jefferson's. Or, at least that's how I read it.

Questions: Does Haidt’s portrayal of the interplay between unconscious intuition and morals and conscious reason or common sense seem reasonable? Are human societies forever doomed (or blessed with), for better or worse, to rely on the moralistic, unconscious processes that have characterized politics since humans invented it thousands of years ago? Does Haidt’s vision of human cognition reasonably accord with the vision that Norretranders portrayed in his book, The User Illusion?

Is it possible that Jefferson was closer to the mark than Hume, and if not, could that become possible in a society that largely operates under a set of morals or political principles explicitly designed to tip the balance of power from the elephant to the rider? Can anyone ever rise to the level of one of Plato's enlightened philosophers, and if so, is that a good thing or not?

Original Biopolitics and Bionews post: August 29, 2016; DP posts: 3/16/19, 4/9/20

When political rhetoric and debate become meaningless

The Washington Post reports that many military veterans favor Donald Trump, in large part because they see the Iraq war as a failure and a tragic waste of lives and money. Compared to a 10% overall lead for Clinton, two recent polls show leads of 11% and 14% for Trump among military voters.

Former Marine sergeant Evan McAllister feels that way. The WP reports that "the war he fought was a harebrained mission planned by Republicans, rubber-stamped by Democrats and, in the end, lost to al-Qaeda's brutal successor. The foreign policy establishment of both parties got his friends killed for no reason, he said, so come Election Day, he is voting for the man he believes answers to neither Democrats nor Republicans: Donald Trump. 'Most veterans . . . they see their country lost to the corrupt. And Trump comes along all of a sudden and calls out the corrupt on both sides of the aisle.'"

"I think there's a pretty sour taste in a lot of guys' mouths about Iraq and about what happened there," said Jim Webb Jr., a Marine veteran, Trump supporter, son of former U.S. senator Jim Webb (D-Va.) and one of McAllister's platoon mates. "You pour time and effort and blood into something, and you see it pissed away, and you think, 'How did I spend my twenties?'" Those are good reasons to support Donald Trump, right? After all, Hillary Clinton rubber-stamped the Iraq war while in the US Senate, and one can reasonably argue it was a failure. Or, are they such good reasons?

Mendacity, deceit & misinformation: According to the fact checkers, there's false information coming from the mouths of both presidential candidates. There's nothing unusual about that. It's all constitutionally protected free speech. Nonetheless, it doesn't hurt to keep the situation in mind. Here are PolitiFact's profiles of Clinton and Trump:
Hillary Clinton's profile

Donald Trump's profile

Of course, that's just how one source sees it. At least some, if not most, Trump supporters vehemently deny the data and accuse PolitiFact of routine anti-conservative and anti-Trump bias. In the minds of those Trump supporters, PolitiFact data is simply false and therefore meaningless, or even proof of Trump's honesty. Another fact-checking source, FactCheck.org, also finds a lot of false information coming from the two candidates, with Clinton maybe doing better than Trump in terms of honest rhetoric. Fact checking of Trump's repeated claims that he opposed the Iraq war before it started shows the claims are false, although Trump apparently began expressing reservations about it some months after it started. FactCheck reports a financial incentive, an impending junk bond sale, for Trump to oppose the Iraq war: the uncertainty that new wars tend to create makes new financing more complicated or difficult.

Do facts matter? Regardless of what the facts may be, does it really matter? Social and cognitive science research argues that (i) elections do not produce responsive governments, (ii) social or group identity, not facts, is the most important driver of voters' perceptions of reality, belief and behavior, and (iii) the unconscious human mind is a powerful distorter of fact and of conscious thinking or reason. Humans are expert at distorting, denying or rationalizing away inconvenient truths. We fully believe our own rationalizations. The human desire to believe what we want is powerful and unconscious. We believe we are rational and base our beliefs on solid facts when the evidence is usually to the contrary. So, when a candidate says something that provokes intense criticism and then dismisses it as "sarcasm" or a joke, that candidate's followers accept it, while the opponent's supporters do not.

When fact checkers assert a candidate has lied, the candidate's supporters tend to reject that, or accept it but downplay or distort its importance. Of course, that assumes there is an objective basis on which to assess the importance of a lie when evaluating a candidate's suitability for office. Under the circumstances, one can reasonably argue that most political rhetoric is mostly meaningless. If that's not a believable proposition, consider how very little credibility (i) most Trump supporters accord almost anything that Clinton says, and (ii) most Clinton supporters accord almost anything that Trump says. If fact checkers do provide some objective data, it appears to have little or no influence on at least strong supporters of either candidate.

Question: Is most political campaign rhetoric meaningless?

What the unconscious mind thinks of interracial marriage

The human mind operates simultaneously on two fundamentally different tracks, unconscious thinking and conscious thinking. Recent estimates attribute about 95-99.9% of human mental bandwidth, decision-making influence or "firepower" to unconscious thinking. The rest is our conscious thinking. Conscious and unconscious thinking or decisions can be in conflict. The split human reaction to interracial marriage is a case in point.

A recent Washington Post article describes brain responses to photos of same-race and different-race married heterosexual couples. The article is based on data published in a recent paper in the Journal of Experimental Social Psychology, "Yuck, you disgust me! Affective bias against interracial couples." Brain scans of people who claim to have no disapproval of or bias against interracial couples show disgust or disapproval in response to photos of interracial couples (black and white), but not photos of black couples or white couples. According to the WP article, "Researchers found that the insula, a part of the brain that registers disgust, was highly active when participants viewed the photos of the interracial couples, but was not highly engaged when viewers saw the images of same-race couples, whether they were white or black." This shows the possibility of disconnects between what the unconscious mind sees, thinks and decides, i.e., disgust toward interracial couples, and what the conscious mind sees, thinks and decides, i.e., acceptance of interracial couples.

What may be unusual about this difference of opinion is that, at least for young people (college student volunteers in this case), the conscious mind dictates personal belief and behavior toward interracial couples despite a contrary innate unconscious belief or judgment. For politics and social matters like this, that triumph of the weak human conscious mind over our powerful unconscious mind is the rare exception, not the rule. For better or worse, that's just the nature of what evolution conferred on the human species in terms of how we see and think about what we think we see in the world.

Questions: Does the data show disgust, or is data obtained from the human brain simply not believable? Is it possible that our unconscious mind can be so powerful compared to our conscious thoughts and reason?

Book Review: Democracy For Realists

In their book, Democracy for Realists: Why Elections Do Not Produce Responsive Government, social scientists Christopher Achen and Larry Bartels (Princeton University Press, 2016) describe the major disconnect between what people believe democracy should be, what it really is and why it exists. The difference flows from human social and cognitive biology. That's no surprise. Human biology dictates that people's beliefs, perceptions and thinking about politics are usually more personal or subjective than objective and fact-based. 

In democracies, the typical voter believes that people have preferences for what government should do and that they pick leaders, or vote their preferences in ballot initiatives, accordingly. Majority preference then becomes policy, which in turn legitimizes government because the people consented through their votes. In that vision, government is ethical and has the people's interests at heart. Achen and Bartels call that vision the folk theory of democracy, and it isn't how democracy works. They point out that the false definition leads to cynicism and unhappiness: "One consequence of our reliance on old definitions is that the modern American does not look at democracy before he defines it; he defines it first and then is confused by what he sees. We become cynical about democracy because the public does not act the way the simplistic definition of democracy says it should act, or we try to whip the public into doing things it does not want to do, is unable to do, and has too much sense to do. The crisis here is not a crisis in democracy but a crisis in theory."

That reflects the reality that, because of their social and cognitive biology, people don't or can't pay enough attention to politics for the folk theory to work the way people believe it should. Humans are biologically too limited to truly understand what's going on even if they tried. The authors put it like this: ". . . . the typical citizen drops down to a lower level of mental performance as soon as he enters the political field. He argues and analyzes in a way which he would readily recognize as infantile within the sphere of his real interests. . . . cherished ideas and judgments we bring to politics are stereotypes and simplifications with little room for adjustment as the facts change. . . . . the real environment is altogether too big, too complex, and too fleeting for direct acquaintance. We are not equipped to deal with so much subtlety, so much variety, so many permutations and combinations. Although we have to act in that environment, we have to reconstruct it on a simpler model before we can manage it."

Where have I seen this before? That describes reality based on what sentient humans can reasonably do. It's not a criticism of the human condition. Democracy and all or nearly all issues in politics are far too complex for voters to deal with rationally based on facts and unbiased reason. Instead, we have to simplify reality and apply heavily biased reason (common sense) to what we think we see. For the most part, what we believe we see is more illusion than objective reality. The authors acknowledge the problem: "The result may not be very comfortable or comforting. Nonetheless, we believe that a democratic theory worthy of serious social influence must engage with the findings of modern social science." Although Democracy For Realists dissects popular democratic theory and analyzes science and historical data from the last hundred years or so, the exercise is really about analyzing the role of human social and cognitive biology in democracy. Our false beliefs about democracy are shaped by human biology, not political theory. The authors' research finds that the most important driver of voter belief and behavior is personal social or group identity, not ideology or theory.

For most voters, race, tribe and clan are more important than anything else. That manifests as irrational voter thinking and behavior. For example, the "will of the people" that's central to the folk theory is mostly a myth. People are divided on most everything and they usually don't know what they really want. Average voters usually do not have enough knowledge to make such determinations rationally. For example, voter opinions can be very sensitive to variation in how questions are worded, which reflects a powerful unconscious bias called the framing effect. In one 1980s survey, about 64% said there was too little federal spending on "assistance to the poor," but only about 23% said there was too little spending on "welfare." The 1980s was the decade when vilification of "welfare" was common from the political right; the word welfare had been co-opted and reframed as a bad thing. Similarly, before the 1991 Gulf War, about 63% said they were willing to "use military force," but less than 50% were willing to "engage in combat," while less than 30% were willing to "go to war." The subjective nature of political concepts is obvious, i.e., assistance vs. welfare and military force vs. combat vs. war. What was the will of the people? One can argue that serving the will of the people under the folk theory of democracy is more chasing a phantom than doing the obvious.

Other aspects of voter behavior also make serving the people's will difficult at best. For example, voters are usually irrational about rewarding and punishing politicians for their performance in office. Incumbents are routinely punished at the polls for floods, droughts, offshore shark attacks on swimmers and a local university football team's recent loss and, more importantly, for things going badly in the last few months of the politician's current term in office. Where's the logic in any of that? Why should an incumbent worry about the people's will when the people don't reward or punish on that basis? Incentives matter. Achen and Bartels show that there are sound biological reasons why elections don't produce responsive governments.

Questions: Is the vision of democracy that Achen and Bartels portray reasonably accurate, nonsense or something else? If their vision is reasonably accurate, what, if anything, can or should average voters do? Or, is what we have about the best that can be expected given the subjective (personal) nature of human social and cognitive biology? Is trying to understand and serve the will of the people the highest calling of democratic governments, or would something else, such as serving the "public interest,"** constitute a better focus?

** Defined, for example, here: http://dispol.blogspot.com/2015/12/serving-public-interest.html

The Human Species' Greatest Threat

The human species faces a number of threats that could damage civilization or, in the worst case, lead to extinction. A major nuclear war would at least significantly damage civilization. At least hundreds of millions of people would die. Polluting human activity could initiate a chain reaction that leads to a toxic environment and possibly human extinction. Various climate change episodes that caused mass land animal extinctions are known, e.g., anoxic events and the Permian-Triassic extinction event or Great Dying of about 252 million years ago. Given incomplete human knowledge, it is possible that human activity could trigger such an event without human awareness until it is too late to save the species. If humans do wind up damaging or destroying modern civilization or even annihilating the human species, the ultimate cause would necessarily come from some sort of human behavior that is at least theoretically avoidable. The question is, what is mankind's greatest survival threat? This discussion excludes threats that humans simply cannot affect or prevent, e.g., a mass extinction caused by eruption of a supervolcano.

Volcanic eruption - a micro-pipsqueak compared to a supervolcano blast

The human cognition threat: From a cognitive and social science point of view, the greatest threat lies in the nature of human cognition and the irrational politics it engenders. That directly reflects human biology which, in turn, reflects the intellectual firepower that evolution endowed the human species with. Whatever mental capacity humans have as individuals and when acting in groups or societies, it was undeniably sufficient to get humans to where we are today. The unanswered question is whether what evolution produced is sufficient to survive our technology and our ability to kill ourselves off. Under the circumstances, humanity's greatest threat lies in the psychology of being human. The very nature of human sentience and the individual and group behavior that flow from it are the seeds of human self-annihilation. If, when and why the seeds might sprout are open questions. Nonetheless, the seeds are real and viable.

Within the last century, research from cognitive, social and other relevant branches of science proved that all humans are driven mostly by our unconscious minds, which are intuitive-emotional-moral. In terms of politics and religion, output from our unconscious minds is not mostly fact- and logic-based. Powerful unconscious biases heavily affect what little we wind up becoming consciously aware of. As a consequence, we are not primarily driven by objective fact or logic. Instead, (i) false perceptions of reality or facts, and (ii) conscious thinking (reason or common sense) that is heavily influenced by powerful unconscious biases drive thought, belief and behavior. Although we are sentient and conscious, unconscious (intuitive-emotional-moral) mental bandwidth or thinking is 100 million to 100 billion times more powerful than conscious thought. For better or worse, the human mental constitution dominated by unconscious intuitive knowledge and thought was sufficient for modern humans to survive and dominate. None of that is a criticism of humans or their intellectual makeup. Those are objective facts based on modern science.

That biology applies to politics and it always has. In other words, politics is mostly irrational and based on false information, conscious thinking (common sense) that is heavily biased by unconscious personal beliefs and morals, and evolutionary biases that all humans share. Misinformation is easy to acquire and very hard to reject, especially when rejecting it undermines personal ideology, belief or morals. Often or usually, there is insufficient information, or situations are too complex or opaque, for true objectivity. The unconscious human mind nonetheless has to act in the face of that. In their 2016 book Democracy For Realists: Why Elections Do Not Produce Responsive Government, social scientists Christopher Achen and Larry Bartels summarized the human condition in politics like this: ". . . . the typical citizen drops down to a lower level of mental performance as soon as he enters the political field. He argues and analyzes in a way which he would readily recognize as infantile within the sphere of his real interests. . . . cherished ideas and judgments we bring to politics are stereotypes and simplifications with little room for adjustment as the facts change. . . . . the real environment is altogether too big, too complex, and too fleeting for direct acquaintance. We are not equipped to deal with so much subtlety, so much variety, so many permutations and combinations. Although we have to act in that environment, we have to reconstruct it on a simpler model before we can manage it."

All modern societies operate under some form of government and political system. What nations, societies, groups and individuals do and don't do is governed by human biology, which is mostly governed by our heavily biased, unconscious perceptions of reality (facts) and thinking. That irrationality, disconnection from reality and associated group behavior, including a lack of empathy toward outsiders, is where the greatest threat to the human species resides.

Questions: Is humanity's greatest threat the imperfect cognitive and social biology that underpins politics? If it is, can our weak, usually deceived and misinformed conscious minds do anything to change the status quo? Or, as some cognitive and social scientists at least imply, are humans destined to never rise much above their innate cognitive and social biology, leaving the fate of the human species up to irrational biology?

Book Review: The User Illusion

In The User Illusion: Cutting Consciousness Down To Size (Penguin Books, 1991; English translation 1998), Danish science writer Tor Norretranders dissects the powerful illusion that what humans see and think is accurate or real. The User Illusion (TUI) relentlessly describes human consciousness and the biological basis for the false realities that we believe are real. TUI is about the constraints on knowledge. The 2nd law of thermodynamics and its curse of ever-increasing disorder (entropy), information theory and mathematics all make it clear that all sentient beings in the universe operate under severe information constraints. That includes the limits on the human mind. To believe otherwise is a mistake, or more accurately, an illusion.

TUI's chapter 6, The Bandwidth of Consciousness, gets right to the heart of matters. Going there is an enlightening but humbling experience. When awake, the information flow from human sensory nerves to the brain is about 11.2 million bits per second, with the eyes bringing in about 10 million bits per second, the skin about 1 million bits per second, and the ears and nose each bringing in about 100,000 bits per second. That's a lot, right? No, it isn't. The real world operates at unknowable trillions of gigabits/second, so what we see or perceive isn't much. It's puny, actually. Fortunately, humans needed only enough capacity to survive, not to know the future 10 or 100 years in advance or to see a color we can't see through human eyes with just three different color-sensing cell types (red, green, blue). For human survival, three colors was good enough. Evidence of evolutionary success is a planet population of about 7.4 billion humans that's rapidly heading toward 8 billion.

Given that context, that 11.2 million bits/second may sound feeble but things are much weirder than just that. The 11.2 million bits/second are flowing into our unconscious minds. We are not conscious of all of that. So, what is the bandwidth of consciousness? How much of the 11.2 million bits/second we sense do we become aware of? The answer is about 1-50 bits/second. That’s the estimated rate at which human consciousness processes the information it is aware of. Silently reading this discussion consumes about 45 bits/second, reading aloud consumes about 30 bits/second, multiplying and adding two numbers consumes about 12 bits/second, counting objects consumes about 3 bits/second and distinguishing between different degrees of taste sweetness consumes about 1 bit/second.
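
To put the numbers side by side, here is a minimal Python sketch (mine, not from the book) that totals the per-sense figures quoted above and compares them to the estimated 1-50 bits/second of conscious bandwidth:

# Rough arithmetic based on the approximate figures quoted from TUI chapter 6.
sensory_bits_per_sec = {
    "eyes": 10_000_000,
    "skin": 1_000_000,
    "ears": 100_000,
    "nose": 100_000,
}
total_sensory = sum(sensory_bits_per_sec.values())   # ~11.2 million bits/second
conscious_low, conscious_high = 1, 50                # estimated conscious bandwidth, bits/second

print(f"total sensory input: ~{total_sensory:,} bits/second")
print(f"share reaching consciousness: {conscious_high / total_sensory:.4%} at best")
print(f"unconscious-to-conscious ratio: roughly {total_sensory // conscious_high:,}:1 "
      f"to {total_sensory // conscious_low:,}:1")

Even at the generous 50 bits/second estimate, less than about 0.0005% of the sensory stream ever reaches awareness.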

What's going on here?: It's fair to ask what's really going on and why our brain operates this way. The answer to the second question is that (i) this is all that was needed to survive, and (ii) the laws of nature and the nature of the human brain severely limit data processing capacity. The human brain is large relative to body size, but it nonetheless processes information at a maximum rate of only about 11.2 million bits/second, most of which we never become consciously aware of. That's human bandwidth because that's what evolution resulted in. What's going on is our unconscious mind taking in information at about 11.2 million bits/second, discarding or withholding from consciousness what's not important or needed, and then presenting the remaining trickle, about 50 bits/second or less, to consciousness. That's how much conscious bandwidth humans needed to survive, e.g., to finagle sex, spot and run away from a hungry saber-tooth cat before being eaten, find or hunt food, or do whatever else was needed to survive. In modern times, our mental bandwidth is sufficient to do modern jobs, build civilization and advance human knowledge. Where things get very strange is in the presentation of the little trickle to consciousness. Discussing that step is a different discussion, but a glimpse of it as applied to politics is in the Democracy for Realists book review. This discussion focuses on the human brain's operating system and the inputs and outputs it deals with and creates.

If one accepts the veracity of the science and Norretranders' narrative, it is fair to say that the world humans think they see is more illusion than real. Other chapters of TUI and the science behind the observations reinforce this reality of human cognition and its limits. For example, chapter 9, The Half-Second Delay, describes how our unconscious minds make decisions about 0.5 second before we become aware of what it is we have unconsciously decided. Although there's room for some disagreement about it, we consciously believe we made the decision ourselves, even though it was made unconsciously about 0.5 second before we became aware of it. Current data suggest that decisions can be made unconsciously about 7 to 10 seconds before we're aware of them. We trick ourselves. In other words, we operate under an illusion that our conscious mind makes decisions, when that's the exception. The rule is that our unconscious minds are calling the shots most of the time. When it comes to perceiving reality, the low-bandwidth signal the brain uses to create a picture is a simulation that we routinely mistake for reality. As Norretranders sees it, consciousness is a fraud. That's the user illusion.

Monday, August 6, 2018

Cognitive Science: Reason as a Secular Moral


A 2016 peer-reviewed paper by psychologist Tomas Ståhl and colleagues at the University of Illinois at Chicago and the University of Exeter suggests that some people see reason and evidence as a secular moral issue. Those people tend to consider the rationality of another's beliefs as evidence of their morality or lack thereof.

According to the paper’s abstract: “In the present article we demonstrate stable individual differences in the extent to which a reliance on logic and evidence in the formation and evaluation of beliefs is perceived as a moral virtue, and a reliance on less rational processes is perceived as a vice. We refer to this individual difference variable as moralized rationality. . . . Results show that the Moralized Rationality Scale (MRS) is internally consistent, and captures something distinct from the personal importance people attach to being rational (Studies 1–3). Furthermore, the MRS has high test-retest reliability (Study 4), is conceptually distinct from frequently used measures of individual differences in moral values, and it is negatively related to common beliefs that are not supported by scientific evidence (Study 5).” Ståhl T, Zaal MP, Skitka LJ (2016) Moralized Rationality: Relying on Logic and Evidence in the Formation and Evaluation of Belief Can Be Seen as a Moral Issue. PLoS ONE 11(11): e0166332.doi:10.1371/journal.pone.0166332.
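
For readers unfamiliar with the psychometric terms in the abstract, "internally consistent" is commonly quantified with a statistic such as Cronbach's alpha. The sketch below is purely illustrative (hypothetical toy data, not the paper's data or code) of how that statistic is computed for a multi-item scale like the MRS:

# Hypothetical illustration: Cronbach's alpha for a multi-item Likert scale.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: respondents x items matrix of ratings."""
    k = item_scores.shape[1]                          # number of items
    item_vars = item_scores.var(axis=0, ddof=1)       # per-item sample variance
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up ratings from 5 respondents on 4 items (1-7 scale), for illustration only
ratings = np.array([
    [6, 7, 6, 5],
    [2, 3, 2, 2],
    [5, 5, 6, 6],
    [3, 2, 3, 3],
    [7, 6, 7, 6],
])
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")  # values near 1 indicate high internal consistency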

 According to Ståhl’s paper, “People who moralize rationality should not only respond more strongly to irrational (vs. rational) acts, but also towards the actors themselves. . . . . a central finding in the moral psychology literature is that differences in moral values and attitudes lead to intolerance. For example, the more morally convicted people are on a particular issue (i.e., the more their stance is grounded in their fundamental beliefs about what is right or wrong), the more they prefer to distance themselves socially from those who are attitudinally dissimilar.”

ScienceDaily commented on the paper: moral rationalists see less rational individuals as “less moral; prefer to distance themselves from them; and under some circumstances, even prefer them to be punished for their irrational behavior . . . . By contrast, individuals who moralized rationality judged others who were perceived as rational as more moral and worthy of praise. . . . While morality is commonly linked to religiosity and a belief in God, the current research identifies a secular moral value and how it may affect individuals' interpersonal relations and societal engagement.” ScienceDaily also noted that “in the wake of a presidential election that often kept fact-checkers busy, Ståhl (the paper’s lead researcher) says the findings would suggest a possible avenue to more productive political discourse that would encourage a culture in which it is viewed as a virtue to evaluate beliefs based on logical reasoning and the available evidence. . . . . ‘In such a climate, politicians would get credit for engaging in a rational intellectually honest argument . . . . They would also think twice before making unfounded claims, because it would be perceived as immoral.’”

Since most people believe they are mostly or always quite rational, it seems reasonable to argue that rationality is a moral issue. The finding that some people place personal value on evidence-based rational thinking about political issues suggests it could be a basis for a political principle or moral value in a political ideology.