DP Etiquette

First rule: Don't be a jackass.

Other rules: Do not attack or insult people you disagree with. Engage with facts, logic and beliefs. Out of respect for others, please provide sources for the facts and truths you rely on if you are asked for them. If emotion is getting out of hand, get it back in hand. To limit dehumanizing people, don't call people or whole groups of people disrespectful names, e.g., stupid, dumb or liar. Insulting people is counterproductive to rational discussion; insults make people angry and defensive. All points of view are welcome: right, center, left and elsewhere. Just disagree, but don't be belligerent or reject inconvenient facts, truths or defensible reasoning.

Friday, January 15, 2016

Thinking, Fast and Slow: Book Review



Book review: Thinking, Fast and Slow
Daniel Kahneman
Farrar, Straus and Giroux, New York
Original publication: 2011

Dr. Kahneman, a psychologist, won a Nobel Prize in economics for his work on prospect theory, in which he began to build a more accurate description of the biological basis of decision-making. That work is directly relevant to politics: the biology behind seeing and thinking distorts reality, and that distortion shapes political decision-making and policy choices.

Kahneman’s book is based on several decades of research by himself and other social scientists. It focuses on the contrast between two ways of perceiving reality and thinking about what we think we see. Those modes of cognition can be simultaneous and overlapping. Kahneman’s research led to his recognition of a mental "System 1" and "System 2". The two systems do not necessarily correspond to distinct parts of the brain; they are labels for what is going on in our heads. System 1 is an instinctive, intuitive-emotional and fast way of seeing the world and thinking about it. It operates mostly unconsciously and without conscious effort. Although we don’t know it, System 1 usually dominates our mental activity, perceptions of reality, judgments and choices.

Not nearly as rational as we think
By contrast, Kahneman’s System 2 is slower, but more logical and calculating. System 2 requires biologically measurable work, tires easily, and is lazy, preferring to do the least work needed to get to a solution, even if it’s wrong. System 2 is the conscious part of human cognition and what we are aware of when we look at the world or political issues and consciously think about them. Applying this logical aspect of human cognition requires motivation and conscious effort. Because this mode of thinking is what we are aware of, people tend to believe that their conscious, “rational” thoughts constitute the main or only way they think. For most people, this aspect of our biology fosters a hard-to-reject but false belief that we are quite rational and well-grounded in reality.

Thinking in System 1 and System 2 is shaped by powerful innate but unconscious biases that distort both facts (reality) and logic (common sense). In that regard, our innate biases can be considered “de-rationalizing” because they prevent us from seeing unbiased reality and applying unbiased common sense to what we think we see. Our innate biases powerfully shape policy choices to fit personal ideology and/or morals. Kahneman’s book describes the research that reveals the biases that lead people to place too much confidence in human judgment, including their own.

Biases with real bite
The list of unconscious, de-rationalizing cognitive biases is long and surprising. They include:

  • Kahneman’s powerful “what you see is all there is” bias (the “illusion of validity”), which leads to perceptions and choices (i) based on what we directly see and (ii) not on relevant information we are unaware of, and which (iii) tends to kill our motivation to look for information we are not aware of, especially information that could contradict what we believe or want to be true;
  • Framing effects, in which perceptions and choices differ depending simply on how an issue is presented, altering judgments even though the underlying facts are identical regardless of how the issue or problem is framed;
  • An unconscious bait-and-switch bias that unknowingly substitutes an easy, intuitively answerable question for a hard one that requires conscious effort and System 2 logic, which reflects System 2 laziness;
  • Loss aversion, a tendency to irrationally prefer avoiding losses over acquiring equivalent gains by unreasonably overweighting potential losses and underweighting potential gains (see the sketch after this list);
  • An energy-based judgment bias from being hungry or having low blood sugar, which affects judgments, e.g., the parole decisions that theoretically impartial judges hand down; and
  • An illusion of understanding situations or issues even when a person doesn’t have enough information to understand them. Humans fit past events or facts into a “logical” story and then believe we understand and can’t imagine things differently. The human mind is highly adept at (i) unconsciously and rapidly making “sense” out of things based on insufficient information and (ii) drawing often flawed judgments based thereon.
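To make loss aversion concrete, here is a minimal numeric sketch in Python using a prospect-theory-style value function. The parameter values are commonly cited estimates from Kahneman and Tversky's work, and the dollar amounts are hypothetical, chosen only for illustration.

```python
# A minimal, illustrative sketch of loss aversion with a prospect-theory-style
# value function (parameters are commonly cited estimates, not from this review).

def subjective_value(x, alpha=0.88, lam=2.25):
    """Gains are discounted; losses of the same size hurt roughly twice as much."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

print(subjective_value(100))    # felt value of gaining $100  (~57.5)
print(subjective_value(-100))   # felt value of losing $100   (~-129.5)

# A coin flip that pays +$100 or -$100 has zero expected dollar value,
# but a negative expected *felt* value, so most people refuse the bet.
print(0.5 * subjective_value(100) + 0.5 * subjective_value(-100))  # < 0
```

The only point of the sketch is that a loss of a given size is felt roughly twice as strongly as an equal gain, which is why most people refuse an even-odds bet.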


Being subjective-intuitive and irrational isn’t hopeless . . . .
For people looking for more objectivity and rationality in politics, human biology sounds like a very big impediment, and it is. Fortunately, that isn’t the whole story. Political ideologues whose personal ideology or morals distort facts and logic can become aware of their own reality-distorting biology, and that self-awareness helps reduce the influence of irrational biases on perceptions of reality and common sense. Progress toward objectivity requires the moral courage to accept our biology for what it actually is, not what we think it is.

. . . . but change isn’t going to be easy
One unusually self-aware political ideologue explained the difficulty this way: “My libertarian beliefs have not always served me well. Like most people who hold strong ideological convictions, I find that, too often, my beliefs trump the scientific facts. This is called motivated reasoning, in which our brain reasons our way to supporting what we want to be true. Knowing about the existence of motivated reasoning, however, can help us overcome it when it is at odds with evidence.”

Although Kahneman is silent on the issue, susceptibility to cognitive distortion may vary between different political or ideological groups. That supposition generally accords with research showing that more intense personal ideological belief impairs judgment. The American public is in a period of increasing ideological rigidity and political polarization, which is an impediment to acceptance of political objectivity.

Another impediment is the two-party system itself. Both parties, most of their politicians, most partisan pundits, most of the press-media most of the time, and players with campaign contribution money foster an image of partisan political rationality and opposition irrationality. Fostering strongly-held partisan ideological beliefs feeds our unconscious biases. Playing on our illusions of rationality serves to defend and maintain the status quo, e.g., it hides the usually tenuous to nonexistent connection between political rhetoric on the one hand and reality and logic on the other. That doesn’t serve the public interest.

If they want objectivity, which is an open question, the American people have a lot of introspecting and learning to do. Objectivists have their work cut out for them and a long, long way to go.

Wednesday, January 13, 2016

Rationalizing political policy-making

Politics is at least as irrational as it is rational — probably more irrational than not. That fact is supported by solid evidence. Simply listening to politicians on both sides makes it clear that the two endlessly warring sides see very different realities and facts in almost every issue they deal with. The two sides apply very different kinds of logic or common sense to what they think they see and they routinely wind up supporting policies that are mutually exclusive.

One side or both can be more right than wrong about any given issue. However, it’s very hard to imagine both sides being mostly right about disputed issues, but easy to see that they can both be more wrong than right, assuming there is an objective (not personal or subjective) measure of right and wrong (there isn’t).

A lot like religion
That’s just simple logic. Given the lack of definitions for even basic concepts, e.g., the public interest or what is constitutional and what isn’t, partisan liberal vs. conservative disputes can be seen as akin to religious disputes. People have debated for millennia about which God is the “real” God or what the real God’s words really mean. For religious disputes, there is neither evidence nor defined terms of debate. That makes religious disagreements unresolvable and pointless unless the combatants just happen to decide to agree on something.

Political disagreements are a lot like that. They are usually based on (i) little or no evidence and (ii) subjective personal perceptions of reality and personal ideology or morals. That makes most political disputes unresolvable and pointless. Americans have been bickering for centuries about what the Founding Fathers would have wanted or done about most everything. Those disputes will continue for centuries.

Evidence of innate, unconscious human irrationality about politics from academic research is overwhelming. Humans see and think about the world and issues through a lens of personal ideology or morals and unconscious biases. Unfortunately, personal lenses are powerful fact and logic distorters. When experts are carefully scrutinized and evaluated, their ability to see future events is poor, about the same as random guessing. Some people are exceptions and have real talent, but for the most part expert predictions of future events and policy outcomes are useless.

An easy solution . . . .
Happily, there is a simple way to inject more objectivity and rationality into politics. It amounts to consciously gathering and analyzing data to test policy choices and see how well they do once implemented. That has been suggested from time to time in various contexts, e.g., as an experiment in states’ rights or as policy-making modeled on the randomized controlled trials used in medicine to test the safety and efficacy of a new drug or clinical treatment protocol. It really is that simple: just collect and analyze data, and use comparison groups and/or policy variants when it is feasible to do so. Politics can be made more rational if there is a will to do it.
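As a concrete illustration of the comparison-group idea, here is a minimal sketch in Python. The outcome measure, effect size, group counts and every number are hypothetical; a real policy trial would involve far more careful design and analysis than this.

```python
# Minimal sketch of an RCT-style policy comparison with made-up data:
# pilot a policy in randomly chosen "treatment" areas, leave "control"
# areas as-is, then compare outcomes.
import random
import statistics

random.seed(42)

def simulate_outcomes(n, baseline, policy_effect):
    """Simulate an outcome metric (e.g., an employment rate) for n areas."""
    return [random.gauss(baseline + policy_effect, 5.0) for _ in range(n)]

# Hypothetical numbers, purely illustrative.
control   = simulate_outcomes(50, baseline=60.0, policy_effect=0.0)
treatment = simulate_outcomes(50, baseline=60.0, policy_effect=2.5)

observed_diff = statistics.mean(treatment) - statistics.mean(control)

# Simple permutation test: how often does random relabeling of areas produce
# a difference at least as large as the one actually observed?
pooled = control + treatment
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:50]) - statistics.mean(pooled[50:])
    if abs(diff) >= abs(observed_diff):
        count += 1

print(f"observed difference: {observed_diff:.2f}")
print(f"permutation p-value: {count / trials:.4f}")
```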

. . . that cannot be implemented
Sadly, the easy solution is impossible to implement under America’s deranged two-party system and its corrupt, incompetent brand of partisan politics. In a recent article advocating a randomized controlled trial (RCT) approach to policy-making, The Economist articulated the implementation problem:
“The electoral cycle is one reason politicians shun RCTs. Rigorous evaluation of a new policy often takes years; reformers want results before the next election. Most politicians are already convinced of the wisdom of their plans and see little point in spending time and money to be proved right. Sometimes they may not care whether a policy works, as long as they are seen to be doing something.”
Evidence from social science research is clear that politicians and experts who are convinced of their own wisdom are far more likely to be wrong than right. Finding a solution to that little self-delusion conundrum is a necessary prelude to implementing the obvious, simple solution.

Friday, January 1, 2016

Superforecasting: Book review



Book Review
Superforecasting: The Art & Science of Prediction

What most accurately describes the essence of intelligent, objective, public service-oriented politics? Is it primarily an honest competition among the dominant ideologies of our times, a self-interested quest for influence and power, or a combination of the two? Does it boil down to understanding the biological functioning of the human mind and how it sees and thinks about the world? Or is it something else entirely?

Turns out, it isn’t even close. Superforecasting comes down squarely on the side of getting the biology right. Everything else is a distant second.

Superforecasting: The Art & Science of Prediction, written by Philip E. Tetlock and Dan Gardner (Crown Publishers, September 2015), describes Tetlock’s ongoing research into what factors, if any, contribute to a person’s ability to predict the future. In Superforecasting, Tetlock asks how well average but intellectually engaged people can do compared to experts, including professional national security analysts with access to classified information. What Tetlock and his team found was that the interplay between dominant, unconscious, distortion-prone intuitive human cognitive processes (“System 1” or the “elephant” as described before) and less-influential but conscious, rational processes (“System 2” or the “rider”) was a key factor in how well people predicted future events.

Tetlock observes that a “defining feature of intuitive judgment is its insensitivity to the quality of the evidence on which the judgment is based. It has to be that way. System 1 can only do its job of delivering strong conclusions at lightning speed if it never pauses to wonder whether the evidence at hand is flawed or inadequate, or if there is better evidence elsewhere. . . . . we are creative confabulators hardwired to invent stories that impose coherence on the world.”

It turns out that, with minimal training and the right mindset, some people, “superforecasters”, routinely trounce the experts. In a 4-year study, the “Good Judgment Project”, funded by the U.S. intelligence community’s Intelligence Advanced Research Projects Activity (IARPA), about 2,800 volunteers made over a million predictions on topics that ranged from potential conflicts between countries to currency fluctuations. Those predictions had to be, and were, precise enough to be analyzed and scored.
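The tournament scored probabilistic forecasts with Brier scores, a standard measure of forecast accuracy in which lower is better and 0 is perfect; that detail is not spelled out in this review but comes from the book. Here is a minimal sketch with made-up forecasts and hypothetical questions.

```python
# Minimal sketch of Brier scoring for binary questions (forecasts and
# outcomes below are hypothetical, for illustration only).

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities (0..1) and
    what actually happened (1 = event occurred, 0 = it did not)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster's probability estimates for five hypothetical questions...
forecasts = [0.9, 0.2, 0.7, 0.1, 0.6]
# ...and how those questions actually resolved.
outcomes  = [1,   0,   1,   0,   0]

# 0.0 is perfect; always guessing 50% scores 0.25 on binary questions.
print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")
```

Scoring like this is what makes it possible to say, with numbers rather than anecdotes, that one forecaster beat another.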

About 1% of the 2,800 volunteers turned out to be superforecasters who beat national security analysts by about 30% at the end of the first year. One even beat commodities futures markets by 40%. The superforecaster volunteers did whatever they could to get information, but they nonetheless beat professional analysts who were backed by computers and programmers, spies, spy satellites, drones, informants, databases, newspapers, books and whatever else lots of money can buy. As Tetlock put it, “. . . . these superforecasters are amateurs forecasting global events in their spare time with whatever information they can dig up. Yet they somehow managed to set the performance bar high enough that even the professionals have struggled to get over it, let alone clear it with enough room to justify their offices, salaries and pensions.”

What makes them so good?
The top 1-2% of volunteers were carefully assessed for personal traits. In general, superforecasters tended to be people who were eclectic about collecting information and open minded in their world view. They were also able to step outside of themselves and look at problems from an “outside view.” To do that they searched out and aggregated other perspectives, which goes counter to the human tendency to seek out only information that confirms what we already know or want to believe. That tendency is an unconscious bias called confirmation bias. The open minded trait also tended to reduce unconscious System 1 distortion of problems and potential outcomes by other unconscious cognitive biases such as the powerful but very subtle and hard to detect “what you see is all there is” bias, hindsight bias and scope insensitivity, i.e., not giving proper weight to the scope of a problem.

Superforecasters tended to break complex questions down into component parts so that relevant factors could be considered separately, which also tends to reduce unconscious bias-induced fact and logic distortions. In general, superforecaster susceptibility to unconscious biases was significantly lower than for other participants. That appeared to be due mostly to their capacity to use conscious System 2 thinking to recognize and then reduce unconscious System 1 biases. Most superforecasters shared 15 traits including (i) cautiousness based on an innate knowledge that little or nothing was certain, (ii) being reflective, i.e., introspective and self-critical, (iii) being comfortable with numbers and probabilities, and (iv) being pragmatic and not wedded to any particular agenda or ideology. Unlike political ideologues, they were pragmatic and did not try to “squeeze complex problems into the preferred cause-effect templates [or treat] what did not fit as irrelevant distractions.”
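To illustrate the decomposition habit, here is a minimal sketch in Python with an entirely hypothetical question and made-up probability estimates; nothing here comes from Tetlock's data.

```python
# A purely hypothetical sketch of the decomposition habit described above:
# break one hard question into smaller pieces that can be estimated (and
# revised) separately, then recombine them.

# Hypothetical question: "Will country X ratify trade deal Y within 12 months?"
p_talks_resume        = 0.7   # estimate: negotiations restart
p_deal_given_talks    = 0.5   # estimate: a deal is reached if talks restart
p_ratified_given_deal = 0.6   # estimate: ratification follows within the window

p_overall = p_talks_resume * p_deal_given_talks * p_ratified_given_deal
print(f"Combined estimate: {p_overall:.2f}")   # ~0.21

# New information (say, talks are formally announced) updates only one piece,
# and the overall forecast is recomputed rather than re-guessed from scratch.
p_talks_resume = 0.95
print(f"Updated estimate: {p_talks_resume * p_deal_given_talks * p_ratified_given_deal:.2f}")
```

The value of working this way is that new information changes one clearly identified piece of the estimate instead of triggering a vague, holistic re-guess.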

What the best forecasters knew about a topic, and their political ideology, was far less important than how they thought about problems, gathered information, and then updated their thinking and changed their minds based on new information. The best engaged in an endless process of information and perspective gathering, weighing the relevance of information, and questioning and updating their own judgments when it made sense. It was work that required effort and discipline. Political ideological rigidity was detrimental, not helpful.

Regarding common superforecaster traits, Tetlock observed that “a brilliant puzzle solver may have the raw material for forecasting, but if he also doesn’t have an appetite for questioning basic, emotionally-charged beliefs he will often be at a disadvantage relative to a less intelligent person who has a greater capacity for self-critical thinking.” Superforecasters have a real capacity for self-critical thinking. Political, economic and religious ideology is mostly beside the point.

Why this is important
The topic of predicting the future might seem to some to have little relevance and/or importance to politics and political policy. That belief is wrong. Tetlock cites an example that makes the situation crystal clear. In a 2014 interview with General Michael Flynn, head of the Defense Intelligence Agency, DoD’s 17,000-employee equivalent of the CIA, Gen. Flynn said, “I think we’re in a period of prolonged societal conflict that is pretty unprecedented.” A quick Google search of the phrase “global conflict trends” and some reading were all it took to show that belief was wrong.

Why did Gen. Flynn, a high-ranking, intelligent and highly accomplished intelligence analyst, make such an important, easily-avoided mistake? The answer lies in System 1 and its powerful but unconscious “what you see is all there is” (WYSIATI) bias. He succumbed to his incorrect belief because he spent 3-4 hours every day reading intelligence reports filled with mostly bad news. In Gen. Flynn’s world, that was all there was. In Flynn’s unconscious mind, his knowledge had to be correct, and he therefore didn’t bother to check his basic assumption. Most superforecasters would not have made that mistake. They train themselves to relentlessly pursue information from multiple sources and would have found what Google had to say about the situation.

Tetlock asserts that partisan pundits opining on all sorts of things routinely fall prey to the WYSIATI bias for the same reason. They frequently don’t check their assumptions against reality and/or will knowingly lie to advance their agendas. Simply put, partisan pundits are frequently wrong because of their ideological rigidity and the intellectual sloppiness it engenders. 

Limits and criticisms of forecasting
In Superforecasting, Tetlock points out that predicting the future has limits. Although Tetlock is not explicit about this, forecast accuracy for most questions appears to decline for time frames beyond about 18-36 months and eventually fades into randomness. That makes sense, given complexity and the number of factors that can affect outcomes. Politics and the flow of human events are simply too complicated for long-term forecasting to ever be feasible. What is not known is the time range beyond which the human capacity to predict fades into the noise of randomness. More research is needed.

A criticism of Tetlock’s approach argues that humans simply cannot foresee things and events that are so unusual that they are not even considered possible until the event or thing is actually seen or happens. Such things and events, called Black Swans, are also believed to dictate major turning points, so even trying to predict the future is futile. Tetlock rebuts that criticism, arguing that (i) there is no research to prove or disprove that hypothesis and (ii) clusters of small relevant questions can collectively point to a Black Swan or something close to it. The criticism does not yet amount to a fatal flaw - more research is needed.

Another criticism argues that superforecasters operating in a specified time frame, 1-year periods in this case, are flukes and they cannot defy psychological gravity for long. Instead, the criticism argues that superforecasters will simply revert to the mean and settle back to the ground the rest of us stand on. In other words, they would become more or less like everyone else with essentially no ability to predict future events.

The Good Judgment Project did allow testing of that criticism. The result was the opposite of what the criticism predicted. Although some faded, many of the people identified as superforecasters at the end of year 1 actually got better in years 2 and 3 of the 4-year experiment. Apparently, those people not only learned to limit the capacity of their unconscious System 1 (the elephant) to distort fact and logic, but they also consciously maintained that skill and improved on how the conscious, rational System 2 (the rider) was able to counteract the fact- and logic-distorting lenses of unconscious System 1 biases. Although the mental effort needed to be objective was high, most superforecasters could nonetheless defy psychological gravity, at least over a period of several years.

The intuitive-subjective politics problem
On the one hand, Tetlock sees a big upside for “evidence-based policy”: “It could be huge - an “evidence-based forecasting” revolution similar to the “evidence-based medicine” revolution, with consequences every bit as significant.” On the other hand, he recognizes the obstacle that intuitive or subjective (System 1 biased), status quo two-party partisan politics faces: “But hopelessly vague language is still so common, particularly in the media, that we rarely notice how vacuous it is. It just slips by. . . . . If forecasting can be co-opted to advance their [narrow partisan or tribe] interests, it will be. . . . . Sadly, in noisy public arenas, strident voices dominate debates, and they have zero interest in adversarial collaboration.”

The rational-objective politics theoretical solution
For evidence-based policy, Tetlock sees the Holy Grail of his research as “. . . . using forecasting tournaments to depolarize unnecessarily polarized policy debates and make us collectively smarter.” He asserts that consumers of forecasting need to “stop being gulled by pundits with good stories and start asking pundits how their past predictions fared - and reject answers that consist of nothing but anecdotes and credentials. And forecasters will realize . . . . that these higher expectations will ultimately benefit them, because it is only with the clear feedback that comes with rigorous testing that they can improve their foresight.”

What Tetlock is trying to do for policy will be uncomfortable for most standard narrow-ideology ideologues. That’s the problem with letting unbiased fact and logic roam free - they will go wherever they want without much regard for people’s personal ideologies or morals. For readers who follow Dissident Politics (“DP”) and its focus on “objective politics”, or ideology based on unbiased fact and unbiased logic in service to an “objectively” defined public interest, this may sound like someone has plagiarized someone else. It should. DP’s cognitive science-based ideology draws heavily on the work of social scientists including Dr. Tetlock, Daniel Kahneman, George Lakoff and Richard Thaler. Both Tetlock and DP advocate change via focusing policy and politics on understanding human biology and unspun reality, not on political ideology or undue attention to special interest demands.

Tetlock focuses on evidence-based policy, while DP’s focus is on evidence-based or “objective” politics. Those things differ somewhat, but not much. In essence, Tetlock is trying to coax pundits and policy makers into objectivity based on human cognitive science and higher competence by asking the public and forecast consumers to demand better from the forecasters they rely on to form opinions and world views. DP is trying to coax the public into objectivity by adopting a new, “objective” political ideology or set of morals based on human cognitive science. The hope is that over time both average people and forecasters will see the merits of objectivity. If widely accepted, either approach will eventually get society to about the same place. More than one path can lead to the same destination, which is politics based as much on the biology of System 2 cognition as the circumstances of American politics will allow.

One way to see it is as an effort to elevate System 2’s capacity to enlighten over System 1’s awesome power to hide and distort fact and logic. Based on Tetlock’s research, optimal policy making, and by extension optimal politics, does not boil down to being more conservative, liberal, capitalist, socialist or Christian. Instead, it is a matter of finding an optimum balance in the distribution of mental influence between the heavily biased intuition-subjectivity of unconscious System 1 and the less-biased reason-objectivity of conscious System 2, aided by statistics or an algorithm when a good one is available.

That optimum balance won’t lead to perfect policy or politics. But, the result will be significantly better in the long run than what the various, usually irrational, intuitive or subjective mind sets deliver now.

The rational-objective politics practical problem
For the attentive, the big problem has already jumped out as obvious. Tetlock concedes the point: “. . . . nothing may change. . . . . things may go either way.” Whether the future will be a “stagnant status quo” or change “will be decided by the people whom political scientists call the ‘attentive public.’ I’m modestly optimistic.” DP, being an experiment rather than a forecaster, does not know the answer. Tetlock and DP both face the same problem - how to foster the spread and acceptance of an idea or ideology among members of a species that tends to be resistant to change and biased against questioning innate morals and beliefs.

In essence, what Tetlock and DP both seek is to replace blind faith in personal political morals or ideology, and undue deference to narrow interests, with a new ideology grounded in an understanding of human cognitive biology and respect for unbiased facts and unbiased logic. Those narrow interests include special interests and individuals who see the world through the distorting lenses of standard subjective "narrow" ideologies such as American liberalism, conservatism, socialism, capitalism and/or Christianity. The goal of a biology-based political ideology or set of morals is to better serve the public interest.

There is some reason for optimism that citizens who adopt such objective political morals or values can come to have significant influence in American politics. Tetlock points to one observer, an engineer with the following observation: "'I think it's going to get stranger and stranger' for people to listen to the advice of experts whose views are informed only by their subjective judgment." Only time will tell if any optimism is warranted. Working toward more objectivity in politics is an experiment whose outcome cannot yet be predicted, at least not by DP. Maybe one of Tetlock's superforecasters would be up to that job.

Thursday, December 31, 2015

Assessing personal risk from terrorism

IVN published a Dissident Politics article on the very real difficulty of rationally assessing personal risks from terrorism (and other threats). The personal risk of death from a terrorist attack on any given American in any given year is very low, about 1 in 20 million. Despite that low risk, over 50% of Americans who planned to travel recently canceled, changed or delayed their travel plans.

The reason is that the unconscious human mind, which controls reactions to fear, does not use statistics to assess risk, so we unconsciously but grossly overestimate it. Thirty percent of Americans believe that they personally will be killed by a terrorist in the next 12 months, which amounts to a perceived 100% (1 in 1) chance instead of the actual 1 in 20 million. Based on the statistics, that wildly incorrect belief in the likelihood of personal attack in the next year is 20 million times too high. However, it seems perfectly reasonable, not too high, by the "logic" of false but persuasive, unconscious human 'psycho-logic', not real, statistics-based, unbiased logic.
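The arithmetic behind those claims is simple; here is a minimal sketch using the article's round numbers. The ~320 million population figure is an assumption added for illustration, not a figure from the article.

```python
# Minimal sketch of the arithmetic behind the claims above (figures are the
# post's round numbers plus an assumed U.S. population, not an independent
# risk estimate).

actual_annual_risk    = 1 / 20_000_000   # ~1 in 20 million chance per year
perceived_annual_risk = 1.0              # a perceived certainty (1 in 1)

overestimate_factor = perceived_annual_risk / actual_annual_risk
print(f"Overestimate: {overestimate_factor:,.0f}x too high")   # 20,000,000x

# Expected deaths per year at the stated rate, for an assumed ~320 million people:
population = 320_000_000
print(f"Implied deaths/year: {population * actual_annual_risk:.0f}")  # ~16
```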

In view of the data, not anyone's opinion, it is objectively irrational for anyone to change their travel plans unless a specific, credible threat exists. Despite that fact (not opinion), many people nonetheless change their behavior even when no significant threat exists.[1]

The bottom line is that to think and act rationally or objectively about risk, the rational mind has to impose statistics on our unconscious thinking when they are relevant. Most people simply don't do that. The press-media and politicians foster irrational emotions such as this kind of unfounded fear, and that indirectly fosters the irrational thinking and actions that flow from it. Under those circumstances, it is no wonder that Americans overreact - they are being deceived into misunderstanding by a self-serving two-party system, including the press-media industry, that benefits more from public misunderstanding than from understanding.

The article is here.

Footnote:
1. A "significant threat" is defined as a threat that has more than a 1 in 10,000 chance of actually happening in the time frame and under the conditions in which the threat is perceived, e.g., within a 1-year period for threat of personal terrorist attack. That is a Dissident Politics definition. There is no widely accepted definition for a "significant threat", so that is how Dissident Politics (DP) defines it to make what DP argues make any sense at all. No doubt, some or many others will define it as zero per year, less than a 1 in 10,000/year or something lower. Given the reality of American society, that makes little objective sense.

The lack of definitions for most terms in politics is why most political debate and discourse is largely meaningless and intellectually useless. The great selling point of such empty debate is that undefined terms reinforce the beliefs of people who want to believe what they want to believe instead of what is real. Like it or not, most people easily and unconsciously distort reality, including actual threats of personal risk, because that is just how the human mind evolved.