
Friday, January 1, 2016

Superforecasting: Book review



Book Review
Superforecasting: The Art & Science of Prediction

What most accurately describes the essence of intelligent, objective, public service-oriented politics? Is it primarily an honest competition among the dominant ideologies of our times, a self-interested quest for influence and power or a combination of the two? Does it boil down to understanding the biological functioning of the human mind and how it sees and thinks about the world? Or is it something else entirely?

Turns out, it isn’t even close. Superforecasting comes down squarely on the side of getting the biology right. Everything else is a distant second.

Superforecasting: The Art & Science of Prediction, written by Philip E. Tetlock and Dan Gardner (Crown Publishers, September 2015), describes Tetlock’s ongoing research into what factors, if any, contribute to a person’s ability to predict the future. In Superforecasting, Tetlock asks how well average but intellectually engaged people can do compared to experts, including professional national security analysts with access to classified information. What Tetlock and his team found was that the interplay between dominant, unconscious, distortion-prone intuitive human cognitive processes (“System 1” or the “elephant” as described before) and less-influential but conscious, rational processes (“System 2” or the “rider”) was a key factor in how well people predicted future events.

Tetlock observes that a “defining feature of intuitive judgment is its insensitivity to the quality of the evidence on which the judgment is based. It has to be that way. System 1 can only do its job of delivering strong conclusions at lightning speed if it never pauses to wonder whether the evidence at hand is flawed or inadequate, or if there is better evidence elsewhere. . . . . we are creative confabulators hardwired to invent stories that impose coherence on the world.”

It turns out that with minimal training and the right mindset, some people, “superforecasters,” routinely trounce the experts. In a 4-year study, the “Good Judgment Project,” funded by the Intelligence Advanced Research Projects Activity (IARPA), about 2,800 volunteers made over a million predictions on topics that ranged from potential conflicts between countries to currency fluctuations. Those predictions had to be, and were, precise enough to be analyzed and scored.
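For readers unfamiliar with how probability forecasts can be graded, the book describes scoring with Brier scores, which are essentially squared errors between stated probabilities and what actually happened. The sketch below is only an illustration of that kind of scoring; the forecasts and outcomes in it are invented, not project data.

```python
# Minimal sketch of Brier scoring for probability forecasts. All numbers are
# invented for illustration; this is not Good Judgment Project data or code.

def brier_score(p_yes: float, happened: bool) -> float:
    """Squared error summed over both answer categories (yes and no), so a
    binary-question score runs from 0.0 (perfect) to 2.0 (maximally wrong)."""
    outcome_yes = 1.0 if happened else 0.0
    return (p_yes - outcome_yes) ** 2 + ((1.0 - p_yes) - (1.0 - outcome_yes)) ** 2

# Hypothetical forecaster: (stated probability the event occurs, what happened).
forecasts = [(0.80, True), (0.30, False), (0.60, False), (0.95, True)]

mean_brier = sum(brier_score(p, h) for p, h in forecasts) / len(forecasts)
print(f"mean Brier score: {mean_brier:.3f}")  # lower is better; always saying 50/50 scores 0.5
```

Averaging a score like this over many questions is what makes it possible to say that one forecaster is measurably better than another.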

About 1% of the 2,800 volunteers turned out to be superforecasters who beat national security analysts by about 30% at the end of the first year. One even beat commodities futures markets by 40%. The superforecaster volunteers did whatever they could to get information, but they nonetheless beat professional analysts who were backed by computers and programmers, spies, spy satellites, drones, informants, databases, newspapers, books and whatever else lots of money can buy. As Tetlock put it, “. . . . these superforecasters are amateurs forecasting global events in their spare time with whatever information they can dig up. Yet they somehow managed to set the performance bar high enough that even the professionals have struggled to get over it, let alone clear it with enough room to justify their offices, salaries and pensions.”

What makes them so good?
The top 1-2% of volunteers were carefully assessed for personal traits. In general, superforecasters tended to be people who were eclectic about collecting information and open-minded in their world view. They were also able to step outside of themselves and look at problems from an “outside view.” To do that they sought out and aggregated other perspectives, which runs counter to the human tendency to seek out only information that confirms what we already know or want to believe, an unconscious bias called confirmation bias. The open-minded trait also tended to reduce unconscious System 1 distortion of problems and potential outcomes by other unconscious cognitive biases such as the powerful but very subtle and hard-to-detect “what you see is all there is” bias, hindsight bias and scope insensitivity, i.e., not giving proper weight to the scope of a problem.

Superforecasters tended to break complex questions down into component parts so that relevant factors could be considered separately, which also tends to reduce unconscious bias-induced fact and logic distortions. In general, superforecaster susceptibility to unconscious biases was significantly lower than for other participants. That appeared to be due mostly to their capacity to use conscious System 2 thinking to recognize and then reduce unconscious System 1 biases. Most superforecasters shared 15 traits including (i) cautiousness based on an innate knowledge that little or nothing was certain, (ii) being reflective, i.e., introspective and self-critical, (iii) being comfortable with numbers and probabilities, and (iv) being pragmatic and not wedded to any particular agenda or ideology. Unlike political ideologues, they were pragmatic and did not try to “squeeze complex problems into the preferred cause-effect templates [or treat] what did not fit as irrelevant distractions.”
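One concrete form of that decomposition is the Fermi-style estimation the book describes: split a hard question into smaller ones that can each be estimated, then recombine the pieces. The question and every number in the sketch below are invented purely to illustrate the habit.

```python
# Hypothetical Fermi-style decomposition of a forecasting question into parts
# estimated separately and then multiplied back together. All numbers invented.

# Invented question: "Will a given trade agreement be signed within 12 months?"
components = {
    "negotiations resume at all":             0.7,
    "draft text agreed, given resumption":    0.6,
    "ratified in time, given an agreed text": 0.5,
}

estimate = 1.0
for step, prob in components.items():
    estimate *= prob
    print(f"{step:40s} {prob:.2f}   running estimate: {estimate:.3f}")

print(f"\noverall estimate: {estimate:.2f}")  # ~0.21, well below gut feel for any single step
```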

What the best forecasters knew about a topic and their political ideology was far less important than how they thought about problems, gathered information and then updated their thinking and changed their minds based on new information. The best engaged in an endless process of information and perspective gathering, weighing information relevance and questioning and updating their own judgments when it made sense. It was work that required effort and discipline. Ideological rigidity was detrimental, not helpful.
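The updating described here is, in spirit, Bayesian: start from a base rate, then nudge the estimate up or down as each new piece of information arrives, instead of either ignoring it or overreacting. A minimal sketch of that process, with an invented prior and invented likelihoods:

```python
# Minimal sketch of incremental belief updating with Bayes' rule. The prior,
# the evidence items and their likelihoods are all invented for illustration.

def bayes_update(prior: float, p_seen_if_true: float, p_seen_if_false: float) -> float:
    """Posterior probability of the event after seeing one piece of evidence."""
    numerator = p_seen_if_true * prior
    return numerator / (numerator + p_seen_if_false * (1.0 - prior))

belief = 0.15  # hypothetical base rate for the event in question

# Each item: how likely we'd see this news if the event were coming vs. not.
evidence_stream = [
    ("troop movements reported",  0.6, 0.2),
    ("diplomatic talks collapse", 0.7, 0.3),
    ("markets stay calm",         0.4, 0.6),  # mildly reassuring news pulls the estimate down
]

for label, p_if_true, p_if_false in evidence_stream:
    belief = bayes_update(belief, p_if_true, p_if_false)
    print(f"after '{label}': belief = {belief:.2f}")
```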

Regarding common superforecaster traits, Tetlock observed that “a brilliant puzzle solver may have the raw material for forecasting, but if he also doesn’t have an appetite for questioning basic, emotionally-charged beliefs he will often be at a disadvantage relative to a less intelligent person who has a greater capacity for self-critical thinking.” Superforecasters have a real capacity for self-critical thinking. Political, economic and religious ideology is mostly beside the point.

Why this is important
The topic of predicting the future might seem to have little relevance to politics and political policy. That belief is wrong. Tetlock cites an example that makes the situation crystal clear. In a 2014 interview, General Michael Flynn, head of the Defense Intelligence Agency, DoD’s 17,000-employee equivalent to the CIA, said “I think we’re in a period of prolonged societal conflict that is pretty unprecedented.” A quick Google search of the phrase “global conflict trends” and some reading was all it took to prove that belief was wrong.

Why did Gen. Flynn, a high-ranking, intelligent and highly accomplished intelligence analyst, make such an important, easily avoided mistake? The answer lies in System 1 and its powerful but unconscious “what you see is all there is” (WYSIATI) bias. He succumbed to his incorrect belief because he spent 3-4 hours every day reading intelligence reports filled with mostly bad news. In Gen. Flynn’s world, that was all there was. In Flynn’s unconscious mind, his knowledge had to be correct and he therefore didn’t bother to check his basic assumption. Most superforecasters would not have made that mistake. They train themselves to relentlessly pursue information from multiple sources and would have found what Google had to say about the situation.

Tetlock asserts that partisan pundits opining on all sorts of things routinely fall prey to the WYSIATI bias for the same reason. They frequently don’t check their assumptions against reality and/or will knowingly lie to advance their agendas. Simply put, partisan pundits are frequently wrong because of their ideological rigidity and the intellectual sloppiness it engenders. 

Limits and criticisms of forecasting
In Superforecasting, Tetlock points out that predicting the future has limits. Although Tetlock is not explicit about this, forecasts for most questions more than about 18-36 months out appear to become less and less accurate, eventually fading into randomness. That makes sense, given complexity and the number of factors that can affect outcomes. Politics and the flow of human events are simply too complicated for very long-term forecasting to be feasible. What is not known is the ultimate time range at which the human capacity to predict fades into the noise of randomness. More research is needed.

A criticism of Tetlock’s approach argues that humans simply cannot foresee things and events that are so unusual that they are not even considered possible until the event or thing is actually seen or happens. Such things and events, called Black Swans, are also believed to dictate major turning points, so even trying to predict the future is futile. Tetlock rebuts that criticism, arguing that (i) there is no research to prove or disprove that hypothesis and (ii) clustered small relevant questions can collectively point to a Black Swan or something close to it. The criticism does not yet amount to a fatal flaw - more research is needed.

Another criticism argues that superforecasters operating in a specified time frame, 1-year periods in this case, are flukes and they cannot defy psychological gravity for long. Instead, the criticism argues that superforecasters will simply revert to the mean and settle back to the ground the rest of us stand on. In other words, they would become more or less like everyone else with essentially no ability to predict future events.

The Good Judgment Project did allow testing of that criticism. The result was the opposite of what the criticism predicted. Although some faded, many of the people identified as superforecasters at the end of year 1 actually got better in years 2 and 3 of the 4-year experiment. Apparently, those people not only learned to limit the capacity of their unconscious System 1 (the elephant) to distort fact and logic, but they also consciously maintained that skill and improved how the conscious, rational System 2 (the rider) counteracted the fact- and logic-distorting lenses of unconscious System 1 biases. Although the mental effort needed to be objective was high, most superforecasters could nonetheless defy psychological gravity, at least over a period of several years.
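The regression-to-the-mean criticism is easy to make concrete: if year-1 rankings reflected nothing but luck, the top scorers should drift back toward average the next year. The toy simulation below, using pure-chance forecasters and invented parameters, shows what that drift would look like; it illustrates the criticism, not the project's data, and the point of the study's result is that the real superforecasters did not behave this way.

```python
# Toy simulation of the regression-to-the-mean criticism: luck-only forecasters
# who top the year-1 rankings fall back to ~50% accuracy in year 2.
# Purely illustrative; not the Good Judgment Project's data or method.
import random

random.seed(0)
N_FORECASTERS, N_QUESTIONS = 2800, 100

def lucky_year_score() -> float:
    """Fraction of binary questions a pure-chance forecaster gets right in one year."""
    return sum(random.random() < 0.5 for _ in range(N_QUESTIONS)) / N_QUESTIONS

year1 = [lucky_year_score() for _ in range(N_FORECASTERS)]
year2 = [lucky_year_score() for _ in range(N_FORECASTERS)]
top_1pct = sorted(range(N_FORECASTERS), key=lambda i: year1[i], reverse=True)[:28]

print(f"year-1 accuracy of the luck-only 'top 1%': {sum(year1[i] for i in top_1pct) / 28:.2f}")
print(f"year-2 accuracy of the same people:        {sum(year2[i] for i in top_1pct) / 28:.2f}")
```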

The intuitive-subjective politics problem
On the one hand, Tetlock sees a big upside for “evidence-based policy”: “It could be huge - an “evidence-based forecasting” revolution similar to the “evidence-based medicine” revolution, with consequences every bit as significant.” On the other hand, he recognizes the obstacle that intuitive or subjective (System 1 biased), status quo two-party partisan politics faces: “But hopelessly vague language is still so common, particularly in the media, that we rarely notice how vacuous it is. It just slips by. . . . . If forecasting can be co-opted to advance their [narrow partisan or tribe] interests, it will be. . . . . Sadly, in noisy public arenas, strident voices dominate debates, and they have zero interest in adversarial collaboration.”

The rational-objective politics theoretical solution
For evidence-based policy, Tetlock sees the Holy Grail of his research as “. . . . using forecasting tournaments to depolarize unnecessarily polarized policy debates and make us collectively smarter.” He asserts that consumers of forecasting need to “stop being gulled by pundits with good stories and start asking pundits how their past predictions fared - and reject answers that consist of nothing but anecdotes and credentials. And forecasters will realize . . . . that these higher expectations will ultimately benefit them, because it is only with the clear feedback that comes with rigorous testing that they can improve their foresight.”
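Asking how past predictions fared has a concrete, checkable form: calibration. Group a pundit's old forecasts by the probability they stated and compare each group's stated probability with how often the predicted events actually happened. The sketch below is a generic calibration check with invented records, not Tetlock's scoring code.

```python
# Sketch of a calibration check: bucket past forecasts by stated probability and
# compare each bucket to the observed frequency of the event. Records are invented.
from collections import defaultdict

records = [  # (stated probability, did the event happen?)
    (0.9, True), (0.9, True), (0.9, False),
    (0.7, True), (0.7, False), (0.7, True),
    (0.3, False), (0.3, False), (0.3, True),
    (0.1, False), (0.1, False),
]

buckets = defaultdict(list)
for prob, happened in records:
    buckets[prob].append(happened)

print("stated prob | observed frequency | n")
for prob in sorted(buckets):
    outcomes = buckets[prob]
    print(f"    {prob:.1f}     |        {sum(outcomes) / len(outcomes):.2f}        | {len(outcomes)}")
# A well-calibrated forecaster's observed frequencies track the stated probabilities.
```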

What Tetlock is trying to do for policy will be uncomfortable for most adherents of standard narrow ideologies. That’s the problem with letting unbiased fact and logic roam free - they will go wherever they want without much regard for people’s personal ideologies or morals. For readers who follow Dissident Politics (“DP”) and its focus on “objective politics” or ideology based on unbiased fact and unbiased logic in service to an “objectively” defined public interest, this may sound like someone has plagiarized someone else. It should. DP’s cognitive science-based ideology draws heavily on the work of social scientists including Dr. Tetlock, Daniel Kahneman, George Lakoff and Richard Thaler. Both Tetlock and DP advocate change via focusing policy and politics on understanding human biology and unspun reality, not political ideology or undue deference to special interest demands.

Tetlock focuses on evidence-based policy, while DP’s focus is on evidence-based or “objective” politics. Those things differ somewhat, but not much. In essence, Tetlock is trying to coax pundits and policy makers into objectivity based on human cognitive science and higher competence by asking the public and forecast consumers to demand better from the forecasters they rely on to form opinions and world views. DP is trying to coax the public into objectivity by adopting a new, “objective” political ideology or set of morals based on human cognitive science. The hope is that over time both average people and forecasters will see the merits of objectivity. If widely accepted, either approach will eventually get society to about the same place. More than one path can lead to the same destination, which is politics based as much on the biology of System 2 cognition as the circumstances of American politics will allow.

One way to see it is as an effort to elevate System 2’s capacity to enlighten over System 1’s awesome power to hide and distort fact and logic. Based on Tetlock’s research, optimal policy making, and by extension, optimal politics, does not boil down to being more conservative, liberal, capitalist, socialist or Christian. Instead, it is a matter of finding an optimum balance in the distribution of mental influence between the heavily biased intuition-subjectivity of unconscious System 1 and the less-biased reason-objectivity of conscious System 2, aided by statistics or an algorithm when a good one is available.

That optimum balance won’t lead to perfect policy or politics. But, the result will be significantly better in the long run than what the various, usually irrational, intuitive or subjective mind sets deliver now.

The rational-objective politics practical problem
For the attentive, the big problem has already jumped out as obvious. Tetlock concedes the point: “. . . . nothing may change. . . . . things may go either way.” Whether the future will be a “stagnant status quo” or change “will be decided by the people whom political scientists call the ‘attentive public.’ I’m modestly optimistic.” DP, being an experiment rather than a forecaster, does not know the answer. Tetlock and DP both face the same problem - how to foster the spread and acceptance of an idea or ideology among members of a species that tends to be resistant to change and biased against questioning innate morals and beliefs.

In essence, what Tetlock and DP both seek is to replace blind faith in personal political morals or ideology, and undue influence by narrow special interests, with a new ideology grounded in an understanding of human cognitive biology and respect for unbiased facts and unbiased logic. The goal of a biology-based political ideology or set of morals is to better serve the public interest. The narrow interests being displaced include special interests and individuals who see the world through the distorting lenses of standard subjective "narrow" ideologies such as American liberalism, conservatism, socialism, capitalism and/or Christianity.

There is some reason for optimism that citizens who adopt such objective political morals or values can come to have significant influence in American politics. Tetlock points to one observer, an engineer, who remarked that “I think it's going to get stranger and stranger” for people to listen to the advice of experts whose views are informed only by their subjective judgment. Only time will tell if any optimism is warranted. Working toward more objectivity in politics is an experiment whose outcome cannot yet be predicted, at least not by DP. Maybe one of Tetlock's superforecasters would be up to that job.
