Tuesday, November 15, 2022
Book Review: Expert Political Judgment
I do not pretend to start with precise questions. I do not think you can start with anything precise. You have to achieve such precision as you can, as you go along. — Bertrand Russell, philosopher commenting on the incremental nature of progress in human knowledge and understanding
“People for the most part dislike ambiguity . . . . people find it hard to resist filling in the missing data points with ideologically scripted event sequences. . . . People for the most part also dislike dissonance . . . . [but] policies that one is predisposed to detest sometimes have positive effects . . . . regimes in rogue states may have more popular support than we care to admit -- dominant options that beat all the alternatives are rare.”
“The core function of political belief systems is not prediction; it is to promote the comforting illusion of predictability.”
“Human performance suffers because we are, deep down, deterministic thinkers with an aversion to probabilistic strategies that accept the inevitability of error. We insist on looking for order in random sequences.”
“. . . . we have yet to confront the most daunting of all the barriers to implementation [of an objective system to evaluate expert performance]: the reluctance of professionals to participate. If one has carved out a comfortable living under the old regime of close-to-zero accountability for one’s pronouncements, one would have to be exceptionally honest or masochistic to jeopardize so cozy an arrangement by voluntarily exposing one’s predictions to the rude shock of falsification.”
“Human nature being what it is, and the political system creating the perversely self-justifying incentives that it does, I would expect, in short order, faux rating systems to arise that shill for the representatives of points of view who feel shortchanged by even the most transparent evaluation systems that bend over backward to be fair. The signal-to-noise ratio will never be great in a cacophonously pluralistic society such as ours.” -- Philip E. Tetlock, Expert Political Judgment, 2005
Context: For the most part, this channel is devoted to advocacy for a new, science-based political ideology and set of morals that recognize and accept human cognitive and social biology as sources of (i) disconnects from reality (facts) and reason (logic), and (ii) unwarranted inefficiency, intolerance, distrust, conflict and the like. To this observer's knowledge, this book is the single best source of data demonstrating the power of political ideology to distort fact and logic. Measuring expert competence (or, more accurately, incompetence) is this book's sole focus.
Book review: Social psychologist Philip Tetlock's 2005 book, Expert Political Judgment: How Good Is It? How Can We Know?, summarizes about 20 years of his research into whether it is even possible to reliably measure how good expert opinions are, and if so, how good they are. For his research, Tetlock focused mostly on measuring the accuracy of thousands of expert predictions about global events to see whether that could afford a way to measure the competence of expert opinion.
After a massive research effort, two answers came back: (1) yes, expert opinions can be measured for accuracy, and (2) experts as a group are dreadful. Tetlock's research shows that a key reason experts rise to the level of expert is that (i) they are fluent at simplifying problems and solutions and (ii) their presentations sound authoritative. But for the most part, they're wrong about 80-90% of the time. In other words, expert opinions are about the same as the opinions of average people; in fact, there's barely any statistically detectable difference between most experts and random guessing. That's how good our experts, pundits, politicians and other assorted blowhards really are, i.e., they're worse than worthless. That assessment of more bad than good includes the damage, waste, social discord and loss of moral authority that flow from experts being wrong most of the time. One cannot be fair about this while ignoring the mistakes.
Arrrgh!! The computers are coming!: Another mind-blowing observation came from Tetlock's use of several algorithms to see how well computers do compared to human experts. The data was sobering. One simple algorithm performed about the same as human experts. No big deal. But more sophisticated models, generalized autoregressive distributed lag models, performed about 2.5-fold better than the very best humans. That is a massive difference in competence. Tetlock commented: “whereas the best human forecasters were hard-pressed to predict more than 20 percent of the total variability in outcomes…, the generalized autoregressive distributed lag models explained on average 47 percent of the variance.” One can imagine that, with time, algorithms will be improved to do even better.
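For readers curious about what that comparison means in practice, here is a minimal sketch in Python of fitting a toy autoregressive distributed lag model and scoring it by variance explained (R²), the statistic behind the 20-percent-versus-47-percent comparison. The model specification, variable names and simulated data are illustrative assumptions, not Tetlock's actual models or data.

```python
# Illustrative sketch only: a toy autoregressive distributed lag (ADL) model,
# not Tetlock's actual specification or data. An ADL model predicts an outcome
# from its own past values (autoregressive terms) plus lagged values of an
# explanatory variable (distributed lag terms).
import numpy as np

rng = np.random.default_rng(0)

# Simulated time series: an outcome y driven partly by its own past and by a lagged driver x.
n = 200
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] + 0.3 * x[t - 1] + rng.normal(scale=0.5)

# Build the ADL(1,1) design matrix: y_{t-1} and x_{t-1} predict y_t.
Y = y[2:]
X = np.column_stack([np.ones(n - 2), y[1:-1], x[1:-1]])

# Ordinary least squares fit.
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = X @ beta

# "Variance explained" (R^2): 1 minus the ratio of unexplained to total variance.
r2 = 1 - np.sum((Y - pred) ** 2) / np.sum((Y - np.mean(Y)) ** 2)
print(f"ADL(1,1) variance explained (R^2): {r2:.2f}")
```

The point of the sketch is only that "variance explained" is a concrete, checkable number; a human forecaster and a statistical model can be graded on exactly the same scale.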
Tetlock doesn't advocate replacing humans with computers. He is suggesting that when a validated algorithm is available, experts would be well-advised to use it and take what it says into account. That seems perfectly reasonable.
Foxes and Hedgehogs: Tetlock identifies two basic mindsets, the Foxes and the Hedgehogs, distinguished by their cognitive approach to analyzing issues and making predictions; the distinction is one of cognitive style, and both styles are found across the political spectrum. Foxes, to a small but real degree, do better than Hedgehogs. Hedgehog thinking can be accurate depending on the issue at hand, but over a range of issues its focus on a few key values or concepts limits its capacity to do well in the long run. By contrast, the Fox mindset is more fluid and less ideologically constrained. Regarding political ideology, Tetlock comments: “The core function of political belief systems is not prediction; it is to promote the comforting illusion of predictability.”
Regarding motivated reasoning or cognitive dissonance: “People for the most part dislike dissonance, a generalization that particularly applies to the Hedgehogs . . . . They prefer to organize the world into evaluative gestalts that couple good causes to good effects and bad to bad. Unfortunately, the world can be a morally messy place . . . . regimes in rogue states may have more popular support than we care to admit -- dominant options that beat the alternatives on all possible dimensions are rare.”
Does some of that sound at least vaguely familiar? It ought to.
Why do bad experts persist?: Tetlock's data shows that bad experts persist for a range of reasons:
1. No one keeps track of their performance over time and they're never held accountable for mistakes. No one measures and grades experts (except Tetlock).
2. They are expert at explaining away their mistakes, sometimes incoherently, e.g., (i) I was almost right, (ii) I was wrong, but for the right reasons, (iii) that intervening event was unforeseeable, so it's not my fault, (iv) etc.
3. They appeal to people's emotions and biases that make them appear right, even when there is plenty of evidence that they are wrong.
4. The unconscious hindsight bias leads most experts to believe they did not make their past mistakes, i.e., they deny they guessed wrong and instead firmly believe their prediction was correct.
5. Experts are expert at couching their predictions in language that makes measuring accuracy impossible, e.g., (i) they don't specify by what time their predictions will come to pass, and (ii) they use soft language that doesn't amount to a firm prediction, e.g., ‘it is likely that X will happen,’ without specifying the odds or what ‘likely’ means. (A sketch of what a scoreable prediction looks like follows this list.)
6. Etc.
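To make point 5 concrete, here is a minimal sketch in Python of scoring explicit probabilistic forecasts with a Brier score, the kind of probability scoring Tetlock's grading relies on. The forecasts, deadlines and outcomes below are invented for illustration, not his data.

```python
# Illustrative sketch only: scoring explicit probabilistic forecasts with a
# Brier score (mean squared error between stated probabilities and outcomes).
# A vague claim like "X is likely" cannot be scored; a forecast like
# "70% chance X happens by December 31" can. The numbers below are made up.

forecasts = [0.7, 0.9, 0.2, 0.5]   # stated probabilities that each event occurs by its deadline
outcomes  = [1,   0,   0,   1]     # what actually happened: 1 = occurred, 0 = did not

brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")  # 0 is perfect; always saying 50% earns exactly 0.25
```

The requirement is simple: a prediction can only be graded if it states a probability, an outcome and a deadline. Language that omits any of those keeps the expert safely unaccountable.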
Tetlock's book is not easy to read. It could be assigned in a college course in social psychology or political science, and the data is often expressed in statistical terms. Nonetheless, there is more than enough plain language for a lay reader with a high school education to fully grasp the book's main point: competent expert judgment in politics is discomfortingly rare.
When it comes to politics, Tetlock isn't naïve: “Human nature being what it is, and the political system creating the perversely self-justifying incentives that it does, I would expect, in short order, faux rating systems to arise that shill for the representatives of points of view who feel shortchanged by even the most transparent evaluation systems that bend over backward to be fair. The signal-to-noise ratio will never be great in a cacophonously pluralistic society such as ours.”
Remember, that was 2005. This is 2018. The weak signal is fading in the increasing roar of blithering noise in the form of lies, deceit, character assassination, unwarranted fear mongering and other forms of nonsense.
Question: Was Tetlock's 2005 prediction that faux rating systems would arise in ‘short order’ to hype the reputations of inept experts mostly correct, or has it sufficed for dissatisfied people to simply deny that the existing rating systems are credible?
Note: In 2017, Tetlock published a second edition. The first chapter is here.
B&B orig: 2/12/18; DP 8/7/19