DP Etiquette

First rule: Don't be a jackass.

Other rules: Do not attack or insult people you disagree with. Engage with facts, logic and beliefs. Out of respect for others, please provide sources for the facts and truths you rely on when asked. If emotion is getting out of hand, get it back in hand. To limit dehumanizing people, don't call people or whole groups of people disrespectful names, e.g., stupid, dumb or liar. Insulting people is counterproductive to rational discussion. Insults make people angry and defensive. All points of view are welcome, right, center, left and elsewhere. Just disagree, but don't be belligerent or reject inconvenient facts, truths or defensible reasoning.

Monday, March 25, 2019

The Science of Morality & Human Well-Being

March 25, 2019


Nihilism: 1. the rejection of all religious and moral principles, in the belief that life is meaningless; 2. belief that all values are baseless and that nothing can be known or communicated

 In the last few months, some commentary here and elsewhere has raised the idea that many concepts related to politics, such as good and evil, fact and non-fact, logic and illogic, and truth and lie, are essentially meaningless. Meaninglessness arises from subjectivity that can be inherent in things one might think of as mostly objective. For example, some people believe it is a fact that there is a strong consensus among expert climate scientists that anthropogenic global warming is real. About 27% of Americans believe that is false, and no amount of discussion or citing of fact sources will change most (~98%?) of those minds.

 Does that mean there is no way to discern facts or truth from lies or misinformation? When it comes to morality, is nihilism basically correct, making contemplation of morality from any point of view too subjective to be meaningful in any way?

  In another example, the rule of law concept is seen by some analysts as an essentially contested concept, i.e., something so subjective that it cannot be defined in a way a large majority of people will agree on, including when it applies. If the rule of law cannot be defined, how can what is moral and what isn't be defined?

Pragmatic rationalism: The anti-bias ideology advocated here, “pragmatic rationalism”, is built on four core moral values: (1) respect for objective facts and truth, to the extent they can be ascertained, (2) application of less biased logic (conscious reasoning) to those facts and truths, (3) service to the public interest, which is conceived as a transparent competition of ideas constrained by facts and logic, and (4) reasonable compromise in view of political, social and other relevant factors. If nihilism is correct, the anti-bias ideology is nonsense.

Science and morality: In his 2010 book, The Moral Landscape: How Science Can Determine Human Values, neuroscientist Sam Harris argues that questions of morals, human behavior and well-being are grounded enough in facts about the world and the brain that there is a great deal of objectivity in morality. In essence, Harris argues that science can find what fosters human well-being by examining what tends to make people, e.g., happy or unhappy, socially integrated or not. On morals, religion, secularism and the role of science in discovering morality, Harris writes:
On the first account, to speak of moral truth is, of necessity, to invoke God; on the second, it is merely to give voice to one’s apish urges, cultural biases and philosophical confusion. My purpose is to persuade you that both sides in this debate are wrong. The goal of this book is to begin a conversation about how moral truth can be understood in the context of science.
While the argument I make in this book is bound to be controversial, it rests on a very simple premise: human well-being entirely depends on events in the world and on states of the human brain. A more detailed understanding of these truths will force us to draw clear distinctions between different ways of living in society with one another, judging some to be better or worse, more or less true to the facts, and more or less ethical. I am not suggesting that we are guaranteed to resolve every moral controversy through science. Differences of opinion will remain, but opinions will be increasingly constrained by facts.
Does our inability to gather the relevant data oblige us to respect all opinions equally? Of course not. In the same way, the fact that we may not be able to resolve specific moral dilemmas does not suggest that all competing responses to them are equally valid. In my experience, mistaking no answers in practice for no answers in principle is a great source of moral confusion.
The deeper point is that there simply must be answers to questions of this kind, whether we know them or not. And these are not areas where we can afford to respect the “traditions” of others and agree to disagree. . . . . I hope to show that when we are talking about values, we are actually talking about an interdependent world of facts.
There are facts to be understood about how thoughts and intentions arise in the human brain; there are further facts to be known about how these behaviors influence the world and the experience of other conscious beings. We will see that facts of this sort will exhaust what we can reasonably mean by terms like “good” and “evil”. They will increasingly fall within the purview of science and run far deeper than a person’s religious affiliation. Just as there is no such thing as Christian physics or Muslim algebra, we will see that there is no such thing as Christian or Muslim morality. Indeed, I will argue that morality should be considered an undeveloped branch of science. 
Having received tens of thousands of emails and letters from people at every point on the continuum between faith and doubt, I can say with some confidence that a shared belief in the limitations of reason lies at the bottom of these cultural divides. Both sides [Christian conservatives and secular liberals] believe that reason is powerless to answer the most important questions in human life.
The scientific community’s reluctance to take a stand on moral issues has come at a price. It has made science appear divorced, in principle, from the most important questions of human life.
It seems inevitable, however, that science will gradually encompass life’s deepest questions. How we respond to the resulting collision of worldviews will influence the progress of science, of course, but may also determine whether we succeed in building a global civilization based on shared values. . . . . Only a rational understanding of human well-being will allow billions of us to coexist peacefully, converging on the same social, political, economic and environmental goals. A science of human flourishing may seem a long way off, but to achieve it, we must first acknowledge that the intellectual terrain actually exists.
Harris is right, nihilism is wrong: If Harris is correct that intellectual moral terrain actually exists and is subject to scientific scrutiny, then pragmatic rationalism would seem to be a political counterpart of Harris’ vision of what can lead to long-run human well-being. Maybe it is personal bias and/or the remarkably good fit between what Harris argues and the core moral values that pragmatic rationalism is built on, but Harris seems right. Science can shed light on an at least somewhat objective vision of right and wrong, good and evil. Nihilism is wrong and destructive of both self and civilization.

Saturday, March 23, 2019

What the unconscious mind thinks of interracial marriage

Saturday, March 23, 2019

The human mind operates simultaneously on two fundamentally different tracks, unconscious thinking and conscious thinking. Recent estimates attribute about 95-99.9% of human mental bandwidth, decision-making influence or "firepower" to unconscious thinking. The rest is our conscious thinking. Conscious and unconscious thinking or decisions can be in conflict. The split human reaction to interracial marriage is a case in point.

A recent Washington Post article describes brain responses to photos of same-race and different-race married heterosexual couples. The article is based on data published in a recent Journal of Experimental Social Psychology paper, "Yuck, you disgust me! Affective bias against interracial couples."

Brain scans of people who claim to have no disapproval of or bias against interracial couples show disgust or disapproval in response to photos of interracial couples (black and white), but not photos of black couples or white couples.

According to the WP article, "Researchers found that the insula, a part of the brain that registers disgust, was highly active when participants viewed the photos of the interracial couples, but was not highly engaged when viewers saw the images of same-race couples, whether they were white or black."

This shows the possibility of disconnects between what the unconscious mind sees, thinks and decides, i.e., disgust toward interracial couples, and what the conscious mind sees, thinks and decides, i.e., acceptance of interracial couples.

What may be unusual about this difference of opinion is that, at least for young people (college student volunteers in this case), the conscious mind dictates personal belief and behavior toward interracial couples despite a contrary innate unconscious belief or judgment. For politics and social matters like this, that triumph of the weak human conscious mind over our powerful unconscious mind is the rare exception, not the rule.

For better or worse, that's just the nature of what evolution conferred on the human species in terms of how we see and think about what we think we see in the world.

Questions: Does the data show disgust, or is data obtained from the human brain simply not believable? Is it possible that our unconscious mind can be so powerful compared to our conscious thoughts and reason?

The human species' greatest threat

Saturday, March 23, 2019


The human species faces a number of threats that could damage civilization or, in the worst case, lead to extinction. A major nuclear war would at least significantly damage civilization. At least hundreds of millions of people would die. Polluting human activity could initiate a chain reaction that leads to a toxic environment and possibly human extinction. Various climate change episodes that caused mass land animal extinctions are known, e.g., anoxic events and the Permian-Triassic extinction event or Great Dying of about 252 million years ago. Given incomplete human knowledge, it is possible that human activity could trigger such an event without human awareness until it is too late to save the species.

If humans do wind up damaging or destroying modern civilization or even annihilating the human species, the ultimate cause would necessarily come from some sort of human behavior that is at least theoretically avoidable. The question is, what is mankind's greatest survival threat?

This discussion excludes threats that humans simply cannot affect or prevent, e.g., a mass extinction caused by eruption of a supervolcano.

The human cognition threat: From a cognitive and social science point of view, the greatest threat lies in the nature of human cognition and the irrational politics it engenders. That threat directly reflects human biology and, in turn, the intellectual firepower that evolution endowed the human species with. Whatever mental capacity humans have as individuals and when acting in groups or societies, it was undeniably sufficient to get humans to where we are today.

The unanswered question is whether what evolution resulted in is sufficient to survive our technology and ability to kill ourselves off.

Under the circumstances, humanity's greatest threat lies in the psychology of being human. The very nature of human sentience and the individual and group behavior that flow therefrom are the seeds of human self-annihilation. If, when and why the seeds might sprout are open questions. Nonetheless, the seeds are real and viable.

Within the last century, research from cognitive, social and other relevant branches of science proved that all humans are driven mostly by our unconscious minds, which are intuitive-emotional-moral. In terms of politics and religion, output from our unconscious minds is mostly not fact- and logic-based. Powerful unconscious biases heavily affect what little we wind up becoming consciously aware of. As a consequence, we are not primarily driven by objective fact or logic. Instead, (i) false perceptions of reality or facts, and (ii) conscious thinking (reason or common sense) heavily influenced by powerful unconscious biases drive thought, belief and behavior.

Although we are sentient and conscious, unconscious (intuitive-emotional-moral) mental bandwidth or thinking is estimated to be 100 million to 100 billion times more powerful than conscious thought. For better or worse, a human mental constitution dominated by unconscious intuitive knowledge and thought was sufficient for modern humans to survive and dominate.

None of that is a criticism of humans or their intellectual makeup. Those are objective facts based on modern science.

That biology applies to politics, and it always has. In other words, politics is mostly irrational, based on (i) false information and (ii) conscious thinking (common sense) that is heavily biased by unconscious personal beliefs and morals and by evolutionary biases that all humans share.

Misinformation is easy to acquire and very hard to reject, especially when rejecting it undermines personal ideology, belief or morals. Often or usually, there is insufficient information, or situations are too complex or opaque for true objectivity. The unconscious human mind nonetheless has to act in the face of that. In their 2016 book Democracy for Realists: Why Elections Do Not Produce Responsive Government, social scientists Christopher Achen and Larry Bartels summarized the human condition in politics like this:

“. . . . the typical citizen drops down to a lower level of mental performance as soon as he enters the political field. He argues and analyzes in a way which he would readily recognize as infantile within the sphere of his real interests. . . . cherished ideas and judgments we bring to politics are stereotypes and simplifications with little room for adjustment as the facts change. . . . . the real environment is altogether too big, too complex, and too fleeting for direct acquaintance. We are not equipped to deal with so much subtlety, so much variety, so many permutations and combinations. Although we have to act in that environment, we have to reconstruct it on a simpler model before we can manage it.”

All modern societies operate under some form of government and political system. What nations, societies, groups and individuals do and don't do is governed by human biology. That is mostly governed by our heavily biased, unconscious perceptions of reality (facts) and thinking. That irrationality, disconnection from reality and associated group behavior, including a lack of empathy toward outsiders, is where the greatest threat to the human species resides.

Questions: Is humanity's greatest threat the imperfect cognitive and social biology that underpins politics? If it is, can our weak, usually deceived and misinformed conscious minds do anything to change the status quo? Or, as some cognitive and social scientists at least imply, are humans destined to never rise much above their innate cognitive and social biology, leaving the fate of the human species up to irrational biology?

B&B orig: 8/16/16

Book Review: The User Illusion

Saturday, March 23, 2019

In The User Illusion: Cutting Consciousness Down To Size (1991; English translation, Penguin Books, 1998), Danish science writer Tor Norretranders dissects the powerful illusion that what humans see and think is real. The User Illusion (TUI) relentlessly describes human consciousness and the false reality that we believe is real. TUI is about the constraints on knowledge. The 2nd law of thermodynamics and its curse of ever-increasing disorder (entropy), information theory and mathematics all make it indisputable that everything sentient in the universe operates under severe constraints. That includes all forms of life and the limits of the human mind. To believe otherwise is a mistake, or more accurately, an illusion.

TUI’s chapter 6, The Bandwidth of Consciousness, gets right to the heart of matters. Going there is an enlightening but humbling experience. When awake, the information flow from human sensory nerves to the brain is about 11.2 million bits per second, with the eyes bringing in about 10 million bits per second, the skin about 1 million bits per second, and the ears and nose each bringing in about 100,000 bits per second. That’s a lot, right? No, it isn’t. The real world operates at unknowable trillions of gigabits/second, so what we see or perceive isn’t much. It’s puny, actually.

But, remember, we needed only enough capacity to survive, not to know the future 10 or 100 years in advance or to see a color we can’t see using eyes with just three different color sensing cell types. For human survival, three colors was good enough. Evidence of evolutionary success is a planet population of 7 billion humans that’s rapidly heading toward 8 billion.

Although that 11.2 million bits/second may sound feeble, things are much, much weirder than just that. The 11.2 million bits/second are flowing into our unconscious minds. We are not conscious of all of that. So, what is the bandwidth of consciousness? How much of the 11.2 million bits/second we sense do we become aware of? The answer is an even more humbling 1-50 bits/second. At least, that’s the estimated rate at which human consciousness processes the information it is aware of. Silently reading this discussion consumes about 45 bits/second, reading aloud consumes about 30 bits/second, multiplying and adding two numbers consumes about 12 bits/second, counting objects consumes about 3 bits/second and distinguishing between different degrees of taste sweetness consumes about 1 bit/second.
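The chapter's bandwidth arithmetic is easy to check. A minimal sketch (the figures are the book's estimates, not independent measurements, and the variable names are mine):

```python
# Norretranders' estimated sensory input rates, in bits per second.
sensory_bits_per_sec = {
    "eyes": 10_000_000,
    "skin": 1_000_000,
    "ears": 100_000,
    "nose": 100_000,
}

# Total inflow to the unconscious mind.
total_in = sum(sensory_bits_per_sec.values())
print(total_in)  # 11200000, i.e., ~11.2 million bits/s

# Estimated conscious bandwidth: roughly 1-50 bits/s. Even at the high end,
# the fraction reaching consciousness is under five millionths.
conscious_max = 50
fraction = conscious_max / total_in
print(fraction)  # ~4.5e-06
```

Even taking the most generous 50 bits/second figure, consciousness sees less than one part in 200,000 of what the senses deliver.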

What’s going on here??: It’s fair to ask what's really going on and why our brain operates this way. The answer to the last question is that (i) it’s all that was needed to survive, and (ii) the laws of nature and the nature of biological organisms like humans are simply limited in what they can do. The human brain is large relative to body size, but it nonetheless processes information at a maximum rate of only about 11.2 million bits/second because that’s what evolution conferred.

The more interesting question is what’s going on? What’s going on is that our unconscious mind takes in information at about 11.2 million bits/second, discards what’s not important or needed, which is about 11.2 million bits/second minus about 50 bits/second and then presents the little trickle of important information to consciousness. That’s how much bandwidth humans need, e.g., for finagling sex, spotting and running away from a hungry saber tooth cat before being eaten, finding or hunting food, or whatever was needed to survive.

Where things get very, very strange is in the presentation of the little trickle to consciousness. Discussing that step is a different discussion, but a glimpse of it as applied to politics is in the Democracy for Realists book review. This discussion focuses on the human brain operating system and the inputs and outputs it deals with and creates.

If one accepts the veracity of the science and Norretrander’s narrative, it is fair to say that the world that humans think they see is more illusion than real. Other chapters of TUI and the science behind the observations reinforce this reality of human cognition and its limits. For example, chapter 9, The Half-Second Delay, describes how our unconscious minds make decisions about 0.5 second before we become aware of what it is we have unconsciously decided. Despite that, we consciously believe that we made the decision at the moment we became aware of it. We trick ourselves.

In other words, we operate under an illusion that our conscious minds are making decisions when in fact that is the rare exception. The rule is that our unconscious minds are calling the shots most of the time. When it comes to perceiving reality, the low-bandwidth signal the brain uses to create a picture is a simulation that we routinely mistake for reality. As Norretranders sees it, consciousness is a fraud. That’s the user illusion.

B&B orig: 8/20/16

Book Review: Democracy For Realists

Saturday, March 23, 2019

In their book, Democracy for Realists: Why Elections Do Not Produce Responsive Government, social scientists Christopher Achen and Larry Bartels (Princeton University Press, 2016) describe the major disconnect between what people believe democracy should be, what it really is and why it exists. The difference flows from human social and cognitive biology.

That's no surprise. Human biology dictates that people's beliefs, perceptions and thinking about politics are usually more personal or subjective than objective and fact-based.

In democracies, the typical voter believes that people have preferences for what government should do and they pick leaders or vote their preferences in ballot initiatives. That then leads to majority preference becoming policy, which in turn, legitimizes government because the people consented through their votes. In that vision, government is ethical and has the people's interests at heart.

That folk theory isn't how democracy works. The authors point out that the false definition leads to cynicism and unhappiness: “One consequence of our reliance on old definitions is that the modern American does not look at democracy before he defines it; he defines it first and then is confused by what he sees. We become cynical about democracy because the public does not act the way the simplistic definition of democracy says it should act, or we try to whip the public into doing things it does not want to do, is unable to do, and has too much sense to do. The crisis here is not a crisis in democracy but a crisis in theory.”

That reflects the reality that people don’t or, because of their social and cognitive biology, can't pay enough attention to politics for the folk theory to work as people believe it should work. Humans are biologically too limited to truly understand what’s going on even if they tried. The authors put it like this: “. . . . the typical citizen drops down to a lower level of mental performance as soon as he enters the political field. He argues and analyzes in a way which he would readily recognize as infantile within the sphere of his real interests. . . . cherished ideas and judgments we bring to politics are stereotypes and simplifications with little room for adjustment as the facts change. . . . . the real environment is altogether too big, too complex, and too fleeting for direct acquaintance. We are not equipped to deal with so much subtlety, so much variety, so many permutations and combinations. Although we have to act in that environment, we have to reconstruct it on a simpler model before we can manage it.”

That describes reality based on what sentient humans can reasonably do. It's not a criticism of the human condition. Democracy and all or nearly all issues in politics are far too complex for voters to rationally deal with based on facts and unbiased reason. Instead, we have to simplify reality and apply heavily biased reason (common sense) to what we think we see. For the most part, what we believe we see is more illusion than objective reality.

The authors acknowledge the problem: “The result may not be very comfortable or comforting. Nonetheless, we believe that a democratic theory worthy of serious social influence must engage with the findings of modern social science.”

Although Democracy For Realists dissects popular democratic theory and analyzes science and historical data from the last hundred years or so, the exercise is about analyzing the role of human social and cognitive biology in democracy. Our false beliefs about democracy are shaped by human biology, not political theory. The authors' research finds that the most important driver of voter belief and behavior is personal social or group identity, not ideology or theory. For most voters, race, tribe and clan are more important than anything else.

That manifests as irrational voter thinking and behavior. For example, the “will of the people” that’s central to the folk theory is mostly a myth. People are divided on most everything and they usually don’t know what they really want. Average voters usually do not have enough knowledge to rationally make such determinations.

For example, voter opinions can be very sensitive to variation in how questions are worded. This reflects a powerful unconscious bias called the framing effect. In one 1980s survey, about 64% said there was too little federal spending on “assistance to the poor” but only about 23% said there was too little spending on “welfare.” The 1980s was the decade when vilification of “welfare” was common from the political right. The word welfare had been co-opted and reframed as a bad thing.

Similarly, before the 1991 Gulf War, about 63% said they were willing to “use military force”, but less than 50% were willing to “engage in combat”, while less than 30% were willing to “go to war.” The subjective nature of political concepts is obvious, i.e., assistance vs. welfare and military force vs. combat vs. war. What was the will of the people? One can argue that serving the will of the people under the folk theory of democracy is more chasing a phantom than doing the obvious.

Other aspects of voter behavior also make serving the people's will difficult at best. For example, voters are usually irrational about rewarding and punishing politicians for their performance in office. Incumbents are routinely punished at the polls for floods, drought, offshore shark attacks on swimmers, a recent local university football team's loss and, more importantly, for things going badly in the last few months of the politician's current term in office. Where's the logic in any of that?

Why should an incumbent worry about the people's will, when the people don't reward or punish on that basis? Incentives matter.

Achen and Bartels show that there are sound biological reasons for why elections don't produce responsive governments.

Questions: Is the vision of democracy that Achen and Bartels portray reasonably accurate, nonsense or something else? If their vision is reasonably accurate, what, if anything, can or should average voters do? Or, is what we have about the best that can be expected from the subjective (personal) nature of human social and cognitive biology?
Is trying to understand and serve the will of the people the highest calling of democratic governments, or, would something else such as serving the "public interest"** constitute a better focus?

** Defined here: http://dispol.blogspot.com/2015/12/serving-public-interest.html

B&B orig: 8/21/16

Thursday, March 21, 2019

Are Some Platforms Wising Up to Lies and Propaganda?

Thursday, March 21, 2019

Last month, Pinterest initiated a policy of cracking down on anti-vaccine content. The NYT reported:
Pinterest, a digital platform popular with parents, took an unusual step to crack down on the proliferation of anti-vaccination propaganda: It purposefully hobbled its search box.

Type “vaccine” into its search bar and nothing pops up.

“Vaccination” or “anti-vax”? Also nothing. Pinterest, which allows people to save pictures on virtual pinboards, is often used to find recipes for picky toddlers, baby shower décor or fashion trends, but it has also become a platform for anti-vaccination activists who spread misinformation on social media.

But only Pinterest, as first reported by The Wall Street Journal, has chosen to banish results associated with certain vaccine-related searches, regardless of whether the results might have been reputable.
In another reaction to propaganda about vaccines, Amazon announced that it will remove some books that contain vaccine misinformation, while Facebook and YouTube are similarly moving to shut false information down on their platforms. The Washington Post writes:
YouTube said it was banning anti-vaccination channels from running online advertisements.

Facebook announced it was hiding certain content and turning away ads that contain misinformation about vaccines, and Pinterest said it was blocking “polluted” search terms, memes and pins from particular sites prompting anti-vaccine propaganda, according to news reports.

Amazon has now joined other companies navigating the line between doing business and censoring it, in an age when, experts say, misleading claims about health and science have a real impact on public health.

NBC News recently reported that Amazon was pulling books touting false information about autism “cures” and vaccines. The e-commerce giant confirmed Monday to The Washington Post that several books are no longer available, but it would not release more specific information.
Culture war explodes: People who believe false information and false science, including denial of the science of anthropogenic climate change, have been adamant that their free speech rights include the right to spread their views everywhere on an equal footing with real truth and established science. Proponents of false truth and false science vehemently argue that they speak real truth and science to liberals, socialists, communists, corrupt corporations and other liars, deceivers and manipulators. Facebook, Amazon, Pinterest and other social media are privately owned, and therefore they can choose what content they allow and disallow on their platforms. The point is this: Every person and company can choose what to believe is truth and valid science and what isn't. If a company chooses to block what it believes are lies and false science, that is its choice.

  Dark free speech (DFS) forced this war: The rise of dark free speech[1] forced this situation. American conservative and populist politics is heavily infused with DFS. Independent fact checkers constantly reinforce this fact.

 Whether these moves will significantly blunt the rise of DFS is unknowable. Maybe it is already too late. Regardless, these tentative steps are extremely welcome measures by the private sector in defense of liberal democracy, freedom and common decency. These moves are faint early signals that maybe significant portions of the private sector[2] in America are still on the side of truth, democracy, personal freedom and science.

 An obvious question is this: Should private entities suppress DFS even though it is legal speech? Government cannot suppress DFS in public speech fora because that would violate 1st Amendment free speech rights.

  Footnotes:
 1. Dark free speech = lies, deceit, misinformation, unwarranted opacity, fact and truth hiding, and unwarranted emotional manipulation, especially including fomenting unwarranted fear, rage, hate, intolerance, distrust, bigotry and racism, etc.

 2. Obviously not including the carbon energy sectors, which continue to deny climate science to protect their profit margins and political power.

Climate Change Warnings: Not Urgent Enough?

Thursday, March 21, 2019


 Over the last couple of weeks, there has been intense blowback here and elsewhere from people who deny AGW (anthropogenic global warming) is real, after scientists reported that the level of confidence that it is real is now very high. The data now support a so-called 5-sigma level of confidence that AGW is real.
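For context, "5 sigma" refers to the tail probability of a normal distribution: a 5-sigma result has roughly a 1-in-3.5-million chance of being a statistical fluke. That figure can be checked with a couple of lines of Python (this sketches only the statistical convention, not the climate data itself):

```python
import math

# One-tailed probability that a standard normal variable exceeds 5 sigma:
# P(Z > 5) = erfc(5 / sqrt(2)) / 2
sigma = 5.0
p_fluke = math.erfc(sigma / math.sqrt(2)) / 2

print(p_fluke)      # ~2.9e-07
print(1 / p_fluke)  # about 3.5 million, i.e., ~1 chance in 3.5 million
```

By comparison, physics famously uses this same 5-sigma threshold for claiming a discovery, as with the Higgs boson.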

 AGW skeptics dismiss the data with arguments that amount to "blah, blah, blah" and claims that the scientists are liars who faked their data. One AGW skeptic attack asserted an unpublished, non-peer-reviewed crackpot hypothesis by a scientist with zero peer-reviewed papers in climate science, arguing that climate scientists are clueless about basic aspects of science. I finally got frustrated and banned the purveyor of the crackpot hypothesis after being accused of dishonesty, bias and whatnot. That raises a question:

  Question: When is there enough evidence in support of something like AGW, if ever, that even trying to discuss it with people who simply reject accepted evidence and expert opinion is more socially harmful than not? I refuse to allow this channel to be used as a platform for dark free speech such as lies and quack science, or anything else that strikes me as socially more harmful than helpful. Is that unreasonably arrogant or misguided?

  Complex adaptive systems: Things could be much worse: Also attacked and rejected as false was my assertion that there is about a 98% consensus among climate science experts that AGW is real. Long story short, that led me to a think tank skeptic who attacked the expert consensus data as flawed and not believable, and to this article by the Fraser Institute, Putting the 'con' in consensus: Not only is there no 97 per cent consensus among climate scientists, many misunderstand core issues. The article was written by Ross McKitrick, an economics professor at the University of Guelph, Canada.

  The Fraser Institute received a high factual accuracy rating and a center-right bias rating from the Media Bias/Fact Check site. Given that, I read McKitrick's article, which was originally published in the Financial Post. The article includes this:
The Intergovernmental Panel on Climate Change asserts the conclusion that most (more than 50 per cent) of the post-1950 global warming is due to human activity, chiefly greenhouse gas emissions and land use change. But it does not survey its own contributors, let alone anyone else, so we do not know how many experts agree with it. And the statement, even if true, does not imply that we face a crisis requiring massive restructuring of the worldwide economy. In fact, it is consistent with the view that the benefits of fossil fuel use greatly outweigh the climate-related costs.

One commonly cited survey asked if carbon dioxide is a greenhouse gas and human activities contribute to climate change. But these are trivial statements that even many IPCC skeptics agree with. And again, both statements are consistent with the view that climate change is harmless. So there are no policy implications of such surveys, regardless of the level of agreement.

Here is what the IPCC said in its 2003 report: “In climate research and modelling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”
It makes no sense to argue both that (1) there are no policy implications in most experts agreeing that CO2 is a greenhouse gas and that human activities contribute to climate change, and (2) long-term prediction of future climate states is not possible. If long-term prediction really is impossible, which is arguably inherent in a complex adaptive system like climate, then it is possible the climate situation could be much worse than what most experts now believe.

 I wrote to McKitrick and asked if it was possible that the climate situation could be worse than is now believed. After an initial evasion, his answer was that it could be. There is simply no way to know. The climate situation could be much better, much worse or about what most experts now believe. This is the first time I recall any AGW skeptic acknowledging that the climate situation could be worse than it is now believed to be. Here is the email string:
Me: Dear Dr. McKitrick, Your article, Putting the con in consensus, made a couple of statements that, taken together, are unclear in their logic. The article states: "One commonly cited survey asked if carbon dioxide is a greenhouse gas and human activities contribute to climate change. But these are trivial statements that even many IPCC skeptics agree with. And again, both statements are consistent with the view that climate change is harmless. So there are no policy implications of such surveys, regardless of the level of agreement." Since (1) even IPCC skeptics agree that CO2 is a greenhouse gas and human activities contribute to climate change, and (2) both statements are consistent with the view that climate change is harmless, why isn't it also possible that the statements are consistent with the view that climate change is much worse than whatever the expert consensus is? Why is it only possible that the situation could be neutral, beneficial or trivially negative, but not significantly or even catastrophically underestimated? I cannot see the logic on this point. Given the apparent ambiguity, it is arguable there are enormous policy implications of the surveys. What am I missing here? What is the flaw in the logic of arguing the situation could be modestly or even much worse than expert consensus currently holds? Thank you for your time and consideration.

McKitrick: The point is that you can't say 97% think AGW is dangerous, as Obama and others assert. When 97% agreement is found, leaving aside the sampling problems, it is only on relatively trivial statements that are consistent with a wide range of views about the level of harm. I don't argue that 97% think AGW is not a problem, nor can we argue based on the surveys that 97% think the problem is worse than the IPCC states. Either statement goes well beyond what the surveys show, either because the questions weren't asked or if they were asked, the split was nothing like 97-3.

Me: Thanks for getting back. I appreciate it. Just so I understand you, it is possible that things could be very serious or at least significantly worse than is now often believed to be the case. That is consistent with a complex non-linear system being unpredictable.

McKitrick: Yes, that's in the range of what's possible.

Me: Thank you.
 My prior AGW post argued we are playing Russian roulette with the climate, civilization and maybe even the human species. If the unpredictability of climate as a complex adaptive system is correct, and there's no obvious reason to think otherwise, McKitrick is incorrect to claim that the survey data has no policy implications. We could be in a far worse climate situation than what most experts now believe.

 Based on all the science, including the unpredictability problem, it is reasonable to believe that AGW skepticism is not defensible and is based on factors such as political ideology, personal bias, tribal identity and/or economic self-interest. One can also argue it is immoral. Is that logic, and the conclusion of immorality, reasonable?

B&B orig: 3/6/19

Religious Logic: Trump is Cyrus

Thursday, March 21, 2019

 A 6-minute segment, Bully Idol, by Bill Maher explains the logic behind the belief of many Evangelical Christians that President Trump is a modern-day Cyrus who was put in office by God. Maher's recitation of the facts and logic illuminates the basis for the gulf in perceptions of reality that is tearing America apart. Despite the comedy, the underlying facts and logic Maher describes are basically sound.

 https://youtu.be/rQBIBjbpzoQ

 B&B orig: 3/9/19

Free Will: Do We Have It Or Not?

Thursday, March 21, 2019

 The TED Radio Hour program that NPR aired yesterday, Hardwired, examines the matter of free will and factors that affect both behavior and health. The broadcast was in four 10-13 minute segments, which are here: https://www.npr.org/programs/ted-radio-hour/?showDate=2019-03-08

 In the first segment, neuroscientist Robert Sapolsky argues that there is no such thing as human free will. He argues that what appear to be acts of free will are simply manifestations of biology we do not understand. Everything is predetermined, and we simply live our lives according to factors and forces we cannot control and may never be able to fully understand. Sapolsky pointed to a famous experiment on judges setting punishments for convicts. That experiment, which I think has been questioned at least once, showed that punishments were most strongly correlated with how hungry the judges were, which correlated with lower blood sugar levels.

 In the 2nd segment, geneticist Moshe Szyf points to our genes as hardware that is mutable over time. He cites a situation where pregnant women were under unusual stress for an extended period. Their personal experiences chemically changed the DNA of their developing fetuses ('epigenetic' changes), a capacity of DNA that amounts to an experiential identity. Over the next 50 years, the babies subjected to stress developed more autoimmune diseases, metabolic diseases and autism than babies not subjected to the same source of stress. As the stress level increased, so did the level of later disease.

 Referring to this and other research, Szyf argues that DNA is dynamic due to epigenetic changes from life experiences over time. He sees at least some human free will arising from the interactions between individuals and external influences such as family, language, culture and so forth. In his view, epigenetic DNA phenomena are a source of some free will. He points to lower levels of stress in modern life compared to life thousands of years ago as a major factor.

 In the 3rd segment, pediatrician Nadine Burke discusses how stress in children manifests as various problems including asthma, ADHD, skin rashes, autoimmune diseases, and so forth. She found a high correlation between traumatic stress (domestic violence, drug abuse, divorce, parental mental illness, etc.) and child health. Stress exerts influence after birth, including on susceptibility to disease and risky behavior. That is consistent with life experiences exerting influence on behavior and health.

 In the 4th segment, psychologist Brian Little argues that we are born with traits that constrain our free will. He sees behavior and free will arising from our genes (biogenic authenticity), social forces that constrain behavior (sociogenic authenticity), and what we make of ourselves over our lifetimes (idiogenic authenticity). The latter influence can be at odds with one or both of the former, and the confluence of the three makes us unique, which he implies is a source of free will.

 On balance, the information presented here makes it sound like humans have, at most, little free will and what there is, is constrained. That is not a comforting conclusion. But is it correct? Is it too early to draw that conclusion, or is the science settled enough? If it is correct, what are the implications for politics?

 B&B orig: 3/10/19

The Biology Of Nationalism

Thursday, March 21, 2019


In an article in Foreign Policy magazine, This Is Your Brain on Nationalism: The Biology of Us and Them, neuroscientist Robert Sapolsky describes the cognitive biology of nationalism. A three minute interview by Fareed Zakaria with Sapolsky about this article and nationalism is here: https://www.facebook.com/fareedzakaria/videos/what-neuroscience-has-to-do-with-nationalism/1172179109608632/

 Humans have a strong impulse to sort people into us and them groups. Sorting happens unconsciously. It is fast, taking about one-tenth of a second, and occurs before we are aware of any assessment. A portion of the brain that regulates fear and aggression reacts quickly, and a few seconds later the region of the brain that is crucial for impulse control and emotional regulation (the prefrontal cortex) activates and normally suppresses the initial negative impulse. The unconscious brain reaction to images of faces of people of another race is different from the reaction to images of same-race faces.

 Sapolsky argues this is driven by evolution, which shaped how our brains perceive and think about sensory inputs from the world. He asserts that nationalism is a critically important phenomenon:
To understand the dynamics of human group identity, including the resurgence of nationalism—that potentially most destructive form of in-group bias—requires grasping the biological and cognitive underpinnings that shape them.

Such an analysis offers little grounds for optimism. Our brains distinguish between in-group members and outsiders in a fraction of a second, and they encourage us to be kind to the former but hostile to the latter. These biases are automatic and unconscious and emerge at astonishingly young ages. . . . . Humans can rein in their instincts and build societies that divert group competition to arenas less destructive than warfare, yet the psychological bases for tribalism persist, even when people understand that their loyalty to their nation, skin color, god, or sports team is as random as the toss of a coin. At the level of the human mind, little prevents new teammates from once again becoming tomorrow’s enemies.
One aspect of our cognitive biology is that biases against out-groups are often learned, although some are completely innate or nearly so. Infants pick up on cues from parents and caregivers about who is in-group and who is out-group, and race is a key marker the brain picks up on. Sapolsky comments:
Put simply, neurobiology, endocrinology, and developmental psychology all paint a grim picture of our lives as social beings. When it comes to group belonging, humans don’t seem too far from the families of chimps killing each other in the forests of Uganda: people’s most fundamental allegiance is to the familiar. Anything or anyone else is likely to be met, at least initially, with a measure of skepticism, fear, or hostility. In practice, humans can second-guess and tame their aggressive tendencies toward the Other. Yet doing so is usually a secondary, corrective step.

For all this pessimism, there is a crucial difference between humans and those warring chimps. The human tendency toward in-group bias runs deep, but it is relatively value-neutral. Although human biology makes the rapid, implicit formation of us-them dichotomies virtually inevitable, who counts as an outsider is not fixed. In fact, it can change in an instant.
Nationalism: The sorting trait applies to nationalism and globalism:
At its best, nationalism and patriotism can prompt people to pay their taxes and care for their nation’s have-nots, including unrelated people they have never met and will never meet. But because this solidarity has historically been built on strong cultural markers of pseudo-kinship, it is easily destabilized, particularly by the forces of globalization, which can make people who were once the archetypes of their culture feel irrelevant and bring them into contact with very different sorts of neighbors than their grand-parents had. Confronted with such a disruption, tax-paying civic nationalism can quickly devolve into something much darker: a dehumanizing hatred that turns Jews into “vermin,” Tutsis into “cockroaches,” or Muslims into “terrorists.” Today, this toxic brand of nationalism is making a comeback across the globe, spurred on by political leaders eager to exploit it for electoral advantage.

In the face of this resurgence, the temptation is strong to appeal to people’s sense of reason. Surely, if people were to understand how arbitrary nationalism is, the concept would appear ludicrous. Nationalism is a product of human cognition, so cognition should be able to dismantle it, too.

Yet this is wishful thinking. In reality, knowing that our various social bonds are essentially random does little to weaken them. . . . . The pull of us-versus-them thinking is strong even when the arbitrariness of social boundaries is utterly transparent, to say nothing of when it is woven into a complex narrative about loyalty to the fatherland. You can’t reason people out of a stance they weren’t reasoned into in the first place.
 Sapolsky argues that we could try to harness nationalist dynamics rather than fight or condemn them. That would mean leaders avoiding jingoism and xenophobia, and appealing to people’s innate in-group tendencies to socialize or incentivize cooperation and accountability. In this political scenario, nationalist pride is rooted in a country’s ability to do social good, such as caring for the elderly, teaching children empathy, and ensuring increased social mobility.

 Is America capable of trying to harness nationalism in some way akin to Sapolsky's suggestion?

B&B orig: 3/11/19

The Conservative Agenda Comes Out of the Dark

Thursday, March 21, 2019



The Washington Post has looked at what President Trump proposed in his 2020 budget. This is it:



 Domestic spending collapses and defense spending explodes. The budget proposes cutting (1) Medicare by $84.5 billion/year over 10 years, (2) Medicaid by $24.1 billion/year over 10 years, and (3) the Supplemental Nutrition Assistance Program (food stamps) by $22 billion/year over 10 years. It adds more than $33 billion to defense, which totals $718 billion for 2020. At that level, defense spending amounts to 57% of the proposed federal discretionary budget.
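As a quick sanity check on the scale of these numbers (a sketch using only the figures as stated in this post), the per-year cuts imply the following 10-year totals, and the 57% defense share implies the size of the total discretionary budget:

```python
# Figures as stated above, in billions of dollars.
cuts_per_year = {"Medicare": 84.5, "Medicaid": 24.1, "SNAP": 22.0}

# Ten-year totals implied by the per-year cuts.
ten_year_totals = {name: amt * 10 for name, amt in cuts_per_year.items()}
for name, total in ten_year_totals.items():
    print(f"{name}: ${total:,.0f} billion over 10 years")

defense_2020 = 718.0   # proposed 2020 defense spending, $ billions
defense_share = 0.57   # defense as a fraction of discretionary spending

# If defense is 57% of discretionary spending, total discretionary is:
implied_discretionary = defense_2020 / defense_share
print(f"Implied discretionary budget: ~${implied_discretionary:,.0f} billion")
```

That puts the Medicare cut alone at roughly $845 billion over the decade, against an implied discretionary budget of about $1.26 trillion per year.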

 The conservative vision of governance could not be clearer. Trump's budget is not going to pass congress and that is not what it is intended to do. Instead, conservatism has finally grown a pair. It now has the balls to be brutally honest about how that ideology sees the federal government and spending priorities.

 That Trump promised not to touch Medicare and Medicaid in the 2016 election is not relevant or important to Trump, conservatives or pro-Trump populists. Conservatism apparently feels that now is the time to make an open run at the vision of America it has been working toward for at least the last 30 years. People will get a chance to approve or disapprove by their votes in the 2020 elections.

B&B orig: 3/12/19

The Rule of Law: Not Nearly as Objective as People Think

Thursday, March 21, 2019


A New York Times article, Old Rape Kits Finally Got Tested. 64 Attackers Were Convicted., reports that a push to test old rape kits is leading to convictions of attackers and rapists.
Ms. Sudbeck’s [rape] case is one of thousands that have gotten a second look from investigators since the Manhattan district attorney, Cyrus R. Vance Jr., committed $38 million in forfeiture money to help other jurisdictions test rape kits. Since the grants began being distributed in 2015, the evidence kits have led to 165 prosecutions in cases that were all but forgotten. So far, 64 of those have resulted in convictions.

Rarely have public dollars from a local prosecutor’s office been so directly tied to results with such national implications. The initiative has paid to get about 55,000 rape kits tested in 32 law enforcement agencies in 20 states, among them the police departments in Las Vegas, Philadelphia, Miami, Memphis, Austin, Tex., and Kansas City, Mo. Nearly half produced DNA matches strong enough to be added to the F.B.I.’s nationwide database of genetic profiles. About 9,200 of those matched with DNA profiles in the system, providing new leads and potential evidence.
Past failure to vindicate the rule of law by not testing rape kits is just one kind of subjectivity that suffuses the rule of law. It is a moral outrage and fairly common. Recently House Speaker Nancy Pelosi stated that it is not worth impeaching President Trump unless there is overwhelming evidence that could convince even congressional republicans. It is very likely that whatever evidence is available will not lead congressional republicans to vote to impeach Trump. Another congressional democrat commented that impeaching Trump can't be done unless there is a major public opinion shift to support impeachment. It is very likely that whatever evidence is available will not lead Trump supporters to want to see him impeached.

All of that makes impeachment a subjective exercise in partisan politics, not something based on the rule of law, evidence or logic. Convicted felon Paul Manafort received a 47-month sentence for 8 major felonies. Federal sentencing guidelines posited a 19-24 year sentence for what Manafort did. The federal judge in Manafort’s case was openly biased against and hostile to special counsel Mueller's prosecution of Manafort. In imposing the light sentence, the judge said that Manafort had lived an “otherwise blameless life.” Given that Manafort is a long-term criminal, it appears the judge’s light sentence reflected anger at Mueller, not the gravity of Manafort’s crimes. In this instance, the rule of law was almost purely subjective. It was heavily rigged in favor of white, white-collar criminals.

  Some political philosophy on the Rule of Law concept: In a paper, Is The Rule Of Law An Essentially Contested Concept (In Florida)?, a researcher analyzed how the courts treated the 2000 election. The paper comments:
For legal and political philosophers, one item of particular interest was the constant reference in public appeals of almost all the participants to the venerable ideal we call “the rule of law.” The references were legion, and often at odds with one another. This was true of every phase of the debacle. “One thing, however, is certain. Although we may never know with complete certainty the identity of the winner of this year's Presidential election, the identity of the loser is perfectly clear. It is the Nation's confidence in the rule of law.” (dissent in the Supreme Court’s 2000 decision in Bush v. Gore) Vice President Gore took the high line that public criticism of the courts was precluded by the Rule of Law. Yet plainly, many on his side thought that in the circumstances they could do nothing better for the Rule of Law than to condemn the majority's decision as shameful.
The paper’s author, Jeremy Waldron, points out that even before the Bush v. Gore decision, theorists were inching toward the conclusion that the rule of law concept was meaningless. Quoting one theorist, Judith Shklar:
It would not be very difficult to show that the phrase “the Rule of Law” has become meaningless thanks to ideological abuse and general over-use. It may well have become just another one of those self-congratulatory rhetorical devices that grace the public utterances of Anglo-American politicians. No intellectual effort therefore need be wasted on this bit of ruling-class chatter.
Waldron goes on to write that on Shklar's view, invoking the Rule of Law as an authority is “incapable of driving one's argument very much further forward than the argument could have driven on its own. . . . . at the end of the day, many will have formed the impression that the utterance of those magic words meant precious little more than ‘Hooray for our side!’” Despite Shklar’s harsh assessment, Waldron points out that there might be real value in trying to rationalize the rule of law concept. The urgent, important problem that Waldron describes is how to make the law rule instead of having men rule using the law as an excuse to get what they want. Waldron's paper is complex, but it boils down to trying to find a solution to the problem of rule by men instead of by law. I think there are avenues to at least attempt that, but outcomes are not knowable without the necessary experimentation. That is for a different discussion focused on that issue.

 For this discussion it is sufficient to assert that the Rule of Law related to political matters is often, maybe usually, as or more subjective (ideological or in-group vs out-group) than objective. That is a significant source of political and social polarization in American society, e.g., the 2015 Obergefell Supreme Court decision that legalized same-sex marriage. In turn, that polarization can arguably constitute an existential threat to liberal democracy and possibly modern civilization, and maybe even the fate of the human species. Fixing the Rule of Law to at least some non-trivial extent seems to be a critically important task on the road to trying to rationalize politics relative to what it is now. That assumes partial rationalization is possible. Political rationalization really has its work cut out for it.

B&B orig: 3/13/19

Wednesday, March 20, 2019

The Subtle Power of Propaganda

Wednesday, March 20, 2019

 In his short but mind-blowing 1923 masterpiece on propaganda, Crystallizing Public Opinion, Edward Bernays describes his profession as a master propagandist. In his time, he was unsurpassed as a manipulator of mass public opinion. Bernays is considered by many historians to be one of the 100 most influential Americans of the 20th century. He, along with a few other masters of 'public relations' (Bernays invented the term), transformed America from a needs-based society to a desires-based society. Bernays and a couple of other propagandists working for the US government coaxed a reluctant America into the mindless slaughter of World War I, successfully using the powerful propaganda line of making the world safe for democracy. German Nazis learned their propaganda techniques from Bernays. After he learned that the Nazis were using his techniques to control public opinion and persecute people, he wrote this: “They were using my books as the basis for a destructive campaign against the Jews of Germany. This shocked me, but I knew any human activity can be used for social purposes or misused for antisocial ones.”

 Bernays invented the term 'public relations' as a substitute for 'propaganda' after the Nazis made that word synonymous with lies, deceit, trickery, baseless emotional manipulation and authoritarianism. He went to his grave believing that using propaganda or public relations techniques to manipulate public opinion was for social good, because the real goal of 'proper' propaganda is always social. History has proven that he was wrong about this, and propaganda or public relations is still seen by many or most people as essentially antisocial, not essentially social. What Bernays taught the world was how to manipulate mass public opinion for any purpose, not merely for social good.

  Cigarettes and light bulbs: To sell more cigarettes, Bernays created an advertising campaign that made women who smoked in public seem to be empowered and independent creatures of wisdom and grace. He coined the phrase 'torches of freedom' for cigarettes and successfully made it socially acceptable for women to smoke in public. Obviously, conning both men and women into accepting women smoking in public did little or nothing to empower women, but that didn't matter. Cigarette sales skyrocketed, which was the only point of the ad campaign. Bernays also turned public opinion to acceptance of private ownership of electrical utilities after powerful individuals saw the vast amounts of money they could make by selling electricity themselves instead of having governments control the sales. Bernays' propaganda was a critical factor in establishing America's current capitalist vision of very powerful, very self-interested electrical utility companies. Opinions will differ as to how that has played out for the public interest in the last century or so.

  Politics, truth and propaganda: Despite his dubious claim that he used propaganda only for the public good, see cigarettes above, Bernays was no fool about exactly what terrors and mass slaughter it could unleash. Bernays, a nephew of Sigmund Freud, was acutely aware of the science of his time. He understood people's minds as well as anyone, and far better than most. His comments in Crystallizing Public Opinion make that clear. He wrote:
“It is manifestly impossible for either side in [a political] dispute to obtain a totally unbiased point of view as to the other side. . . . . The only difference between ‘propaganda’ and ‘education’, really, is in the point of view. The advocacy of what we believe in is education. The advocacy of what we don’t believe in is propaganda. . . . . Political, economic and moral judgments, as we have seen, are more often expressions of crowd psychology and herd reaction than the result of the calm exercise of judgment. . . . . Intolerance is almost inevitably accompanied by a natural and true inability to comprehend or make allowance for opposite points of view. . . . We find here with significant uniformity what one psychologist has called ‘logic-proof compartments.’ The logic-proof compartment has always been with us.”
His characterization of politics and truth strikes this observer as stunningly accurate and deeply disturbing. Bernays clearly described in 1923 what is now the irrational and reality- and reason-untethered thing we call American politics in 2019. His comment on the relativity of truth depending on point of view is spot on.

And, when a modern American politician or business mogul gets in hot water over some scandal, the usual first impulse is to call in public relations folks to formulate the spin and lies they will use against the public to cool the water down, so they can stay in power and effectively argue they never did what they did. That spin-and-lie trick is playing out with a vengeance right now in Washington politics. Propaganda lives. Propaganda works. President Trump is living proof.

 A 4-hour documentary on Bernays' life and influence is here: https://youtu.be/eJ3RzGoQC4s

Saturday, March 16, 2019

Does Absolute Free Speech Mean Fairness, Objectivity and Impartiality?

Saturday, March 16, 2019

  “But it cannot be the duty, because it is not the right, of the state to protect the public against false doctrine. The very purpose of the First Amendment is to foreclose public authority from assuming a guardianship of the public mind through regulating the press, speech, and religion. In this field, every person must be his own watchman for truth, because the forefathers did not trust any government to separate the true from the false for us.” U.S. Supreme Court in Thomas v. Collins, 323 U.S. 516, 545 (1945)

 
Moderator message at the former Political Rhetoric Busters Disqus channel and its reincarnation as a WordPress blog

 Some people advocate absolute free speech or something close to it. Some may even want to remove limits on speech that incites imminent violence, is defamatory, is child porn, and/or is false advertising. Is it the case that allowing all speech, except what can now be punished or proscribed, is tantamount to being fair, objective and impartial? If so, that means that dark free speech[1] is fair, objective and impartial.

 On the other hand, facts, truths and logic are often bitterly contested. For example, people who deny that global warming is real, or that it is caused mostly by human activities, dispute the science, the data and its interpretation. They usually also attack the scientists as liars, incompetent, ignorant of basic science, and/or enemies of the state. The two sides rely on different, incompatible sets of facts and logic. Minds do not change.

 The Supreme Court made it clear that because judges have no idea of how to separate honest from dishonest speech, the Constitution protects dark free speech as much as honest free speech.

 History and the cognitive and social sciences make it clear that dark speech is more persuasive than honest speech. Evolution hard-wired human brains to respond more strongly to threats and the negative emotions threat elicits. In practice, this means that dark speech is easily made stronger than honest speech, e.g., by lying, exaggerating and so forth. For example, President Trump’s claim that there is an emergency along the Mexico border is considered by most people to be a false alarm.[2] Nonetheless, that alarm is persuasive to many people, especially when people crossing the border are falsely portrayed as murdering, raping, pedophile narco-terrorists.

  Ban the speaker: The political right often criticizes the left as intolerant of opposing speech. They point to instances where speakers on college campuses are disinvited to speak. The left responds that the speakers are socially damaging in various ways, e.g., they are liars, or they foment unwarranted fear, hate, intolerance, etc.

 In view of his history of fomenting hate and racism, Australia canceled a visa for Milo Yiannopoulos to visit there. The Guardian reports: “Immigration minister David Coleman said on Saturday that comments about Islam made by Yiannopoulos in the wake of the Christchurch [New Zealand] massacre were ‘appalling and foment hatred and division’ and he would not be allowed in the country.”

 The shooter in the Christchurch, New Zealand mass murder was explicit in his ‘manifesto’ that he was murdering to divide people about guns, and he used social media to spread his message of racist rage and hate while he slaughtered innocents and streamed the attack online in real time.

 Given history and human biology, is it fair, objective and impartial to let people use dark free speech against the public? Or, because the courts have held there is no way to tell truth from lies, is it that (1) allowing dark speech free rein is fair, objective and/or impartial, and (2) that’s the best that inherently flawed humans can do in view of their cognitive limitations?

Footnotes:
1. Dark free speech: lies, deceit, unwarranted emotional manipulation such as fomenting unwarranted fear, hate, anger, intolerance, bigotry or racism, unwarranted opacity to hide relevant facts or truths.

 2. “Numerous polls suggest Trump’s decision was popular among his Republican base. But his decision to use executive authority to fund a wall along the southern border is opposed by a clear majority of the public. That is reflected in six polls taken from early January to early March. By roughly a 2-to-1 margin, Americans oppose Trump’s decision to use emergency powers to build a border wall. That’s a wider margin than the Senate resolution to overturn Trump’s declaration of a national emergency, which passed 59 to 41.”

A pragmatic ideology

Original Biopolitics and Bionews post: September 3, 2016

 Current cognitive and social science of politics strongly suggests that humans generally have a very limited capacity to see unbiased reality or facts and to apply unbiased common sense to the reality they think they see. The situation is complicated and multi-faceted. Evolution resulted in a human mental capacity that was at least sufficient for early modern humans to survive. Existing human civilization has been built with about the same mental firepower our distant ancestors had. What evolution conferred was a mind that operates using (i) a high bandwidth unconscious mind, or set of mental processes, that can process about 11 million bits of information per second, and (ii) a very low bandwidth conscious mind that can process at most about 45-50 bits per second.



 Although our conscious mind believes it is aware of a great deal and is in control of decision-making and behavior, that perception of reality is more illusion than real. Our unconscious thinking exerts much more control over decision-making and behavior than we are aware of. Our conscious mind plays into the illusion. Unconscious innate biases, personal morals, social identity and political ideology all inject distortions into our perceptions of reality or facts and our application of common sense. Conscious reason acts primarily to rationalize or defend unconscious beliefs and rationales, even when they are wrong. False unconscious beliefs include a widespread fundamental misunderstanding of democracy. Our political thinking and behaviors are usually based on major disconnects with reality. Our unconscious mind is usually moralistic, self-righteous and intolerant. That creates a human social situation where “our righteous minds guarantee that our cooperative groups will always be cursed by moralistic strife.” Based on that description of the human condition, it's reasonable to believe that mostly irrational human politics cannot be made demonstrably more rational. That may or may not be true.

Some evidence suggests that at least some people can operate with significantly less bias in perceiving reality and in conscious reasoning. They are measurably more rational than average. The discovery of superforecasters among average people, and of their mental traits, suggests that politics might be partially rationalizable for at least some people, if not for societies or nations as a whole. Given research observations on how superforecasters improve over time, i.e., they predict, get feedback, revise, and then repeat, there is reason to believe that evidence-based politics could be a route to better policy. Although the effort is in its infancy, there is some real-world evidence that cognitive science-based political policy can be simple but very successful. The trick is figuring out how to deal with personal morals, self-interest and other unconscious sources of distortion that impede politics based on less biased reality and common sense.



 If it’s possible to rationalize mainstream politics at all, accepting the reality of human cognition and behavior is necessary. There’s no point in denying reality and trying to propose false reality-based solutions. Given that, one needs to accept that (i) politics is fundamentally a matter of personal morals, ideology, and self- or group identity, and (ii) current political, economic, religious and/or philosophical moral sets or ideologies, e.g., liberalism, conservatism, capitalism, socialism, libertarianism, anarchy, etc., are fundamental to what makes people tick in terms of politics. One can argue that since existing ideological or moral frameworks have failed to rationalize politics beyond what it is now, and probably always has been, a new moral or ideological framework is necessary (although maybe not sufficient).

Since morals are personal and vary significantly among people, there’s no reason to believe that a set of morals or ideological principles cannot be conceived that could temper or significantly substitute for existing morals, such as the care-harm moral foundation that tends to drive liberal perceptions and beliefs, or the loyalty-betrayal and other foundations that drive conservatives.

  How can one rationalize politics?: Why swim upstream if there’s a potential solution to be had by swimming downstream with the cognitive current? Morals, or variants thereof, that essentially everyone already claims to adhere to (even though science says that’s just not the case) seem like a good place to start. Most people (> 97% ?) of all political ideologies claim that they (i) work with unbiased facts and (ii) apply unbiased common sense. And most people believe that their politics and beliefs best serve the public interest (general welfare or common good). Few or no people say they rely on personally biased facts and common sense, or that that’s the best way to do politics, although social science argues that that’s exactly how politics works for most people.



  Three pragmatic morals: If that’s the case, then a set of three already widely accepted morals or political principles might operate to rationalize politics to some extent without being rejected out of hand. They are (i) fidelity to less biased facts and (ii) fidelity to less biased common sense, both of which (iii) are applied in service to the public interest.

  Service to the public interest: Service to the public interest means governance based on identifying a rational, optimum balance between serving public, individual and commercial interests based on an objective, fact- and logic-based analysis of competing policy choices, while (1) being reasonably transparent and responsive to public opinion, (2) protecting and growing the American economy, (3) fostering individual economic and personal growth opportunity, (4) defending personal freedoms and the American standard of living, (5) protecting national security and the environment, (6) increasing transparency, competition and efficiency in commerce when possible, and (7) fostering global peace, stability and prosperity whenever reasonably possible, all of which is constrained by (i) honest, reality-based fiscal sustainability that limits the scope and size of government and regulation to no more than what is needed and (ii) genuine respect for the U.S. constitution and the rule of law with a particular concern for limiting unwarranted legal complexity and ambiguity to limit opportunities to subvert the constitution and the law.

  As explained here, that conception of the public interest is broad. It reflects the reality that politics is a competition for influence and money among competing interests and ideologies, all of whom essentially always claim they want what’s best for the public interest. A broad conception encompasses concepts that fully engage all competing interests, morals and ideologies, e.g., (i) national security defense (a conservative moral or concern), (ii) concern for fostering peace and environmental protection (liberal) and (iii) defense of personal freedom (libertarian). Although broad, that public service conception is meaningfully constrained by the first two pragmatic morals, less biased fact and less biased common sense. For regular “subjective” or non-pragmatic politics, neither of those are powerful constraints on most people’s perceptions of reality or facts or their conscious thinking about politics. That’s not intended as a criticism of people’s approach to or thinking about politics. It’s intended to be a non-judgmental statement of fact based on research evidence: For politics, “. . . . cherished ideas and judgments we bring to politics are stereotypes and simplifications with little room for adjustment as the facts change. . . . . the real environment is altogether too big, too complex, and too fleeting for direct acquaintance. We are not [intellectually] equipped to deal with so much subtlety, so much variety, so many permutations and combinations. Although we have to act in that environment, we have to reconstruct it on a simpler model before we can manage it.” https://uploads.disquscdn.com/images/72344a6b7c17faaffe1763b324dbcbda3aa2425aa88a63f6779a42e00a4bd011.png

 In essence, what a broad conception of service to the public interest does is treat the concept as bigger than special interests and bigger than everything shown on this map of morals-based politics. https://uploads.disquscdn.com/images/714643f01c9a0338188c3c7a72078ff702c1f6fee2fa198c04e1f82b5e4503cd.png In other words, the public interest is bigger than special interests and personal morals or ideologies.

  Criticisms: Many or most liberals, conservatives, libertarians and others will instantly jump all over this “political ideology” as nonsense. For example, how could such a broad conception of serving the public interest make one iota of difference in how political debate occurs now? That’s a good, reasonable question, the answer to which is already given in the discussion, i.e., fidelity to less biased fact and less biased common sense. The assumption is that in the long run, politics better grounded in reality and reason would make a difference for the better.

Many people who see a threat to their own beliefs and ideologies will reject that as nonsense. They already believe (know) that they employ unbiased fact and logic in politics, although the scientific evidence strongly argues that’s not true. Plenty of other criticisms can be raised. Some libertarians and/or conservatives might claim that this subverts personal freedoms and that the concept pays only lip service to defending them. In other words, this ideology seems at best meaningless or at worst a Trojan horse of some sort, e.g., a smoke screen for socialism, fascism and/or tyranny.

From a pragmatic POV, it’s easy to see, understand and anticipate that reaction from people trapped in their standard subjective political ideologies, e.g., liberals, conservatives, libertarians, socialists, etc. What this conception does is force everyone and every ideology to (i) defend their policy choices on the basis of a less distorted world view and less biased common sense, and (ii) pay more than self-deluded and/or cynical lip service to serving the public interest. Everyone has to win arguments on less spun merits. For standard ideologues, that makes this brand of “pragmatic politics” an absolute nonstarter. It’s dead on arrival.

That’s why politics based on these three political principles may be, or actually is, a new ideology. Who in their right mind would ever conceive of such a wacky thing? This won’t work for liberals, conservatives, libertarians, socialists or believers in any other existing ideology or set of morals I am aware of. To accept this set of political morals, one has to move away from existing mind sets and accept this proposal for what it is, i.e., advocacy of cold, harsh competition in a brutal marketplace of less spun ideas and arguments based on less spun facts and realities. Some thought has gone into this. Here are responses to a list of criticisms of this three morals-based political ideology.



  Questions: Does proposing a three morals-based pragmatic political ideology make any sense? Is it too utopian to be a reasonable means to partially rationalize politics? Could it ever appeal to more than just a few people? What has been overlooked in the morals or the articulation of the public interest? What is the fatal flaw(s) in the underlying reality and/or rationale? Does the existence of superforecasters provide a template for social change, or are those people intellectual freaks or flukes who cannot guide widespread social change?

 Is it pointless to even discuss such an approach to politics because people will never allow it, or because, as David Hume argued in the 18th century, people are incapable of subjugating standard personal moral foundations (their passions) to facts and logic? Would it matter if many people, say 4-5% of adult Americans, did adopt this pragmatic mind set, e.g., they formed a vocal group or tribe that young people could identify with and adopt instead of any existing standard ideology?

Book notes: Superforecasting - The Art And Science Of Prediction

Original Biopolitics and Bionews post: September 1, 2016

  Philip E. Tetlock - Superforecasting: The Art and Science of Prediction, Crown Books, 2015

Notes: System 1 refers to our powerful unconscious thinking and the biases and morals that shape it (Jonathan Haidt's "elephant")
System 2 refers to our weak conscious thinking, "reason" or "common sense", including the unconscious biases that are embedded in it (Haidt's "rider")
  Foxes: People having a relatively open minded mind set (described in Tetlock's first book, Expert Political Judgment: How Good Is It? How Can We Know?)
  Hedgehogs: People with a more closed mind set

  Book notes Chapter 1: An optimistic skeptic
p. 3: regarding expert opinions, there is usually no accurate measurement of how good they are, there are “just endless opinions - and opinions on opinions. And that is business as usual.”; the media routinely delivers, or corporations routinely pay for, opinions that may be accurate, worthless or in between and everyone makes decisions on that basis
 p. 5: talking head talent is skill in telling a compelling story, which is sufficient for success; their track record is irrelevant - most of them are about as good as random guessing; predictions are time-sensitive - 1-year predictions tend to beat guessing more than 5- or 10-year projections
 p. 8-10: there are limits on what is predictable → in nonlinear systems, e.g., weather patterns, a small initial condition change can lead to huge effects (chaos theory); we cannot see very far into the future (maybe 18-36 months?)
 p. 13-14: predictability and unpredictability coexist; a false dichotomy is saying the weather is unpredictable - it is usually relatively predictable 1-3 days out, but at days 4-7 accuracy usually declines to near-random; weather forecasters are slowly getting better because they are in an endless forecast-measure-revise loop ("perpetual beta" mode); prediction consumers, e.g., governments, businesses and regular people, don’t demand evidence of accuracy, so it isn’t available, and that means no revision, which means no improvement
 p. 15: Bill Gates’s observation: surprisingly often a clear goal isn’t specified so it is impossible to drive progress toward the goal; that is true in forecasting; some forecasts are meant to (1) entertain, (2) advance a political agenda, or (3) reassure the audience their beliefs are correct and the future will unfold as expected (this kind is popular with political partisans)
 p. 16: the lack of rigor in forecasting is a huge social opportunity; to seize it (i) set the goal of accuracy and (ii) measure success and failure
 p. 18: the Good Judgment Project found two things, (1) foresight is real and some people have it and (2) it isn’t strictly a talent from birth - (i) it boils down to how people think, gather information and update beliefs and (ii) it can be learned and improved
 p. 21: from a 1954 book - analysis of 20 studies showed that algorithms based on objective indicators were better predictors than well-informed experts; more than 200 later studies have confirmed that and the conclusion is simple - if you have a well-validated statistical algorithm, use it
 p. 22: machines may never be able to beat talented humans, so dismissing human judgment as just subjective goes too far; maybe the best that can be done will come from human-machine teams, e.g., Garry Kasparov and Deep Blue together against a machine or a human
 p. 23: quoting David Ferrucci, chief engineer of IBM’s Watson, who is optimistic: “‘I think it’s going to get stranger and stranger’ for people to listen to the advice of experts whose views are informed only by their subjective judgment.”; Tetlock: “. . . . we will need to blend computer-based forecasting and subjective judgment in the future. So it’s time to get serious about both.”

  Chapter 2: Illusions of knowledge
p. 25: regarding a medical diagnosis error: “We have all been too quick to make up our minds and too slow to change them. And if we don’t examine how we make these mistakes, we will keep making them. This stagnation can go on for years. Or a lifetime. It can even last centuries, as the long and wretched history of medicine illustrates.”

 p. 30: “It was the absence of doubt - and scientific rigor - that made medicine unscientific and caused it to stagnate for so long.”; it was an illusion of knowledge - if the patient died, he was too sick to be saved, but if he got better, the treatment worked - there was no controlled data to support those beliefs; for decades, physicians resisted the idea of randomized, controlled trials as proposed in 1921 because they (falsely) believed their subjective judgments revealed the truth

 p. 35: on Daniel Kahneman’s (Nobel laureate) fast System 1: “A defining feature of intuitive judgment is its insensitivity to the quality of the evidence on which the judgment is based. It has to be that way. System 1 can only do its job of delivering strong conclusions at lightning speed if it never pauses to wonder whether the evidence at hand is flawed or inadequate, or if there is better evidence elsewhere.” - context - instantly running away from a Paleolithic shadow that might be a lion; Kahneman calls these tacit assumptions or biases WYSIATI (what-you-see-is-all-there-is); System 1 judgments take less than 1 sec. - there’s no time to think about things; regarding coherence: “. . . . we are creative confabulators hardwired to invent stories that impose coherence on the world.”

 p. 38-39: confirmation bias: (i) seeking evidence to support the 1st plausible explanation, (ii) rarely seeking contradictory evidence and (iii) being a motivated skeptic in the face of contrary evidence and finding even weak reasons to denigrate it or reject it entirely, e.g., a doctor’s belief that a quack medical treatment works for all but the incurable

 p. 40: attribute substitution, availability heuristic or bait and switch: one question may be difficult or unanswerable without more info, so the unconscious System 1 substitutes another, easier, question and the easy question’s answer is treated as the hard question’s answer, even when it is wrong; CLIMATE CHANGE EXAMPLE: people who cannot figure out climate change on their own substitute what most climate scientists believe for their own belief - it can be wrong (Me: it can also be right -- how does the non-expert assess technology beyond one's capacity to evaluate it?)

 p. 41-42: “The instant we wake up and look past the tip of our nose, sights and sounds flow into the brain and System 1 is engaged. This system is subjective, unique to each of us.”; cognition is a matter of blending inputs from System 1 and 2 - in some people, System 1 has more dominance than in others; it is a false dichotomy to see it as System 1 or System 2 operating alone; pattern recognition: System 1 alone can make very good or bad snap judgments and the person may not know why - bad snap judgment or false positive = seeing the Virgin Mary in burnt toast (therefore, slowing down to double check intuitions can help)

 p. 44: tip of the nose perspective is why doctors did not doubt their own beliefs for thousands of years (ME: and that kept medical science mostly in the dark ages until after the end of WWII) https://uploads.disquscdn.com/images/1de3df9f1d005c58643da4b5045c1e668e4a824d04b824d3b6f83eb68d6119e2.jpg

  Chapter 3: Keeping Score
p. 48: it is not unusual that a forecast that may seem dead right or wrong really cannot be “conclusively judged right or wrong”; details of a forecast may be absent and the forecast can’t be scored, e.g., no time frames, geographic locations, reference points, definition of success or failure, definition of terms, a specified probability of events (e.g., 68% chance of X) or lack thereof or many comparison forecasts to assess the predictability of what is being forecasted;
 p. 53: “. . . . vague verbiage is more the rule than the exception.”
 p. 55: security experts were asked what the term “serious possibility” meant in a 1951 National Intelligence Estimate → one analyst said it meant 80 to 20 (4 times more likely than not), another said it meant 20 to 80 and others said it was in between those two extremes → ambiguous language is ~useless, maybe more harmful than helpful
 p. 50-52: national security experts had views split along liberal and conservative lines about the Soviet Union and future relations; they were all wrong and Gorbachev came to power and de-escalated nuclear and war tensions; after the fact, all the experts claimed they could see it coming all along; “But the train of history hit a curve, and as Karl Marx once quipped, the intellectuals fall off.”; the experts were smart and well-informed, but they were just misled by System 1’s subjectivity (tip of the nose perspective)
 p.58-59: the U.S. intelligence community resisted putting definitions and specified probabilities in their forecasts until finally, 10 years after the WMD fiasco with Saddam Hussein, the case for precision was so overwhelming that they changed; “But hopelessly vague language is still so common, particularly in the media, that we rarely notice how vacuous it is. It just slips by.”
p. 60-62: calibration: perfect calibration = X% chance of an event when past forecasts have always been “there is a X% chance” of the event, e.g., rainfall; calibration requires many forecasts for the assessment and is thus impractical for rare events, e.g., presidential elections; underconfidence = prediction is X% chance, but reality is a larger X+Y% chance; overconfidence = prediction is X% chance, but reality is a smaller X-Y% chance
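The calibration idea above can be sketched in a few lines. This is my illustration, not from the book; the function name and toy data are mine. It groups forecasts by their stated probability and checks how often those events actually happened:

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes):
    """Group forecasts by stated probability and compare with observed frequency.

    Perfect calibration: events forecast at X% happen about X% of the time.
    forecasts: stated probabilities; outcomes: 1 = happened, 0 = did not.
    """
    buckets = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        buckets[p].append(o)
    # observed frequency per stated-probability bucket
    return {p: sum(os) / len(os) for p, os in sorted(buckets.items())}

# a well-calibrated toy forecaster: "70%" events happen 7 times out of 10,
# "20%" events happen 2 times out of 10
forecasts = [0.7] * 10 + [0.2] * 10
outcomes = [1] * 7 + [0] * 3 + [1] * 2 + [0] * 8
print(calibration_table(forecasts, outcomes))  # {0.2: 0.2, 0.7: 0.7}
```

As the book notes, this requires many forecasts per bucket, which is why calibration can't be assessed for rare one-off events like presidential elections.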
 p. 62-66: the two facets of good judgment are captured by calibration and resolution; resolution: high resolution occurs when predictions of low < ~ 20% or high > ~80% probability events are accurately predicted; accurately predicting rare events gets more weight than accurately predicting more common events; a low Brier score is best, 0.0 is perfect, 0.5 is random guessing and 2.0 is getting all or none, or yes or no, predictions wrong 100% of the time; however a score of 0.2 in one circumstance, e.g., weather prediction in Phoenix, AZ looks bad, while a score of 0.2 in Springfield MO is great because the weather there is far less predictable than in Phoenix; apples-to-apples comparisons are necessary, but that kind of data usually doesn’t exist (Me: society is dismally data-poor)
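The Brier score values quoted above (0.0 perfect, 0.5 random guessing, 2.0 always confidently wrong) follow from the original Brier convention, which squares the error over both outcome categories. A minimal sketch (my code, not the book's):

```python
def brier_score(forecasts, outcomes):
    """Brier score over yes/no questions, original two-category convention.

    For each question the forecaster states p = probability of "yes"
    (and implicitly 1-p for "no"); the squared error is summed over both
    categories, giving 2*(p - outcome)^2 per question, then averaged.
    0.0 = perfect, 0.5 = always guessing 50/50, 2.0 = always certain and wrong.
    """
    return sum(2 * (p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# perfect, coin-flip, and confidently-wrong forecasters on the same 3 questions
print(brier_score([1.0, 0.0, 1.0], [1, 0, 1]))  # 0.0
print(brier_score([0.5, 0.5, 0.5], [1, 0, 1]))  # 0.5
print(brier_score([0.0, 1.0, 0.0], [1, 0, 1]))  # 2.0
```

The Phoenix-vs-Springfield point still applies: a raw score is only meaningful against other forecasts of the same questions.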
 p. 68: In Expert Political Judgment (Tetlock's first book), the bottom line was that some experts were marginally better than random guessing - the common characteristic was how they thought, not their ideology, Ph.D. or not, or access to classified information; the typical expert was about as good as random guessing and their thinking was ideological; “They sought to squeeze complex problems into the preferred cause-effect templates and treated what did not fit as irrelevant distractions. Allergic to wishy-washy answers, they kept pushing their analyses to the limit (and then some), using terms like “furthermore” and “moreover” when piling up reasons why they were right and others were wrong. As a result, they were confident to declare things “impossible” or “certain.” Committed to their conclusions, they were reluctant to change their minds even when their predictions clearly failed. They would tell us, ‘Just wait.’”
 p. 69: “The other group consisted of more pragmatic experts who drew on many analytical tools, with the choice of tool hinging on the particular problem they faced. . . . . They talked about possibilities and probabilities, not certainties.”
 p. 69: “The fox knows many things but the hedgehog knows one big thing. . . . . Foxes beat hedgehogs on both calibration and resolution. Foxes had real foresight. Hedgehogs didn’t. . . . . How did hedgehogs manage to do slightly worse than random guessing?”; hedgehog example is CNBC’s Larry Kudlow and his supply side economics Big Idea in the face of the 2007 recession
 p. 70-72: on Kudlow: “Think of that Big Idea as a pair of glasses that the hedgehog never takes off. . . . And, they aren’t ordinary glasses. They are green-tinted glasses . . . . Everywhere you look, you see green, whether it’s there or not. . . . . So the hedgehog’s one Big Idea doesn’t improve his foresight. It distorts it.”; more information helps increase hedgehog confidence, not accuracy; “Not that being wrong hurt Kudlow’s career. In January 2009, with the American economy in a crisis worse than any since the Great Depression, Kudlow’s new show, The Kudlow Report, premiered on CNBC. That too is consistent with the Expert Political Judgment data, which revealed an inverse correlation between fame and accuracy: the more famous an expert was, the less accurate he was.”; “As anyone who has done media training knows, the first rule is keep it simple, stupid. . . . . People tend to find uncertainty disturbing and “maybe” underscores uncertainty with a bright red crayon. . . . . The simplicity and confidence of the hedgehog impairs foresight, but it calms nerves - which is good for the careers of hedgehogs. . . . Foxes don’t fare so well in the media. . . . This aggregation of many perspectives is bad TV.”
 p. 73: an individual who makes a one-off accurate guess is different from people who do it consistently; consistency is based on aggregation, which is the recognition that useful info is widely dispersed and each bit needs a separate weighting for importance and relevance
 p. 74: on information aggregation: “Aggregating the judgments of people who know nothing produces a lot of nothing.” (Hm - what about Disqus channels that demand that all voices and POVs be heard, informed or not?); the bigger the collective pool of accurate information, the better the prediction or assessment; Foxes tend to aggregate, Hedgehogs don’t
 p. 76-77: aggregation: looking at a problem from one perspective, e.g., pure logic, can lead to an incorrect answer; multiple perspectives are needed; using both logic and psycho-logic (psychology or human cognition) helps; some people are lazy and don’t think, some apply logic to some degree and then stop, while others pursue logic to its final conclusion → aggregate all of those inputs to arrive at the best answer; “Foxes aggregate perspectives.”
 p. 77-78: on human cognition - we don’t aggregate perspectives naturally: “The tip-of-your nose perspective insists that it sees reality objectively and correctly, so there is no need to consult other perspectives.”
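The simplest form of the aggregation idea in these notes is an unweighted average of several forecasters' probabilities on each question. A toy sketch (mine, not the book's; the book's point about weighting each bit of information still applies to anything fancier than this):

```python
def aggregate(forecast_sets):
    """Unweighted average of several forecasters' probabilities, per question.

    forecast_sets: one list of probabilities per forecaster, questions aligned
    by position. Note the caveat from p. 74: averaging the judgments of people
    who know nothing still produces a lot of nothing.
    """
    return [sum(ps) / len(ps) for ps in zip(*forecast_sets)]

# three forecasters, two questions
combined = aggregate([[0.9, 0.2], [0.7, 0.4], [0.8, 0.3]])
print([round(p, 2) for p in combined])  # [0.8, 0.3]
```

Weighting forecasters by past accuracy, or looking at the spread of their answers, would be natural refinements of the same idea.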
 p. 79-80: on perspective aggregation: “Stepping outside ourselves and really getting a different view of reality is a struggle. But Foxes are likelier to give it a try.”; people’s temperaments fall along a spectrum from the rare pure Foxes to the rare pure Hedgehogs; “And our thinking habits are not immutable. Sometimes they evolve without our awareness of the change. But we can also, with effort, choose to shift gears from one mode to another.”

  Chapter 4: Superforecasters
p. 84-85: the U.S. intelligence community (IC) is, like every huge bureaucracy (about 100,000 people, about $50 billion budget), very change-resistant - they saw and acknowledged their colossal failure to predict the Iranian revolution, but did little or nothing to address their dismal capacity to predict situations and future events; the WMD-Saddam Hussein disaster 22 years later finally inflicted a big enough shock to get the IC to seriously introspect
 p. 88 (book review comment): my Intelligence Advanced Research Projects Agency work isn’t as exotic as Defense Advanced Research Projects Agency, but it can be just as important
 p. 89: humans “will never be able to forecast turning points in the lives of individuals or nations several years into the future - and heroic searches for superforecasters won’t change that.”; the approach: “Quit pretending you know things you don’t and start running experiments.” (ME: the argument for evidence-based politics)
 p. 90-93: the shocker: although the detailed result is classified (it’s gov’t-funded IARPA research), Good Judgment Project (GJP: https://www.gjopen.com/) volunteers who passed screening and used simple algorithms, but had no access to classified information, beat government intelligence analysts who did have such access; one contestant (a retired computer programmer) had a Brier score of 0.22, 5th highest among 2,800 GJP participants, and then in a later competition among the best forecasters his score improved to 0.14, top among the initial group of 2,800 → he beat the commodities futures markets by 40% and the “wisdom of the crowd” control group by 60% (ME: hire this person and get rich)
 p. 94-95: the best forecasters got things right at 300 days out more than regular forecasters looking out 100 days and that improved over the 4-year GJP experiment: “. . . . these superforecasters are amateurs forecasting global events in their spare time with whatever information they can dig up. Yet they somehow managed to set the performance bar high enough that even the professionals have struggled to get over it, let alone clear it with enough room to justify their offices, salaries and pensions.”
 p. 96: on IARPA’s willingness to critically self-assess after the WMD disaster in Iraq: “And yet, IARPA did just that: it put the intelligence community’s mission ahead of the people inside the intelligence community - at least ahead of those insiders who didn’t want to rock the bureaucratic boat.”
 p. 97-98: “But it’s easy to misinterpret randomness. We don’t have an intuitive feel for it. Randomness is invisible from the tip-of-your-nose perspective. We can see it only if we step outside of ourselves.”; people can be easily tricked into believing that they can predict entirely random outcomes, e.g., guessing coin tosses; “. . . . delusions of this sort are routine. Watch business news on television, where talking heads are often introduced with a reference to one of their forecasting references . . . . And yet many people take these hollow claims seriously.” (bloviation & blither sells)
p. 99: “Most things in life involve skill and luck, in varying proportions.”
 p. 99-101: regression to the mean cannot be overlooked and is a necessary tool for testing the role of luck in performance → regression is slow for activities dominated by skill, e.g., forecasting, and fast for activities dominated by chance/randomness, e.g., coin tossing
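The skill-vs-luck point can be seen in a toy simulation (my sketch, not from the book): model each "year" of performance as a weighted mix of a stable skill term and fresh random luck. The more luck dominates, the lower the year-to-year correlation, and the faster top performers regress to the mean.

```python
import random

random.seed(0)

def year_over_year_correlation(skill_weight, n=10_000):
    """Pearson correlation between two consecutive 'years' of performance,
    where performance = skill_weight * skill + (1 - skill_weight) * luck.
    Skill is fixed per person; luck is redrawn each year."""
    skills = [random.gauss(0, 1) for _ in range(n)]
    year1 = [skill_weight * s + (1 - skill_weight) * random.gauss(0, 1) for s in skills]
    year2 = [skill_weight * s + (1 - skill_weight) * random.gauss(0, 1) for s in skills]
    m1, m2 = sum(year1) / n, sum(year2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(year1, year2)) / n
    v1 = sum((a - m1) ** 2 for a in year1) / n
    v2 = sum((b - m2) ** 2 for b in year2) / n
    return cov / (v1 * v2) ** 0.5

# Skill-dominated activity (like forecasting): scores persist, regression is slow.
print(year_over_year_correlation(0.9))   # high, near 1
# Luck-dominated activity (like coin tossing): scores barely correlate.
print(year_over_year_correlation(0.1))   # near 0
```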
 p. 102-103: a key question is how did superforecasters hold up across the years? → in years 2 and 3, superforecasters were the opposite of regressing to the mean -- they got better; sometimes causal connections are nonlinear and thus not predictable, and some of that had to be present among the variables the forecasters were facing → there should be some regression unless an offsetting process is increasing forecasters’ performance; there is some regression - about 30% (roughly 1 in 3) of superforecasters fall out of the top 2% each year but 70% stay in, and individual year-to-year correlation is about 0.65, which is pretty high → Q: Why are these people so good?

  Chapter 5: Supersmart? p. 114: Fermi-izing questions, breaking a question into relevant parts, can allow better guesses, e.g., how many piano tuners are there in Chicago → guess the total population, the total # of pianos, the time to tune one piano and the hours/year a tuner works → that technique usually helps increase accuracy a lot, even when none of the numbers are known; Fermi-izing tends to defuse the unconscious System 1’s tendency to bait & switch the question; EXAMPLE: would testing of Arafat’s body 6 years after his death reveal the presence of polonium (Po), which is allegedly what killed him? → Q1: can you even detect Po 6 years later? Q2: if Po is still detectable, how could it have happened, e.g., Israel, or Palestinian enemies before or after his death → for this question the outside view, what % of exhumed bodies are found to be poisoned, is hard to (i) identify and (ii) find the answer to, but identifying it is most important, i.e., it’s not certain (< 100%, say 80%), but there has to be more than trivial evidence, otherwise authorities would not allow his body to be exhumed (> 20%) → use the 20-80% halfway point of 50% as the outside view, then adjust the probability up or down based on research and the inside or intuitive System 1 view → that’s using a blend of unconscious intuition plus conscious reason → personal political ideology has little or nothing to do with it p. 118: superforecasters look at questions 1st from Kahneman’s “outside view”, i.e., the statistical or historical base rate or norm (the anchor), and 2nd use the inside view to adjust probabilities up or down → System 1 generally goes straight to the comfortable but often wrong inside view and ignores the outside view; will a Vietnam-China border clash start in the next year? -- the 1st (outside) view asks how many clashes there have been over time, e.g., once every 5 years, and the 2nd (inside) view of current Vietnam-China politics is then merged in to adjust the baseline probability up or down p. 120: the outside view has to come first; “And it’s astonishingly easy to settle on a bad anchor.”; good anchors are easier to find from the outside view than from the inside p. 123-124: some superforecasters kept explaining in the Good Judgment Project online forum how they approached problems and what their thinking was, and asking for criticisms, i.e., they were looking for other perspectives; simply asking if a judgment is wrong tends to lead to improvement in the first judgment; “The sophisticated forecaster knows about confirmation bias and will seek out evidence that cuts both ways.” p. 126: “A brilliant puzzle solver may have the raw material for forecasting, but if he also doesn’t have an appetite for questioning basic, emotionally-charged beliefs he will often be at a disadvantage relative to a less intelligent person who has a greater capacity for self-critical thinking.” p. 127: “For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded.”
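The classic piano-tuner decomposition can be written out as a tiny script; every input below is my rough guess, not data from the book — which is the point of Fermi-izing: even crude inputs, combined, give a defensible order of magnitude.

```python
# Fermi estimate: how many piano tuners are there in Chicago?
# All inputs are rough illustrative guesses.
population = 2_500_000                 # rough Chicago population
households = population / 2.5          # ~2.5 people per household
pianos = households / 20               # guess: 1 in 20 households has a piano

tunings_per_year = pianos * 1          # each piano tuned ~once a year
hours_per_tuning = 2                   # including travel time
tuner_hours_per_year = 40 * 50         # a full-time tuner's annual working hours

tuners = tunings_per_year * hours_per_tuning / tuner_hours_per_year
print(round(tuners))                   # prints 50 -- i.e., on the order of dozens
```

Any one guess could easily be off by 2x, but overestimates and underestimates tend to partially cancel, so the final figure usually lands within the right order of magnitude.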

  Chapter 6: Superquants? p. 128-129: most superforecasters are good at math, but mostly they rely on subjective judgment; one super said this: “It’s all, you know, balancing, finding relevant information and deciding how relevant is this really?”; it’s not math skill that counts most - it’s nuanced subjective judgment p. 138-140: we crave certainty and that’s why Hedgehogs and their confident yes or no answers on TV are far more popular and comforting than Foxes with their discomforting “on the one hand . . . but on the other” style; people equate confidence with competence; “This sort of thinking goes a long way to explaining why so many people have a poor grasp of probability. . . . The deeply counterintuitive nature of statistics explains why even very sophisticated people often make elementary mistakes.”; a forecast of a 70% chance of X happening means that there is a 30% chance it won’t - that fact is lost on most people → most people translate an 80% chance of X to mean X will happen, and that just isn’t so; only when probabilities are closer to even, maybe about 65:35 to 35:65 (p. 144), does the translation for most people become “maybe” X will happen, which is the intuitively uncomfortable translation of the uncertainty associated with most everything p. 143: superforecasters tend to be probabilistic thinkers, e.g., Treasury secy Robert Rubin; epistemic uncertainty describes something unknown but theoretically knowable, while aleatory uncertainty is both unknown and unknowable p. 145-146: superforecasters who used more granularity, e.g., a 20, 21 or 22% chance of X, tended to be more accurate than those who used 5% increments, and they in turn tended to be more accurate than those who used 10% increments, e.g., 20%, 30% or 40%; when estimates were rounded to the nearest 5% or 10%, the granular best superforecasters fell into line with all the rest, i.e., there was real precision in those more granular 1% increment predictions p. 148-149: “Science doesn’t tackle “why” questions about the purpose of life. It sticks to “how” questions that focus on causation and probabilities.”; “Thus, probabilistic thinking and divine-order thinking are in tension. Like oil and water, chance and fate do not mix. And to the extent we allow our thoughts to move in the direction of fate, we undermine our ability to think probabilistically. Most people tend to prefer fate.” p. 150: the sheer improbability of something that does happen, e.g., you meet and marry your spouse, is often attributed to fate or God’s will, not to the understanding that sooner or later most people marry someone at some point in their lives; the following psycho-logic is “incoherent”, i.e., not logic: (1) the chance of meeting the love of my life was tiny, (2) it happened anyway, (3) therefore it was meant to be and (4) therefore, the probability it would happen was 100% p. 152: scoring for the tendency to accept or reject fate and accept probabilities instead, average Americans are mixed or about 50:50, undergrads are somewhat more biased toward probabilities, and superforecasters are the most grounded in probabilities, while rejecting fate as an explanation; the more inclined a forecaster was to believe things are destined or fated, the less accurate their forecasts were, while probability-oriented forecasters tended to have the highest accuracy → the correlation was significant
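A quick illustrative check of the granularity claim (my sketch, not the GJP analysis): if a forecaster's 1%-granular probabilities are genuinely calibrated, rounding them to the nearest 10% throws away information and worsens the Brier score slightly, just as Tetlock's rounding experiment found.

```python
import random

random.seed(1)

def brier(forecasts, outcomes):
    # Two-sided Brier score, as in the book (0 = perfect, lower is better).
    return sum(2 * (f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Simulate a perfectly calibrated granular forecaster: the forecast IS the
# true probability of each simulated event.
true_probs = [random.random() for _ in range(50_000)]
outcomes = [1 if random.random() < p else 0 for p in true_probs]

exact = brier(true_probs, outcomes)
coarse = brier([round(p, 1) for p in true_probs], outcomes)
# exact < coarse: the fine granularity carried real predictive information,
# and rounding it away costs accuracy.
```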

  Chapter 7: Supernewsjunkies? p. 154-155: based on news flowing in, superforecasters tended to update their predictions, and that tended to improve accuracy; it isn’t just a matter of following the news and changing output from sufficient new input - their initial forecasts were 50% more accurate than regular forecasters’ p. 160: belief perseverance = people “rationalizing like crazy to avoid acknowledging new information that upsets their settled beliefs.” → extreme obstinacy, e.g., the fact that something someone predicted didn’t happen is taken as evidence that it will happen p. 161-163: on underreacting to new information: “Social psychologists have long known that getting people to publicly commit to a belief is a great way to freeze it in place, making it resistant to change. The stronger the commitment, the greater the resistance.”; perceptions are a matter of our “identity”; “. . . . people’s views on gun control often correlate with their views on climate change, even though the two issues have no logical connection to each other. Psycho-logic trumps logic.”; “. . . . superforecasters may have a surprising advantage: they’re not experts or professionals, so they have little ego invested in each forecast.”; consider “career CIA analysts or acclaimed pundits with their reputations on the line.” (my observation: once again, ego rears its ugly head and the output is garbage - check your ego at the door) p. 164: on overreacting to new information: dilution effect = irrelevant or noise information can and often does change perceptions of probability, and that leads to mistakes; frequent forecast updates based on small “units of doubt” (small increments) seem to minimize both overreacting and underreacting; balancing new information against the information that drove the original or earlier updates captures the value of all the information p. 170: Bayes’ theorem: new/updated belief/forecast = prior belief x diagnostic value of the new information; most superforecasters intuitively understand Bayes’ theorem, but can’t write the equation down, nor do they actually use it; instead they use the concept and weigh updates based on the value of new information
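That shorthand, new belief = prior belief x diagnostic value of the new information, is Bayes' theorem in odds form: posterior odds = prior odds x likelihood ratio. A minimal sketch with made-up numbers, loosely echoing the 50% outside-view anchor from the Arafat example:

```python
def bayes_update(prior, likelihood_ratio):
    """Return the posterior probability given a prior probability and the
    likelihood ratio of new evidence: P(evidence | hypothesis true) divided
    by P(evidence | hypothesis false)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Start at a 50% outside-view anchor; evidence that is 3x more likely if the
# hypothesis is true pushes the estimate to 75%.
print(bayes_update(0.50, 3.0))   # 0.75
# Barely diagnostic evidence (ratio near 1) barely moves the estimate.
print(bayes_update(0.50, 1.2))   # ~0.545
```

This matches the superforecaster habit described above: frequent small updates, with the size of each move scaled to how diagnostic the new information really is.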

  Chapter 8: Perpetual Beta p. 174-175: two basic mindsets - the growth mindset is that you can learn and grow through hard work; the fixed mindset holds that you have what you were born with and that innate talents can be revealed but not created or developed, e.g., fixed mindsetters say things like “I’m bad at math”, and it becomes a self-fulfilling prophecy; fixed mindset children given harder puzzles give up and lose interest, while growth mindset kids loved the challenge because for them, learning was a priority p. 178: consistently inconsistent - John Maynard Keynes engaged in an endless cycle of try, fail, analyze, adjust, try again; he retired wealthy from his investing, despite massive losses from the Great Depression and other personal blunders; skills improve with practice p. 181-183: prompt feedback on forecasts is necessary for improvement, but it’s usually lacking - experience alone doesn’t compensate - experienced police gain confidence that they are good at spotting liars, but it isn’t true because they don’t improve with time; most forecasters get little or no feedback because (1) their language is ambiguous and their forecasts are thus not precise enough to evaluate - self-delusion is a real concern and (2) there’s usually a long time lag between a forecast and the time needed to get feedback on success or failure - with time a person forgets the details of their own forecasts, and hindsight bias distorts memory, which makes it worse; vague language is elastic and people read into it what they want; hindsight bias = knowing the outcome of an event distorts our perception of what we thought we knew before the outcome; experts succumb to it all the time, e.g., predictions of the loss of the communist power monopoly in the Soviet Union before it disintegrated in 1991 vs. after it happened → expert recall was 31% higher than their original estimate (= hindsight bias) p. 190: “Superforecasters are perpetual beta.” - they have the growth mindset p. 191-192: list of superforecaster tendencies:

Philosophic outlook: Cautious - things are uncertain; Humble - reality is infinitely complex; Nondeterministic - what happens isn’t meant to be and doesn’t have to happen

Ability & thinking style: Actively open-minded - beliefs are hypotheses to be tested, not treasures to be protected; Intelligent, knowledgeable & have a need for cognition (conscious thinking) - intellectually curious, like puzzles and challenges

Forecasting methods: Pragmatic - not wedded to any idea or agenda; Analytical - can step back from the tip-of-nose view and consider other views; Dragonfly-eyed - value diverse views and synthesize them into their own; Probabilistic - judge using many grades or degrees of maybe or chance; Thoughtful updaters - change their minds when the facts change; Good intuitive psychologists - aware of the value of checking personal thinking for cognitive and emotional biases

Work ethic: Have a growth mindset - believe it’s possible to improve; Have grit - determined to keep at it however long it takes

Superforecaster traits vary in importance: perpetual beta mode matters most → the degree to which supers value updating and self-improvement (growth mindset) is a predictor 3 times more powerful than the next best predictor, intelligence

  Chapter 9: Superteams p. 201: success can lead to mental habits that undermine the mental habits that led to success in the first place; on the other hand, properly functioning teams can foster dragonfly-eyed perspectives and thinking, which can improve forecasting p. 208-209: givers on teams are not chumps - they tend to make the whole team perform better; it is complex and it will take time to work out the psychology of groups - replicating this won’t be easy in the real world; “diversity trumps ability” may be true due to the different perspectives a team can generate or, maybe it’s a false dichotomy and a shrewd mix of ability and diversity is the key to optimum performance

  Chapter 10: The Leader’s Dilemma p. 229-230: Tetlock uses the German Wehrmacht as an example of how leadership and judgment can be effectively combined, even though it served an evil end → the points being that (i) even evil can operate intelligently and creatively, so don’t underestimate your opponent, and (ii) seeing something as evil and wanting to learn from it presents no logical contradiction, only a psychological tension that superforecasters overcome because they will learn from anyone or anything that has information or lessons of value

  Chapter 11: Are They Really So Super? p. 232-233: in a 2014 interview Gen. Michael Flynn, head of the DIA (DoD’s equivalent of the CIA; 17,000 employees), said “I think we’re in a period of prolonged societal conflict that is pretty unprecedented.”, but googling the phrase “global conflict trends” says otherwise; Flynn, like Peggy Noonan and her partisan reading of political events, suffered from the mother of all cognitive illusions, WYSIATI (what-you-see-is-all-there-is) → every day for three hours, Flynn saw nothing but reports of conflicts and bad news; what is important is the fact that Flynn, a highly accomplished and intelligent operative, fell for the most obvious illusion there is → even when we know something is a System 1 cognitive illusion, we sometimes cannot shut it off and see unbiased reality, e.g., the Müller-Lyer optical illusion (two equal lines, one with arrow ends pointing out and one with ends pointing in - the in-pointing arrow line always looks longer, even when you know it isn’t) p. 234-237: “. . . . dedicated people can inoculate themselves to some degree against certain cognitive illusions.”; scope insensitivity is a major illusion of particular importance to forecasters - it is another bait & switch bias or illusion where a hard question is unconsciously substituted with a simpler question, e.g., the average amount groups of people would be willing to pay to avoid 2,000, 20,000 or 200,000 birds drowning in oil ponds was the same for each group, $80 in increased taxes → the problem’s scope recedes into the background so much that it becomes irrelevant; the scope insensitivity bias or illusion (Tetlock seems to use the terms interchangeably) is directly relevant to geopolitical problems; surprisingly, superforecasters were less influenced by scope insensitivity than average forecasters - their scope sensitivity wasn’t perfect, but it was good (better than Kahneman guessed it would be); Tetlock’s guess → superforecasters were skilled and persistent in making System 2 corrections of System 1 judgments, e.g., by stepping into the outside view, which dampens System 1 bias and/or ingrains the technique to the point that it is “second nature” for System 1
 p. 237-238: CRITICISM: how long can superforecasters defy psychological gravity? maybe a long time - one superforecaster developed software designed to correct System 1 bias in favor of the like-minded, and that helped lighten the heavy cognitive load of forecasting; Nassim Taleb’s Black Swan criticism of all of this is that (i) rare events, and only rare events, change the course of history and (ii) there just aren’t enough occurrences to judge calibration because so few events are both rare and impactful on history; maybe superforecasters can spot a Black Swan and maybe they can’t - the Good Judgment Project (GJP) wasn’t designed to ask that question
 p. 240-241, 244: REBUTTAL OF CRITICISM: history flows from both Black Swan events and from incremental changes; if only Black Swans counted, the GJP would be useful only for short-term projections and would have limited impact on the flow of events over long time frames; and, if time frames are drawn out to encompass a Black Swan, e.g., the one-day storming of the Bastille on July 14, 1789 vs. that day plus the ensuing 10 years of the French revolution, then such events are not so unpredictable - what’s the definition of a Black Swan?; other than the obvious, e.g., there will be conflicts, predictions 10 years out are impossible because the system is nonlinear p. 245: “Knowing what we don’t know is better than thinking we know what we don’t.”; “Kahneman and other pioneers of modern psychology have revealed that our minds crave certainty and when they don’t find it, they impose it.”; referring to experts’ revisionist response to the unpredicted rise of Gorbachev: “In forecasting, hindsight bias is the cardinal sin.” - hindsight bias not only makes past surprises seem less surprising, it also fosters belief that the future is more predictable than it is

  Chapter 12: What’s Next? p. 251: “On the one hand, the hindsight-tainted analyses that dominate commentary after major events are a dead end. . . . . On the other hand, our expectations of the future are derived from our mental models of how the world works, and every event is an opportunity to learn and improve those models.”; the problem is that “effective learning from experience can’t happen without clear feedback, and you can’t have clear feedback unless your forecasts are unambiguous and scoreable.” p. 252: “Vague expressions about indefinite futures are not helpful. Fuzzy thinking can never be proven wrong. . . . . Forecast, measure, revise: it is the surest path to seeing better.” - if people see that, serious change will begin; “Consumers of forecasting will stop being gulled by pundits with good stories and start asking pundits how their past predictions fared - and reject answers that consist of nothing but anecdotes and credentials. And forecasters will realize . . . . that these higher expectations will ultimately benefit them, because it is only with the clear feedback that comes with rigorous testing that they can improve their foresight.” p. 252-253: “It could be huge - an “evidence-based forecasting” revolution similar to the “evidence-based medicine” revolution, with consequences every bit as significant.”
 p. 253: IS IMPROVEMENT EVEN POSSIBLE?: nothing is certain: “Or nothing may change. . . . . things may go either way.”; whether the future will be the “stagnant status quo” or change “will be decided by the people whom political scientists call the “attentive public.” I’m modestly optimistic.” (Question: is this a faint glimmer of hope that politics can be partially rationalized on the scale of individuals, groups, societies, nations and/or the whole human species?) p. 254-256: one can argue that the only goal of forecasts is to be accurate, but in practice there are multiple goals - in politics the key question is: Who does what to whom? - people lie because self and tribe matter, and in the mind of a partisan (Dick Morris predicting a Romney landslide victory just before he lost is the example Tetlock used - maybe he lied about lying) lying to defend self or tribe is justified because partisans want to be the ones doing whatever to the whom; “If forecasting can be co-opted to advance their interests, it will be.” - but on the other hand, the medical community resisted efforts to make medicine scientific and over time persistence and effort paid off - entrenched interests simply have to be overcome (another faint glimmer of hope?)
 p. 257: Tetlock's focus: “Evidence-based policy is a movement modeled on evidence-based medicine, with the goal of subjecting government policies to rigorous analysis so that legislators will actually know - not merely think they know - whether policies do what they are supposed to do.”; “. . . . there is plenty of evidence that rigorous analysis has made a real difference in government policy.”; analogies exist in philanthropy (Gates Foundation) and sports - evidence is used to feed success and curtail failure p. 262-263: “What matters is the big question, but the big question can’t be scored.”, so ask a bunch of relevant small questions - it’s like pointillism painting - each dot means little but thousands of dots create a picture; clusters of little questions will be tested to see if that technique can shed light on big questions p. 264-265: elements of good judgment include foresight and moral judgment, which can’t be run through an algorithm; asking the right questions may not be the province of superforecasters - Hedgehogs often seem to come up with the right questions - the two mindsets needed for excellence may be different
 p. 266: the Holy Grail of my research: “. . . . using forecasting tournaments to depolarize unnecessarily polarized policy debates and make us collectively smarter.” (Tetlock sees a path forward, but doesn’t aggressively generalize it to all of politics, including the press-media → this is a clear step toward “rational” politics) p. 269: adversarial but constructive collaboration requires good faith; “Sadly, in noisy public arenas, strident voices dominate debates, and they have zero interest in adversarial collaboration. . . . But there are less voluble and more reasonable voices. . . . . let them design clear tests of their beliefs. . . . . When the results run against their beliefs, some will try to rationalize away the facts, but they will pay a reputational price. . . . . All we have to do is get serious about keeping score.”

 GJP-related websites: www.goodjudgement.com https://www.gjopen.com/ 
http://edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii