DP Etiquette

First rule: Don't be a jackass.

Other rules: Do not attack or insult people you disagree with. Engage with facts, logic and beliefs. Out of respect for others, please provide some sources for the facts and truths you rely on if you are asked for that. If emotion is getting out of hand, get it back in hand. To limit dehumanizing people, don't call people or whole groups of people disrespectful names, e.g., stupid, dumb or liar. Insulting people is counterproductive to rational discussion. Insult makes people angry and defensive. All points of view are welcome, right, center, left and elsewhere. Just disagree, but don't be belligerent or reject inconvenient facts, truths or defensible reasoning.

Saturday, August 10, 2019

Science: The Earliest Known Example of ‘Modern Cognition’

What? This doesn’t look like much of anything

Nature, probably the world’s top science journal, has published a paper believed to reveal evidence of the earliest known human abstract drawing. The drawing dates to the Middle Stone Age, about 73,000 years ago. One researcher commented that this find is interpreted as “a prime indicator of modern cognition.” The rock fragment has cross-hatch lines sketched onto stone with red ochre pigment.

The paper’s abstract commented: “This notable discovery pre-dates the earliest previously known abstract and figurative drawings by at least 30,000 years. This drawing demonstrates the ability of early Homo sapiens in southern Africa to produce graphic designs on various media using different techniques.” Although scientists have found older engravings, this research indicates the lines on this stone mark the first known abstract drawing, an indicator of abstract thinking.

The lines on the rock fragment, extrapolated beyond its edges, are interpreted as part of an abstract drawing. The fragment was analyzed and found to be coarse-grained silcrete (length 38.6 mm, width 12.8 mm, height 15.4 mm). One inch equals 25.4 mm, so the fragment is small. According to one researcher, “the abrupt termination of all lines on the fragment edges indicates that the pattern originally extended over a larger surface.” Sometimes, that is how tenuous human knowledge or belief can be.
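The size comparison is easy to check. Here is a minimal conversion of the fragment's reported dimensions (a sketch; the millimeter values are the measurements quoted above, and 25.4 mm per inch is exact):

```python
# Convert the fragment's reported dimensions from mm to inches.
MM_PER_INCH = 25.4
dimensions_mm = {"length": 38.6, "width": 12.8, "height": 15.4}

for name, mm in dimensions_mm.items():
    print(f"{name}: {mm} mm = {mm / MM_PER_INCH:.2f} in")
```

So the fragment is roughly an inch and a half long, about the size of a thumb.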



B&B orig: 9/12/18

Cognitive Impairment Associated with Radical Political Beliefs

Figure legend from the paper: (A) Using factor analysis, we investigated the underlying factor structure of multiple questionnaires about political issues. Three latent factors were identified and labeled “political orientation,” “dogmatic intolerance,” and “authoritarianism” according to the pattern of individual item loadings. Item loadings for each question (questionnaires indicated by different colors) are presented. (B–D) To investigate the relation between these constructs, scores on the three factors were extracted for each individual. (B) We observed a quadratic relationship between political orientation and dogmatic intolerance, revealing that people on the extremes of the political spectrum are more rigid and dogmatic in their world views. (C) A linear relationship between political orientation and authoritarianism was observed, with people from the far right of the political spectrum showing more obedience to authorities and conventions. (D) Dogmatic intolerance and authoritarianism were positively correlated, indicating commonality between these two sub-components of radicalism.
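The quadratic pattern described in (B) can be illustrated with a small simulation. This is a hedged sketch on synthetic data, not the study's data or method: it fits linear and quadratic polynomials to factor-score-like variables and compares fit, the kind of test that would reveal that both political extremes score higher on dogmatic intolerance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic factor scores (illustration only, not the study's data):
# orientation runs from -1 (far left) to +1 (far right).
orientation = rng.uniform(-1, 1, 500)
# Build in the quadratic pattern: dogmatic intolerance rises at both extremes.
dogmatism = 0.8 * orientation**2 + rng.normal(0, 0.1, 500)

def r_squared(x, y, degree):
    """Fraction of variance explained by a polynomial fit of given degree."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    return 1 - residuals.var() / y.var()

# A quadratic model captures the U-shape; a straight line cannot.
print(f"linear R^2:    {r_squared(orientation, dogmatism, 1):.2f}")
print(f"quadratic R^2: {r_squared(orientation, dogmatism, 2):.2f}")
```

On data with this U-shape, the quadratic fit explains most of the variance while the linear fit explains almost none, which is what motivates reporting a quadratic relationship.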

Widening polarization about political, religious, and scientific issues threatens open societies, leading to entrenchment of beliefs, reduced mutual understanding, and a pervasive negativity surrounding the very idea of consensus. Such radicalization has been linked to systematic differences in the certainty with which people adhere to particular beliefs. However, the drivers of unjustified certainty in radicals are rarely considered from the perspective of models of metacognition, and it remains unknown whether radicals show alterations in confidence bias (a tendency to publicly espouse higher confidence), metacognitive sensitivity (insight into the correctness of one’s beliefs), or both. (Max Rollwage et al., Current Biology, Vol. 28, Iss. 24, Pgs. 4014-4021, Dec. 17, 2018)

Metacognition: awareness and understanding of one's own thought processes, roughly, self-awareness.

Confidence bias (overconfidence effect): a bias observed as a person’s subjective confidence in his or her judgements being greater than the objective accuracy of those judgements, especially when confidence is relatively high; overconfidence is one example of a miscalibration of subjective probabilities.

Motivated reasoning: a powerful emotion-biased decision-making phenomenon; the term refers to the role of motivation in cognitive processes such as decision-making and attitude change in a number of situations, including cognitive dissonance reduction, e.g., in the face of discomforting information or logic.

The journal Current Biology published a paper, Metacognitive Failure as a Feature of Those Holding Radical Beliefs, with some evidence that individuals who hold radical beliefs tend to lack self-awareness relative to others. This mindset was associated with higher confidence in correct and incorrect choices, and a reduced tendency to change levels of confidence in the face of new but contrary information.

The researchers pointed out that multiple cognitive effects could be going on that would account for the observed opinionation and resistance to change among radicals. Influences the researchers tried to dissect included motivated reasoning, confidence bias, and metacognition:
An unjustified certainty in one’s beliefs is a characteristic common to those espousing radical beliefs, and such overconfidence is observed for both political and non-political issues, implying a general cognitive bias in radicals. However, the underpinnings of radicals’ distorted confidence estimates remain unknown. In particular, one-shot measures of the discrepancy between performance and confidence are unable to disentangle the contributions of confidence bias (changes in an overall belief about performance, which may be affected by optimism and mood) from changes in metacognitive sensitivity (an ability to distinguish accurate from inaccurate performance). This distinction may be particularly important as changes in metacognitive sensitivity may account for radicals’ reluctance to change their mind in the face of new evidence.
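The distinction the authors draw between confidence bias and metacognitive sensitivity can be made concrete with toy numbers. This sketch uses one simple operationalization (my own, not the paper's method): bias is overall confidence relative to accuracy, while sensitivity is how well confidence separates correct from incorrect trials.

```python
# Toy illustration (not the paper's method) of separating two confidence
# measures from trial-level data: each trial is (correct flag, confidence 0-1).
def confidence_profile(trials):
    """trials: list of (correct: bool, confidence: float)."""
    accuracy = sum(c for c, _ in trials) / len(trials)
    mean_conf = sum(conf for _, conf in trials) / len(trials)
    conf_correct = [conf for c, conf in trials if c]
    conf_wrong = [conf for c, conf in trials if not c]
    return {
        # Overall tendency to report more confidence than accuracy warrants.
        "confidence_bias": mean_conf - accuracy,
        # Insight: how much confidence separates correct from incorrect trials.
        "metacognitive_sensitivity":
            sum(conf_correct) / len(conf_correct)
            - sum(conf_wrong) / len(conf_wrong),
    }

# An overconfident participant with poor insight: high confidence everywhere.
poor = confidence_profile([(True, 0.9), (False, 0.9), (True, 0.9), (False, 0.8)])
# A calibrated participant: confident when right, doubtful when wrong.
good = confidence_profile([(True, 0.9), (False, 0.3), (True, 0.8), (False, 0.2)])
```

Both participants are right half the time, yet the first shows a large bias and almost no sensitivity while the second shows the reverse, which is exactly why a one-shot performance-versus-confidence gap cannot disentangle the two.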

This research does not shed light on the direction of cause and effect. Commenting on the paper, Steven Novella writes:
What this study cannot tell us about is the arrow of cause and effect. One possibility is that those who lack the metacognitive ability to properly assess and correct their own confidence levels will tend to fall into more extreme views. Their confidence will allow them to more easily brush off dissenting opinions and information, more nuanced and moderate narratives, and the consensus of opinion.

At the same time I find it plausible that those who become radicalized into extreme political views may adopt overconfidence and stubbornness as a motivated reasoning strategy, in order to maintain their views, which they hold for emotional and identity reasons. This may become more of a general cognitive style that they employ, rather than being limited to just their radical views.

The results described here are asserted to come from the first attempt to tease cognitive processes apart to determine the cognitive and social sources of radicalism. Because of that, the research needs to be replicated and expanded to generate confidence in the results and conclusions. It is reasonable to think that multiple influences lead to radicalization including life experiences, personality, self and social identity, etc.

On replication of this research, it may turn out that a major source of radicalization, maybe the most important source, is impaired metacognition, as the researchers propose. In that case, there is a large body of research and real world experience with methods to teach enhanced metacognitive skill. The education community is fully aware of the usefulness of metacognition in education.

One reader of the 2009 Handbook of Metacognition in Education wrote this in his foreword to the handbook: “This handbook goes a long way toward capturing the state of the science and the art of the study of metacognition. It reveals great strides in the sophistication and precision with which metacognition can be conceptualized, assessed, and developed [and] covers the gamut, including research and development on metacognition across a wide variety of subject-matter areas, as well as in more abstract issues of theory and measurement . . . . It is truly a landmark work.”

Maybe there is some hope for deradicalization of radicals and prevention of radicalization in minds susceptible to it. Of course, that raises the question of whether radical beliefs are usually more harmful than beneficial. There appears to be at least some research on that point.[1] For at least some uninformed people, ‘common sense’ might suggest radicalism is generally not a good thing.

Footnote:
1. From the article, Radical Beliefs and Violent Actions Are Not Synonymous: How to Place the Key Disjuncture Between Attitudes and Behaviors at the Heart of Our Research into Political Violence: This article develops and elaborates on three core points. First, as with research into other social science themes, it is argued that it is necessary to apply the logic of correlation and causality to the study of political violence. Second, it highlights the critical disjuncture between attitudes and behaviors. Many or most individuals who support the use of political violence remain on the sidelines, including those who sympathize with insurgents in Afghanistan (reportedly 29 percent in 2011), and those supportive of “suicide attacks” in the Palestinian Territories (reportedly reaching 66 percent in 2005). Conversely, those responsible for such behaviors are not necessarily supportive of the ostensible political aims. Third, it is argued that the motives that drive these attitudes and behaviors are often (or, some would argue, always) distinct. While the former are motivated by collective grievances, there is substantial case study evidence that the latter are commonly driven by economic (e.g., payments for the emplacement of improvised explosive devices), security-based (i.e., coercion) and sociopsychological (e.g., adventure, status, and vengeance) incentives. Thus, it is necessary for the research community to treat attitudes and behaviors as two separate, albeit interrelated, lines of inquiry.

B&B orig: 12/21/18

Does The Human Mind Create Illusions?

Image from a Mike Huckabee political ad: the unconscious mind treats the cross in the background as symbolic of Christianity, and that instantly (within about 50-100 milliseconds) primes the unconscious mind to think positively or negatively about the accompanying message Huckabee is trying to convey.

CONTEXT: In a recent discussion here, one commenter rejected the idea that the human mind can and does create illusions. The idea was dismissed as woo, whatever that is. The logic went something like this: After all, a person can see an object, e.g., my dog snoring on the floor, and know that it is real and it is where it is, i.e., on the floor in my living room next to the boxes of dynamite I keep for self-defense. Other people can see it too, so there is no illusion. Therefore, what is this illusion nonsense? Reasonable informed people cannot possibly believe in illusions.

Things like dogs (and boxes of dynamite) lying on the floor are not illusions, at least in the respects that matter for humans going about their lives. Obviously, humans cannot perceive everything about the dog because we are limited to interpreting the information our senses can detect, at the level of sensitivity at which they operate. But for everyday life, information beyond human senses can be considered irrelevant, and the dog is not an illusion in any significant sense.

So, no illusion, right? And, please, don't throw optical illusions in my face. I'm talking about meaningful illusions, not parlor tricks.

Parlor trick

The sources of illusion: Actually, optical illusions such as the illusory rabbit and the invisible rabbit are more than parlor tricks. They show the human mind creating illusions in real time. But that's not the focus here. This channel is focused on politics: what people think they see and how they think about what they think they saw. The illusions that smart political manipulators and propagandists routinely create to win hearts and minds are absolutely central to politics.

In his 1991 book, The User Illusion, science writer Tor Norretranders summarized some of the neuroscience and cognitive science knowledge on how the human mind works and its data processing capacity. Norretranders describes the experiments of Benjamin Libet from the mid-1980s showing that the mind creates a time shift illusion about when we think we make a decision and when our minds actually made the decision. It turns out that our unconscious mind usually (> 99% of the time ?) makes a decision (1) about 0.5 second before we are consciously aware of the decision, but (2) we consciously believe we made the decision about 0.5 seconds after it was actually made. In other words, the mind creates the illusion that the conscious mind is in control and making decisions when that simply is not true. Later research pushed the time shift illusion out to about 7-10 seconds for at least certain kinds of decisions.

The importance of recognizing the time shift illusion is that the human mind can create illusions all the time and easily, and it happens fast. Our unconscious minds do most of the perceiving, thinking and deciding before we are consciously aware.

Norretranders and later researchers describe the vast difference in data processing power between the unconscious and conscious minds. Current estimates for the unconscious mind put bandwidth at about 11.1 million bits of information per second, with most of that being discarded as trivial. This happens in real time all of the time. The unconscious mind does parallel processing and draws on unknown thousands of memories available to it, but not to consciousness. By contrast the conscious mind can only do serial processing at a maximum of about 500 bits per second and it works from a maximum of 9 memories in working memory at any given time. Unconsciousness is fast and effortless. Consciousness is slow, easily distracted, easily tired and is largely trapped by the unconscious mind. The consciousness trap is this: For politics-related information that contradicts or undermines existing political beliefs, morals and/or ideology, the conscious mind looks to defend beliefs and decisions the unconscious mind has made, even when the beliefs and decisions are objectively false or bad.
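The bandwidth gap is worth making concrete. A back-of-the-envelope comparison using the two estimates quoted above:

```python
# Rough bandwidth comparison using the estimates cited above.
unconscious_bw = 11_100_000  # bits per second, unconscious processing
conscious_bw = 500           # bits per second, conscious processing

print(f"ratio: {unconscious_bw / conscious_bw:,.0f} to 1")  # ratio: 22,200 to 1
```

On these numbers, the conscious mind handles roughly one bit for every 22,000 the unconscious mind processes, which is why most perceiving, thinking and deciding happens before awareness.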

In his 2012 book, The Righteous Mind, social psychologist Jonathan Haidt describes the situation: “The mind is divided into parts, like a rider (consciousness) on an elephant (unconsciousness). The rider evolved to serve the elephant. . . . . intuitions come first, strategic reasoning second. Therefore, if you want to change someone’s mind about a moral or political issue, talk to the elephant first. . . . . moral intuitions (i.e., judgments) arise automatically [unconsciously] and almost instantaneously, long before moral reasoning [conscious reasoning] has a chance to get started, and those first intuitions tend to drive our later reasoning. . . . . The rider is our conscious reasoning—the stream of words and images of which we are fully aware. The elephant is the other 99 percent of mental processes—the ones that occur outside of awareness but that actually govern most of our behavior. . . . . We do moral reasoning not to reconstruct why we ourselves came to a judgment; we reason to find the best possible reasons why somebody else ought to join us in our judgment. . . . . The rider is skilled at fabricating post hoc explanations for whatever the elephant has just done, and it is good at finding reasons to justify whatever the elephant wants to do next. . . . . We make our first judgments rapidly, and we are dreadful at seeking out evidence that might disconfirm those initial judgments.”

In addition to the time shift illusion is the intuitive-emotional nature of cognition or perception. This applies to what we see and hear about politics-related information. In their 2016 book, Democracy for Realists: Why Elections Do Not Produce Responsive Government, social scientists Christopher Achen and Larry Bartels argue that existing evidence points to the inherent limitations of mental data processing and a process of perceiving and thinking about politics-related messaging. They describe the process like this: “. . . . the typical citizen drops down to a lower level of mental performance as soon as he enters the political field. He argues and analyzes in a way which he would readily recognize as infantile within the sphere of his real interests. . . . cherished ideas and judgments we bring to politics are stereotypes and simplifications with little room for adjustment as the facts change. . . . . the real environment is altogether too big, too complex, and too fleeting for direct acquaintance. We are not equipped to deal with so much subtlety, so much variety, so many permutations and combinations. Although we have to act in that environment, we have to reconstruct it on a simpler model before we can manage it.”

In their 2013 book, The Rationalizing Voter, social scientists Milton Lodge and Charles Taber discuss leading hypotheses about how the human mind deals with politics-related information: “We are witnessing a revolution in thinking about thinking. Three decades of research in the cognitive sciences, backed by hundreds of well-crafted behavioral studies in social psychology and new evidence from the neurosciences, posit affect-driven dual process models of thinking and reasoning that directly challenge the way we political scientists interpret and measure the content, structure, and relationships among political beliefs and attitudes. Central to such models is the distinction between conscious and unconscious thinking, with hundreds of experiments documenting pervasive effects of unconscious thoughts and feelings on judgment, preferences, attitude change, and decision-making.”

Another leading researcher commenting on Lodge and Taber’s book wrote this: “The central question in the study of political psychology and public opinion is whether citizens can form and update sensible beliefs and attitudes about politics. Though previous research was skeptical about the capacities of the mass public, many studies in the 1980s and early 1990s emphasized the potential merits of simple heuristics in helping citizens to make reasonable choices. In subsequent years, however, motivated reasoning has been impossible to avoid for anyone who follows either contemporary politics or the latest developments in psychology and political science. . . . . it is increasingly difficult for observers to defend micro-level attitude formation and information processing as rational or even consistently reasonable. Evidence continues to mount that people are often biased toward their prior beliefs and prone to reject counter-attitudinal information in the domains of both opinions and politically controversial facts.”

It is important to point out that when objective fact, truth and sound logic accord with existing beliefs, morals and ideologies, there is no need for the mind to significantly distort fact and logic, and it does not. In a real sense, the mind is a stubborn, easily self-deluded beast that wants the world to be what it wants it to be, even if that isn't the case. There is now a basis in neuroscience that explains this at least partly in terms of the strength of neural pathways and how neural pathways become strong: if a political lie or bogus reasoning (logic) is repeated often enough, the mind often comes to accept it as true.

In summary, all of that research points to a mind that, for reasons related to evolution, routinely distorts politics-related information as it is perceived, and then further distorts that reality unconsciously so that what is perceived better fits with existing beliefs, morals and political and religious ideologies. To the extent perceptions, thinking and decisions conflict with objective fact and unbiased logic, the mind produces illusion. All of that distortion and illusion is also powerfully influenced, unconsciously once again, by society and the social institutions that people are, for better or worse, trapped in. In his 1963 book, Invitation to Sociology, sociologist Peter Berger wrote: “Society not only controls our movements, but shapes our identity, our thought, and our emotions.” Social institutions are therefore, to a significant extent, “structures of our own consciousness.”

If one accepts the foregoing science and its description as more true and valid than not, then one can see a basis to argue that, at least for politics, there is some or even a great deal of illusion that people of all political persuasions are subject to. That is not a criticism of human intelligence. It is a statement of biological, psychological and sociological fact, and the illusion is grounded in human evolutionary heritage.

Although these comments are focused on politics, the evidence shows the same applies to other areas of human activity, especially religion, but also to varying degrees to science and commerce. If one comes to accept that description of the human mind, one can easily come to see humans, their societies, and all their limitations in a very different light.

B&B orig: 1/17/19

Lazy Thinking: A Source of False Beliefs


This discussion is based on this research paper and this article about it in the New York Times. The paper is entitled Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning, and the researchers are Gordon Pennycook and David G. Rand, both at Yale.

The “laziness hypothesis” of belief formation challenges the “hijacked hypothesis”, or maybe complements it. The current mainstream explanation for irrational thinking in politics is that our ability to reason is subverted by our partisan beliefs, ideology and moral mindset. The ‘hijacked hypothesis’ (my moniker) holds that conscious rational thinking is applied mainly to defend existing belief, ideology and morals. That kind of thinking is considered to reflect a powerful unconscious bias called motivated reasoning. The rationale holds that our conscious minds do not operate to critically assess whether asserted facts and associated reasoning are true and make logical sense.

That idea faces competition from another theory, the ‘laziness hypothesis’ (my moniker). In their paper, the authors posit that the explanation for why people are susceptible to fake news is more a matter of mental laziness than of conscious reason being hijacked by the motivated reasoning bias. The authors describe their research goal and their findings:
Here we contrast two broad accounts of the cognitive mechanisms that explain belief in fake news: A motivated reasoning account [the hijacked hypothesis] that suggests that belief in fake news is driven primarily by partisanship, and a classical reasoning account where belief in fake news is driven by a failure to engage in sufficient analytic reasoning [the laziness hypothesis]. . . . . Why do people believe blatantly inaccurate news headlines (“fake news”)? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? . . . . Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se – a finding that opens potential avenues for fighting fake news.

What some of the data looks like: In the graph below, a score of -1.0 applies to people who believe 100% of fake news. A score of +1.0 applies to people who believe 100% of real news. A score of 0.0 applies to people who cannot distinguish fake from real news at all. Deliberative refers to people who are mostly conscious analytic thinkers, and intuitive refers to mostly unconscious intuitive-emotional-moral thinkers. The good news is that regardless of their main mode of thinking, most people can distinguish real from fake news to some extent. The data showing that deliberative thinkers are better at rejecting fake news supports the laziness hypothesis over motivated reasoning as the better explanation for why people believe or disbelieve fake news.
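A discernment score of this shape can be computed in one line. This is one plausible formula for the -1 to +1 scale described above, not necessarily the paper's exact metric:

```python
def discernment(real_believed, real_total, fake_believed, fake_total):
    """Media-truth discernment on a -1..+1 scale:
    +1 = believes all real news and no fake news,
    -1 = believes all fake news and no real news,
     0 = believes real and fake headlines at the same rate.
    (One plausible formula for the scale described; not taken from the paper.)
    """
    return real_believed / real_total - fake_believed / fake_total

# A reader who accepts 9 of 10 real headlines and 2 of 10 fake ones:
print(round(discernment(9, 10, 2, 10), 2))  # 0.7
```

Under this scheme, a reader who believes every headline, real or fake, scores 0, the same as one who believes nothing: the scale measures discrimination, not credulity.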


The authors summarize two of the three studies their paper discusses:
Across two studies with 3446 participants, we found consistent evidence that analytic thinking plays a role in how people judge the accuracy of fake news. Specifically, individuals who are more willing to think analytically when given a set of reasoning problems (i.e., two versions of the Cognitive Reflection Test) are less likely to erroneously think that fake news is accurate. Crucially, this was not driven by a general skepticism toward news media: More analytic individuals were, if anything, more likely to think that legitimate (“real”) news was accurate. . . . . More analytic individuals were also better able to discern real from fake news regardless of their political ideology, and of whether the headline was Pro-Democrat, Pro-Republican, or politically neutral; and this relationship was robust to controlling for age, gender, and education.



What to do next? The authors discuss the problem, but not possible solutions:
Contrary to the popular Motivated System 2 Reasoning account of political cognition [the hijacked hypothesis], our evidence indicates that people fall for fake news because they fail to think [the laziness hypothesis]; not because they think in a motivated or identity-protective way. This suggests that interventions that are directed at making the public more thoughtful consumers of news media may have promise. Ironically, the invention of the internet and social media – which resulted from a great deal of analytic thinking – may now be exacerbating our tendency to rely on intuition, to the potential peril of both ourselves and society as a whole. In a time where truth is embattled, it is particularly important to understand in whom (and why) inaccurate beliefs take hold.

From this observer’s point of view, it is also important to test ways to nudge the public into being more deliberative or critical consumers of news media. As PD points out in his discussion of this paper at his channel, Books & Ideas, that seems to be hard to do:
I've argued (against the current grain) that learning and practicing critical thinking skills and learning civics in an emotionally engaging setting would go a long way in building up "rationality-muscles" that have long atrophied in the age of click-of-the-mouse news and communications generally. . . . . I do know how hard it is to awaken sincere critical thinking in people of any age. . . . . Critical thinking, reading and writing, and other skills are not readily internalized by many students. What I found, for what it's worth, is that the key was to find something that awakens curiosity and emotional interest.

How does one go about building a critical thinking mindset? This sounds like a one-mind-at-a-time endeavor. That seems to be a task for public education. At least in that realm, methods to teach critical thinking exist, and maybe existing knowledge would be sufficient if funded and applied on a nationwide scale. It seems to be a goal that requires long-term effort. This seems to be a prickly problem.

B&B orig: 1/24/19

The Morality Of Framing Issues In Politics


In framing political issues, one presents one's perception of reality, facts and logic to persuade hearts and minds. In essence, a frame is just the words, images and biological effects of how one describes one's own version of reality, reason, and good and bad.

Good frames: Good (effective) frames are ones that are persuasive to the greatest number of people who can be reached and influenced. Some people aren't persuaded by anything, so this tactic fails for them. Good political frames are characterized by simplicity, stickiness (memorability), appeal to emotion and to ideology or values, and implicit or explicit identification of the good guys (the framer and his argument), the bad guys (the opposition and their policy) and the victims (people abused by the bad guys and their policies).

Practical and psychological impacts of frames: Frames can be very powerful. Some experts argue that politics, for smart politicians, is largely a matter of framing and reframing. Politicians who are not smart make the mistake of ‘stepping into their opponent's frame’, which significantly undermines their argument and power to persuade. If you make that mistake, this is what usually results:
1. You give free airtime to your opponent’s frame, including his images, emotions, values and terminology
2. You put yourself on the defensive
3. You usually have a heavier burden of proof to dislodge the opponent’s frame because lots of contrary evidence and explanation is needed to overcome a little evidence, including lies, that supports the frame
4. Your response is often complex and vulnerable because complicated responses to rebut simple frames are usually needed

Examples of stepping into an opponent's frame include:
1. Hillary Clinton trying over and over to explain the simple emailgate frame that was held against her. It was a disaster. Despite Clinton's obvious intelligence, by staying in that frame she never rebutted it on an equal biological footing. That was not smart politics.
2. Trying to rebut the ‘illegal immigrant’ frame by including the phrase ‘illegal immigrant’ in the rebuttal. That just keeps reinforcing the concept ‘illegal’. Instead, the smart politician never steps into that frame and instead always refers to ‘undocumented workers’, ‘undocumented children’ or something like that.
3. The frame: An allegation by a politician who wants to get rid of a bureaucracy that the bureaucracy has insufficient expertise. Stepping into that frame in rebuttal with multiple true facts: (i) we have lots of expert engineers, (ii) they are constantly getting updated training, (iii) the situation is complicated and we are analyzing means for corrective action, (iv) our track record has been excellent in the past. The framer then demolishes the whole in-frame rebuttal by simply asserting: Right, your engineers are constantly getting updated training because they don't have the necessary expertise. Those four defenses provided the framer with four opportunities to blow his opponent out of the water.

Lesson: Never step into your opponent's frame. If you do, you usually lose the persuasion war.

Consequence: Political rhetoric often sounds like people talking past each other, because they are talking past each other to avoid stepping into each other's frame.

Reframing: To avoid an opponent's frame, you need to reframe.
Examples:

1. Frame: Illegal immigrants
Reframe: Illegal employers and/or undocumented workers

2. Frame: You call women bad names and are thus unfit for office
Reframe (metaframe in this case, i.e., attack the frame itself): Political correctness has run amok and that's what's causing this country to fail, so don't tell me about unfitness for office - I'm not politically correct and am proud of it because that's what this country needs (the actual dance between Megyn Kelly and candidate Donald Trump is at footnote 1)

3. Frame: A politician's powerful and critically needed male ally has been found to send sexist text messages and the politician (Australia's prime minister, Julia Gillard, in this case) is accused of condoning sexism
Reframe: The prime minister's metaframe rebuttal accuses her accuser of sexism: “I will not be lectured about sexism and misogyny by this man (the opposition leader making the allegation). I will not. And the Government will not be lectured about sexism and misogyny by this man. Not now, not ever. The Leader of the Opposition says that people who hold sexist views and who are misogynists are not appropriate for high office. Well I hope the Leader of the Opposition has got a piece of paper and he is writing out his resignation. Because if he wants to know what misogyny looks like in modern Australia, he doesn’t need a motion in the House of Representatives, he needs a mirror. . . . .”

Is framing immoral?: Here are competing visions of morality.

- the idealist: framing is dangerous and a form of populism I would never resort to (is that itself a frame, whether the idealist likes it or not?)
- the scientist (political pragmatist, not political ideologue): framing is a moral imperative to influence public opinion, e.g., about climate change, using ‘good frames’

- the conservative: calling illegal immigrants undocumented workers is immoral because it hides the truth of their illegal status
- the liberal: calling undocumented workers illegal immigrants is immoral because it hides the truth of their contributions to society and how they make our lives better

- the campaign manager: the opposition claims it is tough on crime, which implies we aren’t, even though we are tougher than they are, e.g., we prosecute white collar criminals and they don’t – the moral implications of framing are irrelevant, we need a better frame and we need it right now – the real moral issue is their false frame, not our framing of our true position
- the philosopher: ‘What is – and what is not – a frame? There is no such thing as objective reality. Everyone perceives things differently, so there cannot be a single criterion for determining whether or not a certain message constitutes a frame. One person’s calculated frame is another person’s principled standpoint.’

- the politician: ‘Personally speaking, I am against frames, and I would not consider using them under normal circumstances. However, our opponents keep coming up with powerful frames that help them to attract voters and sway public opinion. I believe we have no choice but to participate in the game of framing and reframing.’
- the lecturer: great minds (Marx, Hobbes, etc.) have used simple phrases and turns of phrase – that is not simplistic, superficial, one-dimensional or small-minded; Marx: the rich get richer, the poor get poorer; Hobbes: man is a wolf to man
- the journalist: a famous quote by the American journalist H.L. Mencken states: “For every complex problem there is an answer that is clear, simple, and wrong.” This is a perfect example of a frame.
- the historian: Ronald Reagan once said “Facts are stupid things,” and was widely dismissed as a trivial, shallow B-movie actor. But, when Nietzsche said “There are no facts, only interpretations,” his words were hailed as a profound philosophical insight.


A current example: “But then, in early 2015, the FCC jettisoned this successful, bipartisan approach to the Internet. On express orders from the previous White House, the FCC scrapped the tried-and-true, light touch regulation of the Internet and replaced it with heavy-handed micromanagement. It decided to subject the Internet to utility-style regulation designed in the 1930s to govern Ma Bell.” Ajit Pai, Trump's FCC chairman's written statement from last week in advance of an FCC vote that reversed existing net neutrality rules (discussed here).

Pai's frame, repeated many times in written and public statements, is ‘light touch’ regulation instead of ‘heavy-handed micromanagement’. In this case, the frame was accompanied by lies about the origin of the original FCC net neutrality rules and the originally bipartisan nature of support for net neutrality. Embedded in this frame are at least two objectively provable lies based on a neutral reading of public records.

Questions: Is framing moral, with or without embedded lies? Do lies convert an otherwise honest frame to something immoral?

Are frames with no lies immoral because they are (i) one-dimensional, oversimplifications of reality, and/or (ii) blatant attempts to unfairly or unreasonably persuade people?

Does a rational assessment of morality change when one considers that framing, with or without lies, (i) is constitutionally protected free speech, and (ii) absolutely will be employed by partisans on all sides, with and without lies? In other words, does the idealist set himself up to fail by not taking into account human cognitive and social biology, which is what frames are intended to manipulate or play on?

What is the difference between framing, manipulation, and honest argument? How can one know the difference?

Source materials: Most of the material for this discussion is taken from the edX online course “Framing: Creating powerful political messages”, which is available to the public at no charge here: https://courses.edx.org/courses/course-v1:DelftX+Frame101x+3T2017/course/

The course is short and easy to comprehend. It makes it much easier to understand (i) the reasons for the apparent incomprehensibility of most political rhetoric when people talk past each other, and (ii) politicians' (a) frequent failure to answer straightforward questions, and (b) tendency to reply with things having nothing to do with the question.

Footnote:
1. Megyn Kelly asks Trump about his misogynistic views of women. Trump reframes the question by using the strategy of meta-framing: (1) he does not enter into the frame that he is a misogynist, and (2) he rebuts the allegation with a meta-frame, i.e., the question is not whether he (Trump) is a misogynist, but that too many politicians are politically correct - Trump himself is not politically correct and that is what the country needs.
Kelly: You’ve called women you don’t like “fat pigs”, “dogs”, “slobs” and “disgusting animals”. Your Twitter account -

Trump interrupts: Only Rosie O’Donnell. (applause, cheers and much mirth)

Kelly: No it wasn’t. Your Twitter account - For the record, it was well beyond Rosie O’Donnell. Yes, I’m sure it was. Your Twitter account has several disparaging comments about women’s looks. You once told a contestant on Celebrity Apprentice “it would be a pretty picture to see her on her knees.” . . . . Does that sound to you like the temperament of a man we should elect as president? . . . .

Trump: I think the big problem this country has is being politically correct. I’ve been challenged by so many people and I don’t frankly have time for total political correctness. And to be honest with you, this country doesn’t have time either. This country is in big trouble, we don’t win anymore, we lose to China we lose to Mexico, both in trade and at the border, we lose to everybody. And frankly what I say, and often times it’s fun, it’s kidding, we have a good time, what I say is what I say. And honestly Megyn, if you don’t like it, I’m sorry. I’ve been very nice to you although I could probably maybe not be based on the way you have treated me, but I wouldn’t do that. But you know what, we need strength, we need energy, we need quickness and we need brain in this country to turn it around. That I can tell you right now. (cheers and applause - crowd loves it)



B&B orig: 12/23/17

Religious attacks on a pragmatic political ideology



Science shows that (i) politics is mostly driven by human social and cognitive biology, while political, philosophical, religious or economic ideology is relevant but secondary, (ii) humans routinely distort facts and common sense to make the world we think we see fit with our personal morals, personal ideologies and universal human biases that we got from evolution, and (iii) most of our perceptions, thinking and beliefs about politics are driven by our unconscious minds, with conscious thinking mostly functioning to rationalize or justify what we unconsciously need (not just want) to believe whether our beliefs are true or not. We rarely consciously seek anything that undermines what we need to believe.

Over time, that understanding of human biology and politics sank in and became internalized. That led to understanding why political rhetoric had generally stopped making sense. Politics was significantly based on false facts, false beliefs and heavily biased common sense. When reality didn’t fit personal morals and ideologies, it was distorted to make it fit. When a reality presented to us fits personal belief without distortion, it is usually accepted without question. No wonder liberals and conservatives agreed on almost nothing and rarely or never convinced each other of anything, even the truth of objectively provable facts.

That’s what was nuts. The nonsense of politics now made sense. Balance was restored to the universe. Well, at least my universe.

That new understanding also led to the realization that it might be possible to partially rationalize politics relative to what we have now by adopting a political ideology that directly contradicts the unconscious human tendency to distort reality (facts) and common sense. How effective an anti-bias political ideology might be, or even if it was possible to test it on a national scale, is not knowable without trying the experiment. Perfection in any political ideology isn’t possible based on laws of the universe (2nd law of thermodynamics) and human biology, but those barriers don’t necessarily bar something at least a little better.

As explained before, one version of a social and cognitive science-based ideology can be based on three core morals or political principles: (1) less biased facts, (2) less biased common sense, and (3) an “objectively” defined conception of the public interest. Once the ideology was articulated, it was trotted out with enthusiasm and tested for people’s reactions. The response of enlightened internetizens was almost universal, often vehement, rejection and attack. Liberals, conservatives, libertarians, independents and anyone else who cared to join the ferocious onslaught did so with mucho gusto. In retrospect, that nearly universal hostile reaction should have been expected.

Oh well, live and learn. The universe was unbalanced again.

 

Religious attacks: Some years ago, Bill Nye the Science Guy debated evolution with a lead advocate of the Young Earth Theory, Ken Ham. Some observers thought that was a nutty thing to do because Science Guy was arguing geological and other current science, while Young Earth Guy was arguing contemporary evangelical Christian faith. As predicted, Science Guy and Young Earth Guy just talked past each other. One observer argued that there were two completely different debates going on. This was not a debate over anything resolvable or even testable between the two mind sets.

After digesting the fact of near-universal rejection of the science-based pragmatic, three-morals ideology, a social and cognitive science-based reason became clear. Discussing or debating modern social and cognitive science-based pragmatism, however it is posited, with liberals, conservatives, socialists, libertarians or whatever is essentially no different than Science Guy debating evolution with Young Earth Guy. Political ideology is a matter of religion, not science. Because perfect knowledge is impossible, there necessarily must be some faith in believing in any political ideology, social and cognitive science-based or not. Some social scientists explicitly put politics on the same cognitive footing as religion, e.g., Jonathan Haidt’s book “The Righteous Mind: Why Good People Are Divided by Politics and Religion.”

A social and cognitive science-based pragmatic ideology can fundamentally differ from standard “subjective” political ideologies. Such pragmatism might very well lead to less bias in facts and common sense. But that’s all beside the point. This is about a sort of religious faith. When liberals, conservatives and all the rest attack pragmatism or each other, they are in essence defending their own personal political ‘religion’.

If pragmatism is to ever gain a foothold, it will come from generational change from people who come to reject politics as usual but also reject the ideologies that have got humanity, including America, to where it is today. One ray of hope is that, if nothing else, social and cognitive science-based pragmatism does make testable hypotheses about its performance relative to other ideologies.

An oddity is that since pragmatism isn't concerned about what ideology label a preferred pro-public interest policy choice is given, it could very well end up more or less aligned with liberalism, conservatism or some other ideology. In that regard, pragmatism is an agent (mind set) that’s free to support or undermine any existing ideology on any given issue.

With the basis of the opposition now reasonably explained, the universe is once again in balance. Well, at least my universe.

B&B orig: 10/12/16