CONTEXT: The human mind evolved in such a way that it places greater importance and stronger biological responses on real and perceived threats. The survival benefit is obvious. The human mind also evolved in such a way that it perceives reality through personal biasing lenses or mental processes. Important sources of reality biasing and simplifying include personal morals, political ideology, universal innate biases such as confirmation bias and framing effects, innate and learned mental rules of thumb (heuristics) such as the anchoring and availability heuristics, and our social and group identities, e.g., race, political party affiliation, gender, religion, etc. Human biasing lenses usually operate unconsciously (> 98% of the time?), and perceptions of reality and beliefs are therefore often mistaken as arising from conscious reason. The degree of reality and logic distortion resulting from normal biasing is high, but it appears to be necessary for the human mind to make sense, or coherence, of a complex world based on information that is usually far too limited to provide any rational basis for coherence.
FRAMING EFFECTS: After a terror attack resulting in a murder(s) and a claim of responsibility by a terrorist(s) and/or terror group, US mainstream media sources routinely report on the individual's or group's claim of responsibility. Characterizing an attack as a "claim of responsibility" frames the attack in such a way as to glamorize the attack in the minds of individuals susceptible to terrorism appeals. That is a serious, avoidable error that the mainstream media, and politicians, routinely make.
Instead, "claims of responsibility" for terror attacks should always be framed as something such as "an admission of guilt", "admission of murder", or a "confession to the slaughter of innocents." This mode of framing is emotional and it deprives terrorists and murderous publicity seekers of the glory that standard framing of an incident permits. As discussed previously, some (or all) cognitive scientists now argue that appeal to emotion is often or usually necessary for persuasion. Proper framing saps at least some terrorist recruiting power from an organization when societies universally think of these incidents in this negative frame.
Unconscionable, immoral, harmless error or no error (just free speech)?: Framing effects are innate (hard wired), unconscious and powerful. Ignoring this well-known biological reality in public discourse about terrorism constitutes error so unconscionable that one can reasonably argue it rises to the level of being immoral. It is beyond mere incompetence. Of course, even if one were motivated to do so, proper framing would be difficult and would take time. Old habits are hard to break. Mental thought habits are no exception.
THE AVAILABILITY HEURISTIC: The availability heuristic is an unconscious, reality-simplifying bias[1] that gives undue cognitive weight or importance to the events or ideas most easily recalled, i.e., those most readily available to conscious thought. What is most easily recalled is usually exposure to events or ideas that are the most frequent and/or most recent. Repeated recent exposures reinforce the bias.
The availability heuristic tends to lead people to believe that an easily recalled event is more likely to happen again and more likely to apply to them personally, even when the statistical odds are low.
Although some, e.g., president Trump, have criticized the mainstream media for insufficient coverage of at least some terrorist attacks, some empirical data suggest the opposite is true. Analysis of terrorism coverage by the New York Times shows far more coverage of terrorism than of other events that cause far more deaths. For example, from January 2015 through August 2016, about half of the New York Times' homicide coverage in its first three pages focused on terror attacks, despite the fact that over a 15-year period that included the 9/11 attacks, terrorist murders in America accounted for less than about 2% of all homicides.
In the scheme of things, the risk of death from a terrorist attack on US soil is minuscule. Despite low personal risk, a significant number of Americans nonetheless grossly overestimate the risk and frequently change their behavior to avoid what is essentially a non-existent risk.[2] This grossly flawed thinking about risk spills over into and affects politics and policy. It directly reflects the bias-induced, reality-disconnected error that the availability heuristic unconsciously gives rise to.
If one accepts those facts and that logic, one can again argue that mainstream media coverage of terrorism and the flawed logic it induces in both American citizens and their elected leaders reflects incompetence by both the media and our leaders. Of course, that argument should be set in the context of a mainstream media that is under constant, severe economic pressures to simply survive. Survival means selling news content for profit.
For better or worse, humans are powerfully attracted to, and/or entertained by, violence, fear and anger. The media (and politician?) imperative that "if it bleeds, it leads" is firmly grounded in economic (and political?) reality. But even with that factor in mind, both the press and politicians usually do a dismal job of conveying overall context, including consistently restating relative risk in reporting and in political discourse. If economics requires appeal to emotion and over-reporting of attacks, one can argue that there is an even higher obligation to report relevant context so that the availability and/or other biases don't distort reality more than is reasonable to expect.
Questions: Are the risks of an American civilian being killed in a terror attack anywhere on Earth high (more than 1% per year), medium (0.1 to 0.99% per year) or low (less than 0.1% per year)?* Do politicians have a moral obligation to take statistical reality into account when talking about terrorism, or is politics a matter of any means (usually preferably legal, but sometimes illegal is OK too) justifying the ends? Should the mainstream media reframe terrorist attacks to the extent it makes cognitive sense to do so?
* A: Low, less than 1 in 100,000/year (< 0.001%).
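For readers who want to check the arithmetic, the risk bands defined in the question and the 1-in-100,000 answer can be sketched in a few lines of Python. This is only an illustration of the conversion and classification; the `classify()` helper and its name are ours, not from the source.

```python
# Convert the "1 in 100,000 per year" figure to a percentage and
# bucket it using the high/medium/low thresholds from the question.

def classify(annual_risk_percent: float) -> str:
    """Bucket an annual risk (expressed in percent) per the question's bands."""
    if annual_risk_percent > 1.0:
        return "high"      # more than 1% per year
    if annual_risk_percent >= 0.1:
        return "medium"    # 0.1 to 0.99% per year
    return "low"           # less than 0.1% per year

terror_risk = (1 / 100_000) * 100  # 1 in 100,000 per year, as a percent
print(f"{terror_risk:.3f}% per year -> {classify(terror_risk)}")
# 0.001% per year -> low
```

As the printout shows, 1 in 100,000 is 0.001% per year, two orders of magnitude below even the "low" threshold of 0.1%.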
Information sources:
WNYC, On the Media, May 25, 2017 broadcast
Nemil Dalal, Priceonomics
Footnotes:
1. In essence, a reality simplifying bias is a way the human brain reduces the cognitive load needed to make coherence out of what a person sees, hears or otherwise experiences. Relative to the complexity of the real world, humans have astonishingly limited information processing bandwidth. Humans have no biological choice but to mentally simplify reality, even though errors (reality disconnects) frequently arise in the simplification process.
2. For example, after terrorist attacks in 2015, including an attack in Paris (130 murders) and in San Bernardino (14 murders), 53% of Americans changed their travel plans even though the risk of attack was nil.
B&B orig: 5/29/17
Sunday, August 11, 2019
Cognitive Science: Conspiracy Theory Belief & Teleology Bias
A conspiracy theory
Teleology: the explanation of phenomena by the purpose they serve rather than by postulated causes; a reason or explanation for something as a function of its end, purpose, or goal. For example, teleological: hands are made (by God) for grasping things, vs. non-teleological: hands evolved to grasp things. A teleological explanation includes a final cause or end goal to explain how some system or thing came into being. Teleological thinking is an aspect of thinking related to belief in creationism. It is also known as the argument from design, which argues for God's existence or for an intelligent creator. Teleological thinking is a powerful cognitive bias for people who tend to apply this form of thinking to the real world, and it has influenced religious thinking for millennia.
A team of European researchers recently published evidence that people who tend to accept conspiracy theories often employ teleological thinking as a basis for belief in conspiracies. It is important to note that the evidence amounts to a correlation, not an always-present cause and effect relationship. In other words, the evidence is that believing in final causes (teleological thinking) correlates with conspiratorial thinking.
In their article, the researchers write: “Teleological thinking — the attribution of purpose and a final cause to natural events and entities — has long been identified as a cognitive hindrance to the acceptance of evolution, yet its association to beliefs other than creationism has not been investigated. Here, we show that conspiracism — the proneness to explain socio-historical events in terms of secret and malevolent conspiracies — is also associated to a teleological bias. Across three correlational studies (N > 2000), we found robust evidence of a teleological link between conspiracism and creationism, which was partly independent from religion, politics, age, education, agency detection, analytical thinking and perception of randomness. As a resilient ‘default’ component of early cognition, teleological thinking is thus associated with creationist as well as conspiracist beliefs, which both entail the distant and hidden involvement of a purposeful and final cause to explain complex worldly events.”
In an article misleadingly entitled, ‘Scientists discover the reason people believe in conspiracy theories’, one mainstream media source discussed this research. Referring to the research, the article comments: “They found that conspiracy theorists are more likely to think ‘everything happens for a reason’ and things are ‘meant to be’, an approach they share with another group often considered extreme in their beliefs: creationists.
‘We find a previously unnoticed common thread between believing in creationism and believing in conspiracy theories,’ said Dr Sebastian Dieguez of the University of Fribourg, one of the researchers behind the study.
‘Although very different at first glance, both these belief systems are associated with a single and powerful cognitive bias named teleological thinking, which entails the perception of final causes and overriding purpose in naturally occurring events and entities.’”
In other comments on their data, the researchers observe: “Although teleological thinking has long been banned from scientific reasoning, it persists in childhood cognition, as well as in adult intuitions and beliefs. . . . . the ‘everything happens for a reason’ or ‘it was meant to be’ intuition at the heart of teleological thinking not only remains an obstacle to the acceptance of evolutionary theory, but could also be a more general gateway to the acceptance of antiscientific views and conspiracy theories.”
Prior research had identified other cognitive characteristics of people who tend to believe in conspiracy theories. For example, individuals who are intolerant of uncertainty and seek certainty share a trait called the need for cognitive closure. Evidence indicates that that trait seems to foster, or at least correlates with, conspiracy beliefs about events that have no clear official explanation.
This research represents another step in our incremental evolution of understanding the biology of how people perceive and think about issues in politics and other aspects of life. The mental processes that underpin our perceptions and thoughts are often heavily influenced by our innate biology. In turn, that biology is shaped by both nature, and probably more importantly, nurture. Our culture, families, social identities, personal morals and other factors are all at play in shaping the world we perceive, whether the perception is accurate or not.
Most of this thinking and bias influence arises unconsciously. We are simply not aware of these things, unless we are told about them. And, even when told, many or most people cannot effectively internalize the knowledge. Mindsets are very hard to change. Is that what God intended or is it what arose naturally from evolution?
B&B orig: 8/24/18
Saturday, August 10, 2019
An emotion self-control method
Given the increasing heat and reason-killing emotion that seems to have been occurring recently, a suggestion about a way to maintain self-control seems to be in order.
A person's emotional state affects unconscious and conscious reason in perceptions of reality, discourse and thinking. Emotion is now believed to be a necessary part of cognition, conscious reasoning and moral decision-making. Despite that, out-of-control emotion tends to degrade the quality of reasoning, leading to beliefs or decisions that are objectively less rational and/or detrimental to the individual. Reasonable control of emotion should be generally helpful to people in their everyday lives.
One scientist observes (pdf) that “the neuronal channels going up from the emotional centers of the brain to the more cognitive centers are denser and more robust than the cognitive centers going down to inhibit and control the emotional structures. Self-conscious efforts to avoid prejudice, fear, hatred, and depression are often rendered unsuccessful by this imbalance.”
In other words, emotional self-control often isn’t easy because our brains are wired that way. That’s just normal human biology.
Research psychologists recently published a paper showing that thinking or talking to yourself in the third person helps maintain emotional control in the face of events or information that provoke emotion and a potential loss of self-control. For politics, that means disagreement over political issues, most of which are highly emotionally charged.
Writing in Scientific Reports (vol. 7, Article 4519, published online July 3, 2017), lead scientist Jason Moser reported: “We hypothesized that it does under the premise that third-person self-talk leads people to think about the self similar to how they think about others, which provides them with the psychological distance needed to facilitate self-control. We tested this prediction by asking participants to reflect on feelings elicited by viewing aversive images (Study 1) and recalling negative autobiographical memories (Study 2) using either “I” or their name while measuring neural activity via ERPs (Study 1)[1] and fMRI (Study 2). . . . . Together, these results suggest that third-person self-talk may constitute a relatively effortless form of self-control. . . . . Specifically, using one’s own name to refer to the self during introspection, rather than the first-person pronoun ‘I’, increases peoples’ ability to control their thoughts, feelings, and behavior under stress.”
Commenting on the study, Government Executive writes: “‘Essentially, we think referring to yourself in the third person leads people to think about themselves more similar to how they think about others, and you can see evidence for this in the brain,’ says Jason Moser, associate professor of psychology at Michigan State University. ‘That helps people gain a tiny bit of psychological distance from their experiences, which can often be useful for regulating emotions.’ . . . . ‘What’s really exciting here,’ says [senior researcher Ethan] Kross, who directs the Emotion and Self-Control Lab, ‘is that the brain data from these two complimentary experiments suggest that third-person self-talk may constitute a relatively effortless form of emotion regulation. If this ends up being true—we won’t know until more research is done—there are lots of important implications these findings have for our basic understanding of how self-control works, and for how to help people control their emotions in daily life.’”
Footnote:
1. ERPs: event-related brain potentials are electrical brain responses caused by sensory or cognitive stimuli such as photos or verbal information; ERPs are small but accurately measurable electrical brain responses that occur over about a half second after a stimulus.
fMRI: functional magnetic resonance imaging, is a noninvasive method used to visualize parts of the human brain as it responds to various stimuli such as unpleasant photos, moral dilemmas or information that contradicts personal beliefs; fMRI visualizes areas of brain responses in near real time, with localized brain activity becoming visible a few seconds after a brain area has begun responding to what is seen or heard.
B&B orig: 10/21/17
Cognition And Emotion Interplay: Current Thinking
Cognition: the mental action or process of acquiring knowledge and understanding through experience, the senses, and thinking
In a section of a book to be published in the coming weeks, The Nature of Emotion: Fundamental Questions (2nd edition, New York: Oxford University Press), Hadas Okon‐Singer (HOS) and colleagues address the question of how emotion and cognition interact. Their paper, The Interplay of Emotion and Cognition (pdf), describes current thinking about emotion’s role in perceiving the world and information, thinking about it and understanding it. The implications of current research for the clinical medicine of psychological disorders are profound. Although HOS does not focus on it, the same implications hold for politics.
This short paper illustrates how quickly understanding of matters that until recently were the domain of philosophers is expanding. Modern neuroscience, and psychological and clinical research is making rapid inroads into understanding emotion. HOS comments: “Until the 20th century, the study of emotion and cognition was largely a philosophical matter. Although contemporary theoretical perspectives on the mind and its disorders remain heavily influenced by the introspective measures that defined this earlier era of scholarship, the last several decades have witnessed the emergence of powerful new tools for objectively assaying emotion and brain function, which have yielded new insights into the interplay of emotion and cognition.”
The basic interpretation that HOS draws from existing data is simple but profound: “Emotion—including emotional cues, emotional states, and emotional traits—can profoundly influence key elements of cognition in both adaptive and maladaptive ways.” Until recently, dominant scientific belief was that emotion was a reality and logic distorting influence, and thus it was generally maladaptive or detrimental for rational cognition (seeing and thinking). HOS makes clear that emotion can be helpful. Other researchers have come to the same conclusion. For example, Philip Tetlock, a researcher who analyzes the quality of expert judgment in politics and related topics such as national security and economics, believes that, among other things, consciously controlled emotion is an essential part of accurate expert judgment. (see discussion here)
HOS comments that since the world is far more complex than the human mind can deal with, emotion is a mechanism the mind relies on to help focus attention on what’s important. Citing other researchers, HOS observes that “attention is necessary because . . . . the environment presents far more perceptual information than can be effectively processed, one’s memory contains more competing traces than can be recalled, and the available choices, tasks, or motor responses are far greater than one can handle.” Things like angry faces, erotica (sex!) and snakes are far more attention-grabbing than non-emotional inputs. HOS summarizes this point: “Emotional stimuli are associated with enhanced processing in sensory regions of the brain and amplified processing is associated with faster and more accurate performance.” Clearly, emotion can be adaptive or helpful.
Anxiety: Regarding anxiety disorders, HOS observes that “Individuals show marked differences in the amount of attention they allocate to emotionally salient information. Such attentional biases are intimately related to emotional traits and disorders. Hypervigilance for threat is a core component of both dispositional and pathological anxiety. . . . Anxious individuals are more likely to initially orient their gaze towards threat in free‐viewing tasks; they are quicker to fixate threat‐related targets in visual search tasks; and they show difficulty disengaging from threat‐related distractors . . . . There is compelling evidence that attentional biases to threat causally contribute to the development and maintenance of extreme anxiety.”
Working memory, the mind’s blackboard: Working memory actively recalls, maintains and manipulates (thinks about) information for short periods of time when one is consciously focused on something or a mental task. The amount of such information is very limited. HOS comments that “information transiently held in working memory is a key determinant of our momentary thoughts, feelings, and behavior. Recent work by our group indicates that emotionally salient information enjoys privileged access to working memory. . . . anxious individuals allocate excess storage capacity to threat, even when it is completely irrelevant to the task at hand and no longer present in the external world.”
In other words, emotion is a powerful influence on perceptions of reality and thinking about what is perceived.
Emotional control strategies: Research now shows that some emotion control techniques can effectively tamp down emotional responses. The basis for this is increasingly well understood. HOS points out that “. . . the neurobiological underpinnings of this core human capacity [to control emotion] indicates that circuits involved in attention and working memory play a crucial role in the regulation of emotion and other, closely related aspects of motivated behavior, such as temptation and craving.” The biology of such traits is coming into focus.
One effective strategy is to simply divert attention from emotional or distressing sources or inputs such as disturbing videos, photos or speech. Effects of doing this are observable in brain structures, e.g., the amygdala, that regulate emotional states or feelings. Another emotion-damping technique is to consciously reframe[1] or reassess emotional inputs. As discussed previously, another emotion control mechanism is to think in third person terms instead of first person terms.
HOS concludes by observing that “the last decade has witnessed an explosion of interest in the interplay of emotion and cognition and greater attention to key methodological and inferential pitfalls.” Intrusion of philosophers into neuroscience has no doubt raised concern for pitfalls in both experimental methods and in how the resulting data can be interpreted.
Footnote:
1. Framing effects refer to a powerful innate cognitive bias (https://en.wikipedia.org/wiki/Framing_effect_(psychology)). It leads the mind to perceive, think about and then make judgements about a situation or an issue that comes to one’s attention depending on how information is framed. Careful framing leads to judgments that vary in often or usually predictable ways. People thus tend to make judgments based on the framework in which information or a situation is presented. In politics, framing ideas, issues and people is also called spinning.
A Chilean Naval Ship
B&B orig: 11/5/17
Cognitive Science: Reason as a Secular Moral
A 2016 peer-reviewed paper by psychologist Tomas Ståhl and colleagues at the University of Illinois at Chicago and the University of Exeter suggests that some people see reason and evidence as a secular moral issue. Those people tend to consider the rationality of another's beliefs as evidence of their morality or lack thereof.
According to the paper’s abstract: “In the present article we demonstrate stable individual differences in the extent to which a reliance on logic and evidence in the formation and evaluation of beliefs is perceived as a moral virtue, and a reliance on less rational processes is perceived as a vice. We refer to this individual difference variable as moralized rationality. . . . Results show that the Moralized Rationality Scale (MRS) is internally consistent, and captures something distinct from the personal importance people attach to being rational (Studies 1–3). Furthermore, the MRS has high test-retest reliability (Study 4), is conceptually distinct from frequently used measures of individual differences in moral values, and it is negatively related to common beliefs that are not supported by scientific evidence (Study 5).” Ståhl T, Zaal MP, Skitka LJ (2016) Moralized Rationality: Relying on Logic and Evidence in the Formation and Evaluation of Belief Can Be Seen as a Moral Issue. PLoS ONE 11(11): e0166332.doi:10.1371/journal.pone.0166332.
According to Ståhl’s paper, “People who moralize rationality should not only respond more strongly to irrational (vs. rational) acts, but also towards the actors themselves. . . . . a central finding in the moral psychology literature is that differences in moral values and attitudes lead to intolerance. For example, the more morally convicted people are on a particular issue (i.e., the more their stance is grounded in their fundamental beliefs about what is right or wrong), the more they prefer to distance themselves socially from those who are attitudinally dissimilar.”
ScienceDaily commented on the paper: moral rationalists see less rational individuals as “less moral; prefer to distance themselves from them; and under some circumstances, even prefer them to be punished for their irrational behavior . . . . By contrast, individuals who moralized rationality judged others who were perceived as rational as more moral and worthy of praise. . . . While morality is commonly linked to religiosity and a belief in God, the current research identifies a secular moral value and how it may affect individuals' interpersonal relations and societal engagement.”
ScienceDaily also noted that “in the wake of a presidential election that often kept fact-checkers busy, Ståhl (the paper’s lead researcher) says the findings would suggest a possible avenue to more productive political discourse that would encourage a culture in which it is viewed as a virtue to evaluate beliefs based on logical reasoning and the available evidence. . . . . ‘In such a climate, politicians would get credit for engaging in a rational intellectually honest argument . . . . They would also think twice before making unfounded claims, because it would be perceived as immoral.’”
Since most people believe they are mostly or always quite rational, it seems reasonable to argue that rationality is a moral issue. The finding that people personally value evidence-based rational thinking about political issues suggests it could be a basis for a political principle or moral value in political ideology.
B&B orig: 8/1/18
Cognitive Science: Halo Error Is Inevitable Like Death & Taxes
“The attractiveness stereotype is a specific instance of a more general psychological principle known as the halo effect, in which individuals ascribe characteristics to others based on the presence of another observable characteristic. Such errors are stunningly prevalent in data derived from ratings of others to such an extent that one scholar described the problem thusly: ‘halo error, like death and taxes, seems inevitable.’” Carl Palmer and Rolfe Peterson, American Politics Research, 44(2):353–382, 2016
Halo effect: “The halo effect is a type of immediate judgement discrepancy, or cognitive bias, where a person making an initial assessment of another person, place, or thing will assume ambiguous information based upon concrete information. A simplified example of the halo effect is when an individual noticing that the person in the photograph is attractive, well groomed, and properly attired, assumes, using a mental heuristic, that the person in the photograph is a good person based upon the rules of that individual's social concept.” (presumably, ‘judgement discrepancy’ means a demonstrable deviation of personal belief from objective truth)
In their paper, Halo Effects and the Attractiveness Premium in Perceptions of Political Expertise, Palmer and Peterson observe that “halo errors are thought to be a reflection of a rater’s inability to differentiate between characteristics being evaluated, although in many circumstances, these errors occur automatically, below the level of conscious information processing.”
To some extent, inputs such as a speaker’s personal attractiveness are unconsciously translated into a belief that the speaker is more knowledgeable, competent and/or trustworthy than might be warranted by other inputs such as the content of the speech.
In their study, Palmer and Peterson conducted surveys to assess the halo effect, which earlier studies had reported. They found that, as previously observed, attractive people were subjectively assessed as more knowledgeable and persuasive than others. They also found that attractive people, even if uninformed, were more likely to report attempting to persuade others. In addition, people surveyed “were more willing to turn to more attractive individuals as potential sources for political information.” Those results held even after controlling for possible confounding factors such as partisanship and gender.
The authors pointed out that perceptions of attractiveness seem to be consistent across cultural groups, and that this may be a universal human cognitive trait that is relatively stable over time. They also point out that attractiveness counts in elections: “Beyond competence, there is also a clear preference for more attractive candidates, with those rated as more attractive enjoying greater electoral success . . . . . Under conditions of limited information, citizens appear to vote with their eyes, rather than their minds. It is important to note that these attractiveness biases in expressed preferences not only emerge automatically but also appear to persist in light of additional information about the candidates.”
The latter statement refers to the well-known stickiness of at least some kinds of misinformation, e.g., climate change is a hoax.
The authors speculate about the implications for a fundamental basis of democracy: “It stands to reason that we should expect these biases to creep into political discussions as well, influencing individuals’ political perceptions, orientations, and, most importantly, with whom they choose to discuss politics. This tendency to engage in biased information processing raises questions not only about the ability of citizens to make suitable evaluations of the quality of candidates but also the expertise of political discussants.”
Social science has raised similar concerns before. Evidence of a collapse in respect for expertise appears to be solid and stable, if not growing. Given the complexity of most political issues and the rise of relentless propaganda and disrespect for both fact and logic, there is legitimate cause for concern.
The researchers asked where political influence was coming from. They observed: “But who is trying to influence whom? Is it simply the uninformed, attractive respondents who are influencing their social contacts, or are the politically active attractive individuals well informed, as well as politically active? Given our findings from Table 1, we believe it is the former, rather than the latter. . . . . The end result is that the less informed have their perceptions of the political world shaped and their voting decisions influenced by those they perceive to be credible others. If those perceptions of expertise are mistaken beliefs influenced by an individuals’ physical appearance, many poorly informed individuals might simply be being led astray as they seek to upgrade their political knowledge. The body of evidence we present would seem to confirm these normative concerns.”
B&B orig: 8/19/18