
Wednesday, August 7, 2019

Book Review: The Undoing Project

Author Michael Lewis' book The Undoing Project: A Friendship That Changed Our Minds (W.W. Norton & Co., 2017) describes the collaboration between Israeli psychologists Daniel Kahneman and Amos Tversky. In 2002, Kahneman won a Nobel prize in economics for his contribution to decision theory. To a large extent, their work transformed psychology into a more rigorous profession and pushed its influence to the center of economics.

Given the generality of their work on human cognition, thinking and decision-making, it is reasonable to expect that their work will heavily influence research in many other areas of human activity over time. Whether the new knowledge will translate to American society and its thinking and behavior appears to be very unlikely for the foreseeable future.

Daniel Kahneman

For anyone interested in politics, the question of how the field of psychology went from mostly nonsense to relevant, serious science that could no longer be ignored by the 1980s makes this book well worth the money and time. The book is written for a general audience and is an easy read. It is light on technical details but nonetheless clearly conveys how psychology and cognitive biology moved from the end of their dark ages in the 1900s to core modern relevance.

The book's central theme revolves around the intense academic relationship between two basically incompatible geniuses. Tversky was an organized but arrogant, optimistic and self-confident master of mathematical psychology. By contrast, Kahneman was disorganized, pessimistic and riddled with self-doubt, but he did have an amazing capacity to see core problems in psychology (quirks of human thinking and behavior) that the rest of the field simply could not see. Kahneman's creative insights and his ability to articulate and experimentally get at the root of a problem were, and probably still are, astounding. Tversky's capacities were similar.

Eventually their academic relationship came to a prolonged, unpleasant end. Some years thereafter, in 1996, Tversky died of cancer. Kahneman is professor emeritus at Princeton.

The book's title, The Undoing Project, refers to the effort of the two scientists to "undo", among other things,
(i) the then-dominant 'utility theory' of decision making that dominated and underpinned economic theory and belief; and
(ii) the human mind's intense desire to, and ease of, erasing (undoing) "what was surprising or unexpected."

The rational man: One area their research profoundly affected was economics and its 1700s-vintage utility theory. The theory was based on the assumption that people were usually rational in the economic decisions they made. Kahneman-Tversky research found that wasn't true.[0] One source of systematic error was a common human cognitive trait, the 'belief in the law of small numbers'. They found that people, including professional statisticians and experimental psychologists who should know better, often drew conclusions from amounts of evidence too small to support any conclusions. The data was clear that "people mistook even a very small part of a thing for the whole." The normal human belief is that ANY sample of a large population is more representative of the population than it really is. Humans simply did not evolve to think in terms of statistics.
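To make the small-numbers error concrete, here is a minimal simulation sketch. It is not from Lewis' book; the fair-coin population and the sample sizes are assumptions chosen for illustration. It shows how often a small sample badly misrepresents a population that a large sample pins down closely.

```python
# Minimal sketch (illustrative assumptions, not from the book): small samples
# routinely misrepresent the population they are drawn from.
import random

random.seed(42)
TRUE_RATE = 0.5  # assumed population proportion, e.g., a fair coin

for n in (5, 20, 1000):  # tiny, small and large sample sizes
    estimates = []
    for _ in range(10_000):
        hits = sum(random.random() < TRUE_RATE for _ in range(n))
        estimates.append(hits / n)
    # how often a sample 'conclusion' is off by more than 20 percentage points
    badly_off = sum(abs(e - TRUE_RATE) > 0.2 for e in estimates) / len(estimates)
    print(f"n={n:4d}  P(sample estimate off by >0.20) = {badly_off:.1%}")
```

With tiny samples the estimate is frequently far off the true rate; with n=1000 it essentially never is, which is exactly the distinction intuition tends to blur.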

Heuristics: Tversky and Kahneman's research identified four basic rules (heuristics) the human mind uses to help make decisions, even when there is uncertainty of an unknowable degree. In essence, the human mind is a pleasure machine.[1] People's biological desire to avoid a loss is greater than their desire to secure a similar gain. From an evolutionary point of view, that makes sense. During evolution, people who underestimated risk tended to get eliminated from the gene pool.
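Loss aversion is often illustrated with a prospect-theory-style value function. The sketch below is only illustrative; the parameter values (loss-aversion coefficient around 2.25, curvature around 0.88) are commonly cited estimates from later Kahneman-Tversky work and are assumptions here, not figures from Lewis' book.

```python
# Illustrative prospect-theory-style value function (parameters are assumed,
# commonly cited estimates; not taken from Lewis' book).
ALPHA = 0.88   # diminishing sensitivity for gains
BETA = 0.88    # diminishing sensitivity for losses
LAMBDA = 2.25  # loss-aversion coefficient: losses loom roughly twice as large

def subjective_value(x: float) -> float:
    """Felt value of a gain (x > 0) or loss (x < 0) relative to a reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

# A $100 loss 'hurts' much more than a $100 gain 'pleases':
print(subjective_value(100), subjective_value(-100))  # roughly 57.5 vs -129
```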

Amos Tversky

The blowback: Kahneman and Tversky lost faith in decision analysis in the context of wars that Israel fought. Kahneman expressed the problem in public talks he called "Cognitive Limitations and Public Decision Making." Their attempt to affect decision making was to inject the implications of their research into high-stakes, real-world decisions and government. They tried to do that by forcing experts on decision making to assign odds to all possible outcomes, e.g., war, peace, border skirmishes or attacks by fewer than all adversaries at once.
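For readers unfamiliar with that kind of decision analysis, the sketch below shows the general idea of forcing an exhaustive set of scenarios to carry explicit probabilities. The scenario names and numbers are hypothetical assumptions for illustration, not figures from the book.

```python
# Hypothetical sketch of assigning explicit odds to an exhaustive set of
# outcomes (scenario names and probabilities are illustrative assumptions).
scenarios = {
    "peace holds": 0.55,
    "border skirmishes": 0.25,
    "attack by one adversary": 0.12,
    "full-scale war": 0.08,
}

# The probabilities must cover all possible outcomes exactly once.
assert abs(sum(scenarios.values()) - 1.0) < 1e-9

# Making the numbers explicit lets decision makers compare risks directly.
risk_of_armed_conflict = sum(p for outcome, p in scenarios.items()
                             if outcome != "peace holds")
print(f"implied probability of some armed conflict: {risk_of_armed_conflict:.0%}")
```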

In practice, the exercise failed. Despite their successful efforts to get Israeli intelligence agencies and politicians to understand scenarios in terms of probabilities, the data and analysis fell on deaf ears. Specifically, Israeli intelligence estimates gave a 10% increase in the risk of another war if Henry Kissinger's peace efforts with Syria failed. Despite the warning, Israeli foreign minister Yigal Allon wasn't impressed and didn't work to bolster Kissinger's peace efforts. Kahneman said, "That was the moment I gave up on decision analysis. No one ever made a decision because of a number. They need a story. . . . the understanding of numbers is so weak that they don't communicate anything. Everyone feels that those probabilities are not real -- that they are just something on somebody's mind."

Lewis puts it like this: "He [Allon] preferred his own internal probability calculator: his gut."

One bright spot - the young: Both Tversky and Kahneman had taught the biology of judgment to elementary or high school students and the two wrote in an unpublished manuscript that "we found these experiences highly encouraging." Lewis writes: "Adult minds were too self-deceptive. Children's minds were a different matter."

Kahneman wrote: "We have attempted to teach people at various levels in government, army, etc. but achieved only limited success."

Under the current retrograde political conditions, the public schools option seems to be the ONLY path to possibly injecting this new knowledge into mainstream American politics and society.

The lost cause: Post truth politics: Unfortunately, the impact of the new knowledge of human cognition and social behavior on politics is weak. It's not non-existent, but current political conditions strongly disfavor rationality. There's a faint pulse, at least for now, but it will be easy to kill.[2]

For decision making based on modern cognitive and social biology, the obvious and probably only path to possibly reach that lofty goal is to require at least one semester, probably two, of instruction in human cognitive and social biology in all public schools. Absent that, it's highly likely (>95% chance ?) that politics will remain as irrational and fantasy-based as it is now and as it will be in at least the upcoming 4 or 8 years.

Lewis' book has lots of other gems in it, for example, describing the impact of emotional states such as potential hope or regret on perceived experiences or reality. The human mind has many ways of distorting both reality and reason. This book makes that crystal clear using both real life anecdotes and descriptions of research by Kahneman, Tversky and others. Given the role of human emotions, reality (including fact) is mostly personal and subjective, not mostly objective.

And, there's this nugget: "To Danny the whole idea of proving that people weren't rational felt a bit like proving that people didn't have fur. Obviously people were not rational, in any meaningful sense of that term."[3]

Questions: Is it true or at least plausible that children can be taught to self-question but adults cannot? If so, is there any point to even discussing this kind of science in the context of politics because adults are a lost cause?

Footnotes:
0. A personal guess as to why psychology had to stay in the dark ages until about the mid-1900s (1960s and later): (a) more wealth allowed more decisions that weren't just survival-based (data shows that the more survival-critical a decision is, the more rational it usually is; poverty or near-survival living focuses the mind on what's needed to survive), and (b) the rise of machines that could analyze much more data than people with just fingers and toes, an abacus or a slide rule.

1. The mind also is an impressive false reality-creating machine. In the context of driving a car: "The brain is limited. There are gaps in our attention. The mind contrives to make those gaps invisible to us. We think we know things we don't. . . . . It's that they [people] don't appreciate the extent to which they are fallible."

2. Given his rhetoric and animosity toward (i) all that went before and (ii) truth, it seems more likely than not that Donald Trump will act to kill Obama's 2015 Behavioral Science Insights Policy Directive, which was based on work by Kahneman and Tversky as adapted for politics by Richard Thaler, a behavioral scientist and economist.

3. And there was this bizarre attack from an academic critic in 1979 who felt that Kahneman and Tversky were being too pessimistic about human cognitive limitations. Lewis wrote: "The masses are not equipped to grasp Amos and Danny's message. The subtleties were beyond them. People needed to be protected from misleading themselves into thinking that their minds were less trustworthy than they actually were. 'I do not know whether you realize just how far that message has spread, or how devastating its effects have been'. . . . Even sophisticated doctors were getting from Danny and Amos only the crude, simplified message that their minds could never be trusted.** What would become of medicine? Of intellectual authority? Of experts?" The critics' fear was obvious and palpable. In the current political climate, the knowledge that Kahneman and Tversky generated will probably fall on deaf ears, or maybe even be subject to vicious post truth political attacks.

** That attack was typical; critics often exaggerated the message, despite what Kahneman and Tversky kept saying explicitly in their publications, i.e., the mind isn't always wrong, but it is subject to errors that are often systematic (not random), predictable and uncomfortably frequent.

B&B orig: 1/16/17

Book review: Moral Brains

An Epiphyllum (leaf cactus) flower

Moral Brains: The Neuroscience of Morality bills itself as a brief review of the state of research into morality (Oxford University Press, 2016). If this is only a brief introduction, it is nonetheless brilliant. This is the first book this reviewer is aware of that shows how pure philosophical reasoning can effectively critique empirical science and point to new lines of research. The philosophers are up to speed on the empirical data and they powerfully integrate it with philosophy.

The book is edited by S. Matthew Liao at the Center for Bioethics at New York University. Liao’s book describes the current four major competing models of moral judgment. Some chapters are written by, and/or commented on by, proponents of three of the four main models. Others directly critique one or more of the models, and three chapters are rebuttals by the researcher credited with starting the neuroscience of morality or by key proponents of one of the models. The book reviews 15 years of data and thinking about the neuroscience of morality. The authors are all thought leaders or highly respected in the field.

The book’s focus includes the emotion vs. reason debate and the philosophical lessons so far, with their implications for future research. This review can only hint at the richness, depth and clarity of the thinking expressed in Moral Brains. This short book review cannot do justice to what’s there.

The models: These models of moral judgment reflect the early state of moral neuroscience.
1. Emotion results from judgment:
Reasoning/unconscious rules → judgment → emotion
2. Emotion causes judgment: Emotion → judgment → reasoning
3. Emotion and reasoning cause judgment (dual inputs): Reasoning + emotion → judgment
4. Judgment contains emotion: Judgment containing emotion ↔ reason

Consideration of emotion or reason as the source of morality is ancient, but the modern debate is significantly framed by David Hume (1711-1776) and Immanuel Kant (1724-1804). Hume argued that reason is a “slave to the passions” and morality is bound up in emotion somehow. By contrast, Kant argued that morals are derived mostly from reason, usually thought of as conscious thinking. By the time one gets to the end of Liao’s book, it is clear that which of the four models is best is open to debate. Nonetheless, the balance of what Liao and the other authors have to say tips things (i) somewhat in favor of Hume and the ‘moral judgment contains emotion’ model, and (ii) modestly against the model in which reasoning leads to judgment, which then causes emotion.

Referring to brain scan data, Liao observes that “Every single neuroimaging study of moral cognition that I know concurs on one point: moral judgments regularly engage brain structures that are associated with emotional processing.” Obviously that isn’t proof, but it is consistent with some significant role for emotion.

The data against the view that emotions are merely moral judgment outputs seems rather convincing. According to author Jesse Prinz (chapter 1): “Numerous studies have shown that induced emotions can influence our moral judgments. . . . . happiness increases positive moral judgments and anger brings them down. The pattern of emotional impact is highly specific. Different emotions have distinctive and predictable contributions.” That makes emotions look at least as much like a moral judgment input as an output. Prinz is a key proponent of the ‘judgment contains emotion’ model.

A recurring Moral Brains theme questions if judgments based on conscious reason are more reliable or ‘truth seeking’ than emotion-based ones. That is open to debate. An interesting observation is that psychoactive drugs can change moral judgments.

Other insights include fairly convincing arguments and some evidence that reason isn’t only a conscious mental process. Previously many philosophers and scientists believed that reason was largely conscious (> 95% ?), but that belief is in question.

One assertion in Liao’s book comes from Walter Sinnott-Armstrong (chapter 14): “One of the most important lessons from the first decade of research in moral neuroscience is that morality is not unified in the brain or anywhere else.” Sinnott-Armstrong points out that (i) morality isn’t located (unified) in any specific part of the brain, (ii) morality isn’t unified by content, e.g., it isn’t just about what’s right and wrong, and (iii) morality isn’t unified by its function, e.g., it isn’t just about using customs and values to guide social conduct.

At this point, the reader might see a contradiction: Liao says emotion-related areas of the brain are involved, but Sinnott-Armstrong says there’s no unity in terms of brain location. There is no contradiction. Although emotion processing centers may often (always?) be involved, there’s more to it than that. Other areas are likely also involved, e.g., as in the judgment contains emotion model where reason also influences moral judgment. To get to that belief, just consider the factor of time. Yes, people often make snap moral judgments. However, when given some time for reason and/or intuition, even a few minutes, moral judgments sometimes drift or change completely.

The neuroscience of morality probably still has at least 2-3 decades of research ahead of it before some basic issues begin to resolve into at least modest clarity. Maybe the most fundamental unanswered question is whether empirical neuroscience can ever lead to normative conclusions about what’s right and wrong. That’s a tough question. Is there a philosopher in the house?

NOTE: From this reviewer’s point of view, politics is more a matter of intuition-emotion and personal morals and identity than fact and logic. Reading Liao’s book reinforces that belief. It provides a current, broad knowledge basis for it. People interested in politics who read this book will easily see direct relevance to real world politics and politicians.

'Generous Gift' hybrid

B&B orig: 5/15/17

Book review: Crystallizing Public Opinion



“It is manifestly impossible for either side in [a political] dispute to obtain a totally unbiased point of view as to the other side. . . . . The only difference between ‘propaganda’ and ‘education’, really, is in the point of view. The advocacy of what we believe in is education. The advocacy of what we don’t believe in is propaganda. . . . . Political, economic and moral judgments, as we have seen, are more often expressions of crowd psychology and herd reaction than the result of the calm exercise of judgment.” Edward Bernays, Crystallizing Public Opinion, 1923

“Intolerance is almost inevitably accompanied by a natural and true inability to comprehend or make allowance for opposite points of view. . . . We find here with significant uniformity what one psychologist has called ‘logic-proof compartments.’ The logic-proof compartment has always been with us.” Edward Bernays, Crystallizing Public Opinion, 1923

“The relativity of truth is the commonplace to any newspaperman, even to one who has never studied epistemology; and, if the phrase is permissible, truth is rather more relative in Washington than anywhere else. . . . . most of the news that comes out of Washington is necessarily rather vague, for it depends on assertions of statesmen who are reluctant to be quoted by name, or even by description.” Edward Bernays, Crystallizing Public Opinion quoting Elmer Davis in his book, History of the New York Times, 1921

“The public and the press, or for that matter, the public and any force that modifies public opinion, interact. . . . . The truth is that while it appears to be forming public opinion on fundamental matters, the press is often conforming to it. . . . . Proof that the public and the institutions that make public opinion interact is shown in instances in which books were stifled because of popular disapproval at one time and then brought forward by popular demand at a later time when public opinion had altered. Religious and very early scientific works are among such books.” Edward Bernays, Crystallizing Public Opinion, 1923

Book review: Edward Bernays (1891-1995), nephew of Sigmund Freud, coined the term “public relations.” He advocated use of shrewd, sophisticated, science-based propaganda to both conform to and shape public opinion to sell products and ideas. Bernays arguably was among the 30 most influential but least well known Americans of the 20th century. He was instrumental in establishing public relations as a necessary component of commercial, political and other important interests in building acceptance of what the PR person’s client was selling.

Products Bernays helped sell in his lifetime ranged from consumer products, commercial ideas and a stage play designed to inform the public about a serious public health issue (syphilis) to coaxing Americans into a patriotic fervor about, and support for, entry into World War I. Consumer products he successfully sold included bacon, hair nets and silk. Commercial ideas he successfully sold included public support for private ownership of electric utilities and, against a prevailing public belief that jewelry was useless, public acceptance of the idea that jewelry was really valuable and desirable. One commentator credited Bernays with being a key influencer in converting the American public’s mindset from a needs-based one (buy only what you need) to a desires-based one (buy what you want).



In coaxing the American public into accepting entry into World War I, Bernays worked for the U.S. Committee on Public Information, a federal government propaganda agency dedicated to building American public support for the war. Before then, Americans were skeptical about entering the war. After realizing how amazingly successful this propaganda effort was in changing public opinion in both the US and Britain, Bernays realized that since science-based propaganda could be used to sell political ideas, it should also work for consumer and commercial products and ideas.

Bernays was right.

In his 1923 book, Crystallizing Public Opinion, Bernays lays out his argument that propaganda and public relations were both critical and good in democratic governance. People who strongly shaped Bernays’ thinking included his uncle, Freud; social psychologist Wilfred Trotter, who coined the term ‘logic-proof compartment’ and authored the 1916 book The Instincts of the Herd in Peace and War; British political scientist and social psychologist Graham Wallas (Human Nature in Politics, 1908); and the reporter and political commentator Walter Lippmann (a socialist who invented the concept of ‘stereotype’ as it is now understood in modern psychology), who has been called the ‘Father of Modern Journalism’ by some commentators.[1]

Bernays professed to hold as a core concept the role of ethics in propaganda. Until the end of his life, he never felt that propaganda was a means to deceive, but instead was a means to inform or educate, thereby shaping public opinion. He never wavered in his belief that he was always on the side of good and right. Among other things, his later book Propaganda (1928) was his attempt to rehabilitate the term propaganda from being synonymous with deceit and lies back to its original meaning of educating. Ironically, Bernays’ work for the U.S. Committee on Public Information (CPI) was part of what helped lead the US public to think that propaganda meant deceit and lies. That meaning still prevails today.

In the introduction to Crystallizing Public Opinion by Stuart Ewen (2011), Ewen observes that “In many ways, the experiences of the First World War challenged many mainstream intellectuals’ faith in the possibility of direct democracy.[2] The propaganda efforts of the CPI reinforced a growing belief that ordinary men and women were incapable of rational thought. For democracy to work effectively, public opinion needed to be guided by what historian Robert Westbrook has characterized as ‘enlightened and responsible elites.’”

As Bernays alludes to in Crystallizing Public Opinion, basic definitions can be basically impossible to articulate. Thus, what’s an ‘enlightened and responsible elite’ to one person can easily be an uninformed and irresponsible dolt to another.

Nuts and bolts: Crystallizing Public Opinion is a short, easy to read book (155 pages). This book review is based on the edition with an excellent 30 page introduction by Stuart Ewen (2011). For anyone interested in politics and the science of politics, this book is highly recommended. It provides an outstanding history and context for modern American politics and commerce in the words of a key influencer.



Footnote:
1. Lippmann was pivotal in convincing president Wilson to establish the Committee on Public Information, which rejected the term propaganda. The CPI considered its content to be educational and based on facts, with no other argument involved. History has shown that self-delusion to be blatantly false. Lippmann worked with Bernays on the CPI.

2. It’s not clear if Ewen really means true direct democracy in the old Athens Greece sense or whether he refers to American indirect democracy.

B&B orig: 5/18/17

Book review: The Political Mind



CONTEXT: Dissident Politics advocates a pragmatic brand of politics that is focused on applying less biased versions of facts and logic in service to a competition of ideas-based vision of political morals and the public interest. The point was to see if it was possible to develop a plausible science-based ideology that is more rational and conscious reason-driven than existing ideologies. Conceptions of dominant American ideologies, e.g., liberalism, conservatism, socialism and capitalism, are based primarily on unconscious, reflexive and intuitive-emotional-moral perceptions of reality and thinking that distorts fact and logic. The pragmatic ideology concept arose mostly from personal observations of American politics and study of the biology of politics, mainly cognitive and social science research on politics and human cognition. Although the pragmatic ideology was internally consistent and logically defensible, cognitive and social science kept pointing to an astonishing weakness of objective fact and logic as (i) persuasive, and (ii) as a rational core for any political ideology. That disconnect prompted more study of the modern cognitive and social science of politics. The Political Mind was part of that effort.

BOOK REVIEW: Cognitive linguist George Lakoff wrote The Political Mind: A Cognitive Scientist’s Guide To Your Brain And Its Politics, which was published in 2008 and 2009 (Penguin Books, New York, NY). Lakoff’s central hypothesis argues that reliance on “Old Enlightenment” (OE) visions of conscious reason (fact- and logic-based) is detrimental to the defense of democratic values.

Lakoff argues that OE incorrectly assumes that reason is, among other things, conscious, universal (same for everyone), logical (consistent), unemotional, self-interested and literal or disembodied, where mind logic fits world logic. Instead, reason is unconscious and emotion-dependent, inconsistent, embodied and not universal. He argues that unconscious thought itself is reflexive (automatic and not consciously controlled), while conscious thought is reflective (consciously controlled).

Lakoff’s argument that reason is embodied, not disembodied, seems to cast doubt on pure logic as a persuasive source of moral authority, at least if one assumes that people’s cognitive biology cannot be overcome.

Lakoff is a staunch liberal. He sees the rise of conservative messaging and political influence as a direct and profound threat to American democratic values and the moral mission of government, which is protecting and empowering the public. According to Lakoff, “the radical conservative political and economic agenda is putting public resources and government functions into private hands, while eliminating the capacity of government to protect and empower the public. . . . The Old Enlightenment reason approach not only fails, it wastes effort, time and money.” In other words, facts alone are ineffective.

He goes on to explain: “Politics is about moral values. . . . . Most of what we understand in public discourse is not in the words themselves, but in the unconscious understanding that we bring to the words. . . . . our systems of concepts are used to make sense of what is said overtly. . . . . The very use of the left-to-right scale metaphor serves to empower conservatives and marginalize progressives. . . . The left-to-right scale metaphor is not harmless. It is politically manipulated to the disadvantage of American democratic ideals.”

There is no ambiguity about Lakoff’s politics. He explains at length the power of framing issues in progressive and conservative frames to influence progressive and conservative modes of thinking. His core argument is that when a progressive accepts a conservative frame of an issue, the progressive is at a disadvantage, or maybe even concedes the issue to the conservative point of view. Framing examples that Lakoff cites include illegal immigration, which conservatives frame as a matter of dealing with “illegal immigrants” but which, he argues, ought to be progressively framed as a matter of illegal employers and/or consumers. Similarly, health care isn’t a conservative matter of health care “insurance”; instead, it’s a progressive matter of government’s central moral role in protecting and empowering its citizens.[1]

Based on the science, Lakoff argues that American politics amounts to a competition for minds based on messaging to or leveraging two fundamentally different progressive and conservative moral modes of thinking. Those thought modes are based, among other things, on different sets of moral beliefs and personal social identity. The core progressive moral value is empathy and what flows logically from it. As applied to government, Lakoff argues that empathy underpins democratic values of protection and empowerment of citizens. His vision of the conservative view is that fact and logic play a far less important role than is the case for progressives. That implies that, for whatever reasons, relatively more reliance on fact- and logic-based conscious reason leads to better politics and outcomes than less reliance.

Where progressives fail, Lakoff argues, is in not abandoning OE conceptions of reason, fact and logic, and in not embracing a New Enlightenment (NE) conception of reason that accounts for the cognitive biology of political and moral thought. Lakoff’s vision of NE holds that it is rational (conscious), embodied, emotional, empathetic, metaphorical and only partly universal. NE reason (1) incorporates emotion that’s structured by frames, metaphors, images and symbols, and (2) requires a new philosophy of morality and politics because the brain isn’t neutral or a general purpose computer. Human cognition is severely limited in what it can make sense of. Much of what is perceived is filtered through frames, metaphors and symbols to simplify the cognitive load of making a complex world fit into a specific personal understanding of the world. In short, everyone’s reality is different, in significant part because their morals are different.

Questions: Is Lakoff’s argument persuasive that “there are no moderates” and the only modes of political thinking that exist are either progressive or conservative for any given issue?* If that’s true, how can one account for the pragmatic, neither progressive nor conservative mindset reflected by the superforecasters that cognitive scientists have detected among a few otherwise normal people (maybe 0.1% to 0.01% of the adult human population)? Is B&B barking up the wrong tree by downplaying emotion and relying on the OE vision of reason, fact and logic, given the evidence that objective fact and logic are not effective persuaders? Should fact- and logic-based conscious reason in politics lead to better outcomes in the long run? If so, why, and if not, why not?

* Lakoff argues that people are rarely or never all progressive or conservative in thinking about all issues. For some issues progressive thinking dominates, while conservative thinking dominates for other issues.



Footnote:
1. Lakoff observes that about one-third of private health care cost is for profit and administration; Medicare spends 3% on administration and none on “profiteering”. He cites a short taped conversation between President Nixon and his aide John Ehrlichman regarding a new trend among health care insurers. The gist of the conversation:
Ehrlichman: Incentives favor less medical care; the less care they give, the more money they make.
Nixon: Fine.
E: The incentives run the right way.
N: (admiringly) Not bad.
Lakoff argues that here, Nixon was identifying with the conservative morals of individual responsibility (be prosperous) and making money any legal way, i.e., raising barriers to health care to increase profit. From that moral point of view, it was a great idea. The progressive moral of empathy and protection for consumers wasn’t part of the thinking. Lakoff argued that’s not a matter of callousness by Nixon, but instead it’s a matter of differing morals shaping unconscious thinking and beliefs.

B&B orig: 5/26/17

Book Review: How Propaganda Works



Jason Stanley's[1] book, How Propaganda Works (Princeton University Press, 2015) starts by observing that there are good reasons to believe that liberal democracies such as the US generally do not exist. The logic dates back to Plato (424-348 BCE). He argued that liberal democracies value freedom and therefore they cannot or will not ban free speech. Plato reasoned that because free speech protects propaganda, liberal democracies empower demagogues who will come to rule, in significant part by shrewd use of propaganda. Due to the natural rise of propaganda-empowered demagogues, Plato considered democracy the worst form of government. In time, demagogues in power become democracy-crushing tyrants and democracies have no defense against that political outcome.

Instead of democracy, which is naturally inefficient and corrupt, Plato argued that virtue- and justice-based rule by philosopher guardians or kings in service to society and the public interest is the best form of government. That form of government amounts to something akin to an aristocracy or monarchy based on the leaders' rare philosophical merit, including the leaders' unusual capacity to apply reason or logic to governance, policy and society's needs.

Others see propaganda, including fake news and bogus logic, as threats to governance and societies. For example, the World Economic Forum's Global Risks 2013 report cites the viral spread of false or baseless information as a risk on a par with terrorism. Plato's student Aristotle came to believe that democracy was the least worst form of government. Stanley comments that ". . . . even Aristotle recognized (in Politics, book 5, chapter 5) that democracy's flaw, the particular instability it faces, comes from 'demagogues' who alternately 'stir up' and 'curry favor' with the people. Aristotle clearly recognized that a chief danger to democracy was flawed ideology and demagogic propaganda."

Stanley asks if the cherished value of free speech constitutes democracy's fundamental flaw, a key source of instability and a well-known, direct route to tyranny.

Political propaganda (PP) defined: Stanley defines PP as "the employment of a political ideal against itself." He identifies two kinds of PP, one being "supporting propaganda," which supports a good, bad or neutral democratic ideal by appealing to, and overloading, humans' amazingly puny rationality and logic bandwidth. However, the demagogue's ultimate political goal is to favorably influence opinion by appeal to emotion such as fear or other cognitive biases. Appeal to human emotion and bias tends to shut down conscious reason. In turn, that tends to close off conscious consideration of other possibilities the demagogue wants to avoid.

By contrast, "undermining propaganda" involves a contradiction between the demagogue's professed democratic ideal and his real goal. Specifically, undermining propaganda is a public appeal in support of a democratic ideal, but the demagogue's real goal is to limit the ideal's realization. An example is a demagogue's appeal to the American ideal of expanding liberty by arguing that tax cuts are necessary to expand personal liberty, but where evidence shows that tax cuts may reduce liberty for more people than benefit.

Stanley points out that PP demagoguery can convey truth or falsehoods and that it can be sincere or insincere. Stanley argues that PP in totalitarian states tends to be rather open and thus not taken seriously by its citizens, while in democracies it is disguised and usually not recognized as political propaganda.



Stanley makes a number of observations:
1. "Flawed ideologies characteristically lead one to sincerely hold a belief that is false and that, because of its falisty, disrupts the rational evaluation of a policy proposal . . . . [citing Hume] . . . . a flawed ideological belief leads to 'an unwillingness to amend immediate judgment in light of reflection.'"
{comment: That sounds a lot like what other social scientists say about voters in democracies, e.g., Christopher Achen and Larry Bartels.}

2. "Lying too is a betrayal of the rational will. But it is a different kind of betrayal of the rational will than propaganda. At least with lying, one purports to provide evidence. Propaganda is worse than that. It attempts to unify opinion without attempting to appeal to our rational will at all. It bypasses any sense of autonomous decision. . . . . [citing Chomsky] . . . . a more nuanced version is . . . . propaganda as 'biased speech.' Propaganda is speech that irrationally closes off certain options that should be considered."
{comment: Again, we see an argument that emotion's impact on human cognitive biology, perceptions of reality and thinking makes PP, lies, deceit, etc. powerful persuasive tools in the hands of skilled artisans such as successful demagogues and tyrants. We also see here the ancient argument, e.g., it was Plato's central belief, that conscious reason is superior to unconscious (implied irrationality) thinking. Modern science suggests a combination of unconscious thinking plus conscious reason to reduce unconscious bias is the optimal 'rational' mind set, e.g., Philip Tetlock's discovery of superforecasters, who are, in essence, pragmatic non-ideologues capable of harnessing both unconscious and conscious thought to see things that others cannot.}

3. ". . . . one central source of ideological belief is our social identities. . . . . revision of flawed ideological belief whose source is flawed social structure is very hard . . . . Because of this, I am skeptical about the search for a psychological strategy individuals can use to 'protect themselves' . . . . what is needed to eliminate problematic ideological belief is to change the practice of a large group of people simultaneously over time, to alter a social identity many people share.
{comment: Other than through mass public education, B&B sees no way to change social identity on a mass scale. Listening to mainstream partisan political rhetoric, particularly conservative rhetoric, it is obvious that there's a whole lot of PP going on all the time. B&B argues that, unless some major catastrophe befalls a society, such changes are necessarily incremental and generational. B&B believes that what Stanley is talking about here will come about only from public education about human cognitive and social biology that, in essence, teaches children the psychology and cognitive biology they can use to at least partially 'protect themselves' from demagogues and their PP.}

4. ". . . . democracy functions as an ideal. . . . . the [ ] conception of norm guidance as faith is too problematic to be adopted. The problem is that faith in democratic ideals leads us to blindness to their violations. . . . . perhaps a reasonable way to adhere to ideal deliberative norms, for example, the norm of objectivity, may be to adopt systematic openness to the possibility that one has been unknowingly swayed by bias. If so, the mark of a democratic culture is one in which participants in debates regularly check themselves for bias, and subject their own beliefs and unthinking use of language to the same critical scrutiny as they do to the beliefs and utterances of others."
{comment: Couldn't agree more. This is precisely what B&B and its predecessors, e.g., Dissident Politics, have argued for years.}

5. "Undermining propaganda . . . . . depends on people having beliefs that are resistant to the available evidence . . . . [and it] . . . . conceals a contradiction of sorts, the beliefs that are resistant to evidence must themselves must be flawed in some way."
{comment: Just let that sink in for a minute. This is a critically important point. Beliefs resistant to evidence are a significant part of the political ideologue's cognitive biology. Just knowing that one thing ought to inject some doubt, but that rarely happens to any meaningful extent for most (>90% ?) ideologues.}

Questions: Is Donald Trump a demagogic propagandist with aspirations to tyranny? Is America engulfed in propaganda, along with an ocean of lies, BS and other forms of deceit? If so, is the propaganda, lies, BS and deceit mostly from progressives, conservatives, both or something else? If it's mostly from one side, which one is it? Is it possible that public education can arm our children with knowledge that they can use to become self-aware and at least partially resistant to propaganda, lies, BS and deceit?

Footnote:
1. Stanley is professor of philosophy at Yale University. He is a staunch progressive. He clearly views PP through that lens and the world view it creates in his mind. His thesis holds that substantial material inequality, just or not, leads to flawed ideology, the existence of which makes demagogic propaganda more persuasive to people on the long end of the stick. In turn, that undermines democracy by creating and maintaining inequalities.

B&B orig: 5/31/17

Book Review: Behave



Robert Sapolsky, Behave: The Biology of Humans at Our Best and Worst: “. . . . when the frontal cortex [conscious reason] labors hard on some cognitive task, immediately afterward individuals are more aggressive, less empathic, charitable and honest. Metaphorically, the frontal cortex says, ‘Screw it. I’m tired and don’t feel like thinking about my fellow human.’”

“We implicitly divide the world into Us and Them, and prefer the former. We are easily manipulated, even subliminally and within seconds, as to who counts as each. . . . . ‘Me’ versus ‘us’ (being prosocial within your group) is easier than ‘us’ versus ‘them’ (prosociality between groups).”


SUMMARY: In Behave: The Biology of Humans at Our Best and Worst (Penguin Press, 2017), author Robert Sapolsky (https://en.wikipedia.org/wiki/Robert_Sapolsky) looks broadly at the collective impacts of what is known about genetics, endocrinology, neuroscience, psychology, anthropology, culture, society, history, evolution and laws of nature on the biology of human behavior. He asks an interesting question: Is enough known to reasonably support an evidence-based belief that humans can progress in terms of more peace, freedom and prosperity, with less war, oppression and poverty? In essence, can the human species learn enough to help survive its self-destructive tendencies? Because of complication and uncertainty, Sapolsky’s answer is an unsettling maybe.

REVIEW: Sapolsky is a neuroendocrinologist and a professor of biology, neurology and neurological sciences at Stanford University. Behave is written for a lay audience and is easy to read. It is a long book (675 pages plus 38 pages of appendices on neuroscience, endocrinology and proteins) that covers a large amount of relevant information from many species. Complicated concepts are explained clearly with modest use of carefully explained technical jargon. Sapolsky’s mindset is holistic. He integrates the various influences on human behavior, e.g., culture, age, and hormone status, more than other authors have done to date. In considering sources of human behavior he rejects categorical simple-cause thinking, e.g., gene X caused this behavior, but hormone Y caused that behavior, while religion Z caused yet another behavior, and by golly, that war 1300 years ago caused yesterday’s bloody mess. The evidence says that all influences are relevant, including what happened in the seconds, minutes, hours, days and centuries before someone does something. Sapolsky describes behavior and the science by referring to those various time frames and their relevance.

Nature vs. nurture: A repeated theme is the interplay between nature and nurture. Evolution provides a mechanism to ensure that the influence of genes is lessened by allowing time for a key part of the brain to experience life before maturing. The human frontal cortex, significantly responsible for moral thinking and decisions, is the last part of the brain to mature. That part of the brain does not finish maturing until the mid-twenties. Existing evidence suggests that this long maturation time limits the impact of genes on adult thinking and behavior.

Once mature, frontal cortex functioning appears to be more influenced by experience, culture and family. The frontal cortex significantly shapes, among other things, adult decision making, risk-taking, morals and identity. The immaturity of the adolescent frontal cortex underpins many teenage behaviors: frustrating, good, bad, dumb, weird, brilliant and bizarre. Other brain regions that affect behavior are mature in adolescents, but their impact isn’t modulated like it is in adults with a mature frontal cortex.

Another repeated theme is rapid, unconscious characterization of people into Us and Them. Humans and many animals share this trait in some form or another. The trait exists in human infants. Mental discrimination of people into groups is based on everything from race, gender and kinship to meaningless traits. This trait has been exploited for millennia by politicians and warlords to create divisions among groups of people, even when the actual differences are insignificant. This is happening in spades in American politics today. It is a source of bad behavior.

Despite his obvious command of science in various fields, Sapolsky does not articulate a clear path to peace and better behavior. His admonition is for forgiveness and tolerance. The most hopeful lesson comes from human history, which generally reflects a capacity of cultures to improve over time. It is not yet known if knowledge of human cognitive and social biology can make a meaningful difference in fostering progress. That’s disappointing.

Despite Sapolsky’s obvious optimism for a better world through science, he leaves a clear impression that it is still too early to know how to extrapolate from science to culture and society:

“It’s complicated. Nothing seems to cause anything; instead everything just modulates something else. . . . . Eventually it can seem hopeless that you can actually fix something, can make things better. But we have no choice but to try.”

At least there is a reason for hope in the uncertainty. Maybe someday knowledge of human behavioral biology can be translated into social good.

B&B orig: 9/16/17