Etiquette



DP Etiquette

First rule: Don't be a jackass.

Other rules: Do not attack or insult people you disagree with. Engage with facts, logic and beliefs. Out of respect for others, please provide some sources for the facts and truths you rely on if you are asked for that. If emotion is getting out of hand, get it back in hand. To limit dehumanizing people, don't call people or whole groups of people disrespectful names, e.g., stupid, dumb or liar. Insulting people is counterproductive to rational discussion. Insult makes people angry and defensive. All points of view are welcome, right, center, left and elsewhere. Just disagree, but don't be belligerent or reject inconvenient facts, truths or defensible reasoning.

Friday, August 9, 2019

China’s Deep Surveillance State is Awakening

Chinese policewoman using facial-recognition sunglasses linked to artificial intelligence data analysis algorithms while patrolling a train station in Zhengzhou, the capital of central China's Henan province

The Independent reports that China’s national public surveillance system is starting to bear fruit in its quest for law and social order: “Chinese police have used facial recognition technology to locate and arrest a man who was among a crowd of 60,000 concert goers.” The man was accused of ‘economic crimes’. Facial recognition cameras were set up at the entrances to the concert.

This isn't the first time China's surveillance system has been able to find people sought by Chinese authorities. For example, 25 suspects were identified with facial recognition and arrested at a beer festival last August.

China openly describes itself as a world leader in facial recognition technology. Chinese citizens are regularly reminded that the growing surveillance system makes it nearly impossible to evade authorities. Presently, China’s system employs 170 million CCTV cameras with another 400 million to be installed in the next three years.

It is easy to see that this kind of technology would be appealing to tyrannies the world over. It may even be of more than passing interest to more than just tyrannies. The question is whether this kind of technology, coupled with other surveillance methods such as tracking of cell phone purchases, cell phone conversations, and cell phone locations, can form the basis to build a Thousand Year Reich that really could last 1,000 years or more. As it stands now, the Chinese people are rapidly and willingly abandoning cash in favor of electronic purchases made via cell phones and online.

It is hard to see how a person could remain out of sight in a country with about 470 million cameras constantly monitoring everything in the cameras' range of vision. This kind of deep surveillance state really is starting to look like a new normal for at least the roughly 1.4 billion Chinese citizens.



B&B orig: 4/23/18

Wednesday, August 7, 2019

Book Review: The Myth of the Rational Voter

“I am suspicious of all the things that the average citizen believes.” -- H. L. Mencken

In The Myth of the Rational Voter: Why Democracies Choose Bad Policies (Princeton University Press, 2007), author Bryan Caplan looks at evidence of voter rationality from an economist's point of view. What Caplan finds in the data is a consistent difference of opinion between professional economists (econs) and non-economists (non-econs). Caplan starts with survey data related to opinions about factors that affect the economy. Econs and non-econs rarely agree. Caplan asks why there's such a consistent difference and what effect that can have on democracy.



Chapter one opens with Caplan’s observation that “What voters don’t know would fill a university library.” After that, things get interesting. Caplan raises the defense of voter ignorance called “rational ignorance.” Econs like the idea of rational ignorance because econs want to believe (are biased to believe) that people think and act rationally. That idea posits that it is rational for voters to be ignorant because their vote has no impact on election outcomes. After all, major elections are never decided by a single vote.

The logic behind that false vision of reality holds that "democracy can function well under almost any magnitude of voter ignorance." The flaw in that logic is the assumption that voters don't make systematic errors. The econs' bias was to assume that voter errors are random and therefore mistakes in voting cancel each other out.

Systematic errors abound: It turns out that voters' errors are far from random; they are usually systematic. The argument that ignorance is rational turns out to be irrational. Voter errors may not affect a voter personally or in a way that they can directly see or feel. Nonetheless, voter misunderstandings do have demonstrable adverse impacts on societies. Regarding bad policy choices that voters can generate, Caplan puts it like this: "When a voter has mistaken beliefs about government policy, the whole population picks up the tab."
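To make the cancellation logic concrete, here is a minimal Python sketch with purely hypothetical numbers (not from Caplan's book). It compares an electorate whose errors are random, and so largely cancel out, with one whose errors are systematically skewed against the better policy.

```python
import random

def election(n_voters, error_rate, systematic):
    """Simulate a yes/no vote where 'yes' is assumed to be the better policy.

    Each voter errs with probability error_rate. If systematic is False,
    errors split 50/50 between yes and no (random noise); if True, every
    error pushes toward 'no' (a systematic bias against the better policy).
    """
    yes_votes = 0
    for _ in range(n_voters):
        if random.random() < error_rate:                 # this voter errs
            vote_yes = (not systematic) and random.random() < 0.5
        else:
            vote_yes = True                              # informed voter picks 'yes'
        yes_votes += vote_yes
    return yes_votes / n_voters

random.seed(1)
print("random errors:    ", election(100_000, 0.6, systematic=False))  # ~0.70, 'yes' still wins
print("systematic errors:", election(100_000, 0.6, systematic=True))   # ~0.40, 'yes' loses
```

With random errors, the informed minority decides the outcome even when most voters are ignorant; with systematic errors, the majority can pick the worse policy, which is Caplan's point.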

Voters perceive realities through a lens of pervasive reality-distorting biases that underlie much of the difference of opinion between econs and non-econs. Caplan identifies four major biases that tend to affect public opinion on most economic issues. The four biases are:
(1) an anti-market bias that causes people to underestimate market forces’ capacity to harmonize private greed with the public interest,
(2) an anti-foreign bias that causes underestimation of the benefits of foreign trade and immigration,
(3) a make-work bias that overestimates the adverse impacts of labor-saving technology and automation, and
(4) a pessimistic bias that leads to underestimation of current economic conditions, often expressed as a nostalgia for earlier times with conditions not as good as people usually imagine they were.



Economists have not been completely ignorant of systematic public biases about economic issues. The eighteenth-century economist and philosopher Adam Smith observed that "science is the great antidote to the poison of enthusiasm and superstition." These biases are old and they run deep and across cultures.

Caplan acknowledges a problem. There is deep public resistance to disquieting knowledge, e.g., the destructive existence and power of the four biases. That kind of knowledge undermines personal beliefs and most people flatly reject it or rationalize it into insignificance. Regarding the make-work bias against automation, Caplan observes: “These arguments [in favor of automation] sound harsh. That is part of the reason they are so unpopular; people would rather feel compassionately than think logically.”

CRITICISM 1: Caplan is aware of and addresses common criticisms of economists and their opinions. He acknowledges that expert econs can be biased and can be wrong when the non-econ public is right. He also observes that both can be wrong, but that both can’t be right. Caplan points to public resistance to what social science is telling the public about human cognition and he fully expects the same flames of public rejection to scorch econs and their opinions.

CRITICISM 2: Two common beliefs about econ bias hold that econs express a self-serving bias because they are (1) privileged, well-off academics with protected jobs, and/or (2) ideologically biased in favor of businesses and wealthy people. Caplan goes through the data and finds that econ bias can account for at most 20% of the difference of opinion between econs and non-econs. If there is systematic bias among econs, it isn't the major source of opinion differences. The data argues that, if anything, econs are less far right in their political ideology than non-econs. And, the data shows that non-econs with increasing information or knowledge usually express opinions closer to econ opinion. Knowledge, or lack thereof, explains more of the econ vs. non-econ opinion split than systematic econ bias.

Caplan includes an appeal for economists to drop their indefensible lingering disbelief in irrationality and get on with accepting and dealing with the reality that the concept of the rational voter is a myth.

B&B orig: 9/26/16

Book Review: The Undoing Project

Author Michael Lewis' book The Undoing Project: A Friendship That Changed Our Minds (W.W. Norton & Co., 2017) describes the collaboration between Israeli psychologists Daniel Kahneman and Amos Tversky. In 2002, Kahneman won a Nobel prize in economics for his contribution to decision theory. To a large extent, their work transformed the professionalism of psychology and forced its influence to the center of economics.

Given the generality of their work on human cognition, thinking and decision-making, it is reasonable to expect that their work will heavily influence research in many other areas of human activity over time. Whether the new knowledge will translate to American society and its thinking and behavior appears to be very unlikely for the foreseeable future.

Daniel Kahneman

For anyone interested in politics, the question of how the field of psychology went from mostly nonsense to relevant, serious science that could no longer be ignored by the 1980s makes this book well worth the money and time. The book is written for a general audience and is an easy read. It is light on technical details but nonetheless clearly conveys the state of psychology and cognitive biology and how that moved from the end of the dark ages in the 1900s to core modern relevance.

The book's central theme revolves around the intense academic relationship between two basically incompatible geniuses. Tversky was an organized but arrogant, optimistic and self-confident master of mathematical psychology. By contrast, Kahneman was disorganized, pessimistic and riddled with self-doubt, but he did have an amazing capacity to see core problems in psychology (quirks of human thinking and behavior) that the rest of the field simply could not see. Kahneman's creative insights, and his ability to articulate and experimentally get at the root of a problem, were, and probably still are, astounding. Tversky's capacities were similar.

Eventually their academic relationship came to a prolonged, unpleasant end. Tversky died of cancer in 1996, some years thereafter. Kahneman is professor emeritus at Princeton.

The book's title, The Undoing Project, refers to the effort of the two scientists to "undo", among other things,
(i) the then-dominant 'utility theory' of decision making that dominated and underpinned economic theory and belief; and
(ii) the human mind's intense desire to, and ease of, erasing (undoing) "what was surprising or unexpected."

The rational man: One area their research profoundly affected was economics and its 1700s-vintage utility theory. The theory was based on the assumption that people were usually rational in the economic decisions they made. Kahneman-Tversky research found that wasn't true.[0] One source of systematic error was the common human cognitive trait of a 'belief in small numbers'. They found that people, including professional statisticians and experimental psychologists who should know better, often drew conclusions from amounts of evidence that are too small to draw any conclusions from. The data was clear that "people mistook even a very small part of a thing for the whole." The normal human belief is that ANY sample of a large population is more representative of the population than it really is. Humans simply did not evolve to think in terms of statistics.
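As an illustration of the 'belief in small numbers' problem (the numbers below are hypothetical, not from the book), this short sketch draws repeated samples from a population where 60% share some trait and shows how wildly small-sample estimates swing compared to large ones.

```python
import random

def estimate_range(population_rate, sample_size, trials=10_000):
    """Min and max proportion observed across many random samples."""
    estimates = []
    for _ in range(trials):
        hits = sum(random.random() < population_rate for _ in range(sample_size))
        estimates.append(hits / sample_size)
    return min(estimates), max(estimates)

random.seed(0)
for n in (5, 20, 1000):
    lo, hi = estimate_range(0.60, n)
    print(f"sample size {n:4d}: estimates ranged from {lo:.2f} to {hi:.2f}")

# Samples of 5 routinely produce estimates anywhere from 0.0 to 1.0, yet people
# (including trained statisticians) treat them as representative of the true 60%.
```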

Heuristics: Tversky and Kahneman's research identified four basic rules (heuristics) the human mind uses to help make decisions, even under uncertainty of an unknowable degree. In essence, the human mind is a pleasure machine.[1] People's biological desire to avoid a loss is greater than their desire to secure a similar gain. From an evolutionary point of view, that makes sense. During evolution, people who underestimated risk tended to get eliminated from the gene pool.
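To show the loss/gain asymmetry numerically, here is a minimal sketch using the kind of value function Kahneman and Tversky later formalized in prospect theory. The parameters (curvature 0.88, loss-aversion coefficient 2.25) are the commonly cited 1992 estimates; they are assumptions for illustration here, not numbers from Lewis' book.

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Felt value of a gain or loss x, relative to the status quo.

    Losses are weighted by lam (> 1), so a loss hurts more than an equal
    gain feels good -- the loss-aversion asymmetry described above.
    """
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

# A 50/50 bet: win $110 or lose $100. Positive in expected dollars...
expected_dollars = 0.5 * 110 + 0.5 * (-100)
# ...but negative in felt value, which is why most people refuse the bet.
felt_value = 0.5 * prospect_value(110) + 0.5 * prospect_value(-100)

print(f"expected dollar value: {expected_dollars:+.2f}")  # +5.00
print(f"felt (prospect) value: {felt_value:+.2f}")        # about -33
```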

Amos Tversky

The blowback: Kahneman and Tversky lost faith in decision analysis in the context of the wars Israel fought. Kahneman described the problem in public talks he called "Cognitive Limitations and Public Decision Making." What they attempted was to inject the implications of their research into high-stakes, real world decision making and government. They tried to do that by getting decision-making experts to assign odds to all possible outcomes, e.g., war, peace, border skirmishes, or attacks by fewer than all adversaries at once.

In practice, the exercise failed. Despite their successful efforts to get Israeli intelligence agencies and politicians to understand scenarios in terms of probabilities, the data and analysis fell on deaf ears. Specifically, Israeli intelligence estimates gave a 10% increase in the risk of another war if Henry Kissinger's peace efforts with Syria failed. Despite the warning, Israeli foreign minister Yigal Allon wasn't impressed and didn't work to bolster Kissinger's peace efforts. Kahneman said "That was the moment I gave up on decision analysis. No one ever made a decision because of a number. They need a story. . . . the understanding of numbers is so weak that they don't communicate anything. Everyone feels that those probabilities are not real -- that they are just something on somebody's mind."

Lewis puts it like this: "He [Allon] preferred his own internal probability calculator: his gut."

One bright spot - the young: Both Tversky and Kahneman had taught the biology of judgment to elementary or high school students, and the two wrote in an unpublished manuscript that "we found these experiences highly encouraging." Lewis writes: "Adult minds were too self-deceptive. Children's minds were a different matter."

Kahneman wrote: "We have attempted to teach people at various levels in government, army, etc. but achieved only limited success."

Under the current retrograde political conditions, the public schools option seems to be the ONLY path to possibly injecting this new knowledge into mainstream American politics and society.

The lost cause: Post truth politics: Unfortunately, the impact of the new knowledge of human cognition and social behavior on politics is weak. It's not non-existent, but current political conditions strongly disfavor rationality. There's a faint pulse, at least for now, but it will be easy to kill.[2]

For decision making based on modern cognitive and social biology, the obvious and probably only path to possibly reach that lofty goal is to require at least one semester, probably two, of instruction in human cognitive and social biology in all public schools. Absent that, it's highly likely (>95% chance ?) that politics will remain as irrational and fantasy-based as it is now and as it will be in at least the upcoming 4 or 8 years.

Lewis' book has lots of other gems in it, for example, describing the impact of emotional states such as potential hope or regret on perceived experiences or reality. The human mind has many ways of distorting both reality and reason. This book makes that crystal clear using both real life anecdotes and descriptions of research by Kahneman, Tversky and others. Given the role of human emotions, reality (including fact) is mostly personal and subjective, not mostly objective.

And, there's this nugget: "To Danny the whole idea of proving that people weren't rational felt a bit like proving that people didn't have fur. Obviously people were not rational, in any meaningful sense of that term."[3]

Questions: Is it true or at least plausible that children can be taught to self-question but adults cannot? If so, is there any point to even discussing this kind of science in the context of politics because adults are a lost cause?

Footnotes:
0. A personal guess as to why psychology had to stay in the dark ages until about the mid-1900s (1960s and later): (a) more wealth allowed more decisions that weren't just survival based (data shows that the more survival-critical a decision is, the more rational it usually is; poverty or near-survival living focuses the mind on what's needed to survive), and (b) the rise of machines that could analyze much more data than people with just fingers and toes, an abacus or a slide rule.

1. The mind also is an impressive false reality-creating machine. In the context of driving a car: "The brain is limited. There are gaps in our attention. The mind contrives to make those gaps invisible to us. We think we know things we don't. . . . . It's that they [people] don't appreciate the extent to which they are fallible."

2. Given his rhetoric and animosity for (i) all that went before and (ii) truth, it seems more likely than not that Donald Trump will act to kill Obama's 2015 Behavioral Science Insights Policy Directive, which was based on work by Kahneman and Tversky as adapted for politics by Richard Thaler, a behavioral scientist and economist.

3. And this bizarre attack came from an academic critic in 1979 who felt that Kahneman and Tversky were being too pessimistic about human cognitive limitations. Lewis wrote: "The masses are not equipped to grasp Amos and Danny's message. The subtleties were beyond them. People needed to be protected from misleading themselves into thinking that their minds were less trustworthy than they actually were. 'I do not know whether you realize just how far that message has spread, or how devastating its effects have been'. . . . Even sophisticated doctors were getting from Danny and Amos only the crude, simplified message that their minds could never be trusted.** What would become of medicine? Of intellectual authority? Of experts?" The critics' fear was obvious and palpable. In the current political climate, the knowledge that Kahneman and Tversky generated will probably fall on deaf ears, or maybe even be subject to vicious post truth political attacks.

** That attack was typical - critics often exaggerated the message beyond what Kahneman and Tversky kept saying explicitly in their publications, i.e., the mind isn't always wrong, but it is subject to errors that are often systematic (not random), predictable and uncomfortably frequent.

B&B orig: 1/16/17

Book review: Moral Brains

An Epiphyllum (leaf cactus) flower

Moral Brains: The Neuroscience of Morality bills itself as a brief review of the state of research into morality (Oxford University Press, 2016). If this is only a brief introduction, it is nonetheless brilliant. This is the first book this reviewer is aware of that shows how pure philosophical reasoning can effectively critique empirical science and point to new lines of research. The philosophers are up to speed on the empirical data and they powerfully integrate it with philosophy.

The book is edited by S. Matthew Liao at the Center for Bioethics at New York University. Liao's book describes the current four major competing models of moral judgment. Some chapters are written by, and/or commented on by, proponents of three of the four main models. Others directly critique one or more of the models, and three chapters are rebuttals by the researcher credited with starting the neuroscience of morality or by key proponents of one of the models. The book reviews 15 years of data and thinking about the neuroscience of morality. The authors are all thought leaders or highly respected in the field.

The book’s content focus includes considering emotion vs. reason, and philosophical lessons so far, with their implications for future research. This review can only hint at the richness, depth and clarity of the thinking expressed in Moral Brains. This short book review cannot do justice to what’s there.

The models: These models of moral judgment reflect the early state of moral neuroscience.
1. Emotion results from judgment: Reasoning/unconscious rules → judgment → emotion
2. Emotion causes judgment: Emotion → judgment → reasoning
3. Emotion and reasoning cause judgment (dual inputs): Reasoning + emotion → judgment
4. Judgment contains emotion: Judgment containing emotion ↔ reason

Consideration of emotion or reason as the source of morality is ancient, but the modern debate is significantly framed by David Hume (1711-1776) and Immanuel Kant (1724-1804). Hume argued that reason is a "slave to the passions" and morality is bound up in emotion somehow. By contrast, Kant argued that morals are derived mostly from reason, usually thought of as conscious thinking. By the time one gets to the end of Liao's book, it is clear that which of the four models is best is open to debate. Nonetheless, the balance of what Liao and the other authors have to say tips things (i) somewhat in favor of Hume and the 'moral judgment contains emotion' model, and (ii) modestly against the model in which reasoning and then judgment cause emotion.

Referring to brain scan data, Liao observes that “Every single neuroimaging study of moral cognition that I know concurs on one point: moral judgments regularly engage brain structures that are associated with emotional processing.” Obviously that isn’t proof, but it is consistent with some significant role for emotion.

The data against the view that emotions are only moral judgment outputs seems rather convincing. According to author Jesse Prinz (chapter 1): "Numerous studies have shown that induced emotions can influence our moral judgments. . . . . happiness increases positive moral judgments and anger brings them down. The pattern of emotional impact is highly specific. Different emotions have distinctive and predictable contributions." That makes emotions look at least as much like a moral judgment input as an output. Prinz is a key proponent of the 'judgment contains emotion' model.

A recurring Moral Brains theme questions if judgments based on conscious reason are more reliable or ‘truth seeking’ than emotion-based ones. That is open to debate. An interesting observation is that psychoactive drugs can change moral judgments.

Other insights include fairly convincing arguments and some evidence that reason isn’t only a conscious mental process. Previously many philosophers and scientists believed that reason was largely conscious (> 95% ?), but that belief is in question.

An assertion in Liao’s book is this by Walter Sinnott-Armstrong (chapter 14): “One of the most important lessons from the first decade of research in moral neuroscience is that morality is not unified in the brain or anywhere else.” Sinnott-Armstrong points out that, (i) morality isn’t located (unified) in any specific part of the brain, (ii) morality isn’t unified by content, e.g., it’s not just being about what’s right and wrong, and (iii) morality isn’t unified by its function, e.g., it’s not just being about using customs and values to guide social conduct.

At this point, the reader might see a contradiction: Liao says emotion-related areas of the brain are involved, but Sinnott-Armstrong says there’s no unity in terms of brain location. There is no contradiction. Although emotion processing centers may often (always?) be involved, there’s more to it than that. Other areas are likely also involved, e.g., as in the judgment contains emotion model where reason also influences moral judgment. To get to that belief, just consider the factor of time. Yes, people often make snap moral judgments. However, when given some time for reason and/or intuition, even a few minutes, moral judgments sometimes drift or change completely.

The neuroscience of morality probably still has at least 2-3 decades of research ahead of it before some basic issues begin to resolve into at least modest clarity. Maybe the most fundamental unanswered question is whether empirical neuroscience can ever lead to normative conclusions about what’s right and wrong. That’s a tough question. Is there a philosopher in the house?

NOTE: From this reviewer’s point of view, politics is more a matter of intuition-emotion and personal morals and identity than fact and logic. Reading Liao’s book reinforces that belief. It provides a current, broad knowledge basis for it. People interested in politics who read this book will easily see direct relevance to real world politics and politicians.

'Generous Gift' hybrid

B&B orig: 5/15/17

Book review: Crystallizing Public Opinion



“It is manifestly impossible for either side in [a political] dispute to obtain a totally unbiased point of view as to the other side. . . . . The only difference between ‘propaganda’ and ‘education’, really, is in the point of view. The advocacy of what we believe in is education. The advocacy of what we don’t believe in is propaganda. . . . . Political, economic and moral judgments, as we have seen, are more often expressions of crowd psychology and herd reaction than the result of the calm exercise of judgment.” Edward Bernays, Crystallizing Public Opinion, 1923

“Intolerance is almost inevitably accompanied by a natural and true inability to comprehend or make allowance for opposite points of view. . . . We find here with significant uniformity what one psychologist has called ‘logic-proof compartments.’ The logic-proof compartment has always been with us.” Edward Bernays, Crystallizing Public Opinion, 1923

“The relativity of truth is the commonplace to any newspaperman, even to one who has never studied epistemology; and, if the phrase is permissible, truth is rather more relative in Washington than anywhere else. . . . . most of the news that comes out of Washington is necessarily rather vague, for it depends on assertions of statesman who are reluctant to be quoted by name, or even by description.” Edward Bernays, Crystallizing Public Opinion quoting Elmer Davis in his book, History of the New York Times, 1921

“The public and the press, or for that matter, the public and any force that modifies public opinion, interact. . . . . The truth is that while it appears to be forming public opinion on fundamental matters, the press is often conforming to it. . . . . Proof that the public and the institutions that make public opinion interact is shown in instances in which books were stifled because of popular disapproval at one time and then brought forward by popular demand at a later time when public opinion had altered. Religious and very early scientific works are among such books.” Edward Bernays, Crystallizing Public Opinion, 1923

Book review: Edward Bernays (1891-1995), nephew of Sigmund Freud, coined the term “public relations.” He advocated use of shrewd, sophisticated, science-based propaganda to both conform to and shape public opinion to sell products and ideas. Bernays arguably was among the 30 most influential but least well known Americans of the 20th century. He was instrumental in establishing public relations as a necessary component of commercial, political and other important interests in building acceptance of what the PR person’s client was selling.

Products Bernays helped sell in his lifetime ranged from consumer products, commercial ideas and a stage play designed to inform the public about a serious public health issue (syphilis) to coaxing Americans into a patriotic fervor about, and support for, entry into World War I. Consumer products he successfully sold included bacon, hair nets and silk. Commercial ideas he successfully sold included public support for private ownership of electric utilities and, against a prevailing public belief that jewelry was useless, public acceptance of the idea that jewelry was really valuable and desirable. One commentator credited Bernays with being a key influencer in converting the American public's mindset from a needs-based one (buy only what you need) to a desires-based one (buy what you want).



In coaxing the American public into accepting entry into World War I, Bernays worked for the U.S. Committee on Public Information, a federal government propaganda agency dedicated to building American public support for the war. Before then, Americans were skeptical about entering the war. After realizing how amazingly successful this propaganda effort was in changing public opinion in both the US and Britain, Bernays realized that since science-based propaganda could be used to sell political ideas, it should also work for consumer and commercial products and ideas.

Bernays was right.

In his 1923 book, Crystallizing Public Opinion, Bernays lays out his argument that propaganda and public relations were both critical and good in democratic governance. People who strongly shaped Bernays' thinking included his uncle, Freud; social psychologist Wilfred Trotter, who coined the term 'logic-proof compartment' and authored the 1916 book The Instincts of the Herd in Peace and War; British political scientist and social psychologist Graham Wallas (Human Nature in Politics, 1908); and the reporter and political commentator Walter Lippmann (a socialist who invented the concept of 'stereotype' as it is now understood in modern psychology), who has been called the 'Father of Modern Journalism' by some commentators.[1]

Bernays professed to hold as a core concept the role of ethics in propaganda. Until the end of his life, he never felt that propaganda was a means to deceive, but instead a means to inform or educate, thereby shaping public opinion. He never wavered in his belief that he was always on the side of good and right. Among other things, his later book Propaganda (1928) was his attempt to rehabilitate the term propaganda from being synonymous with deceit and lies to its original meaning of educating. Ironically, Bernays' work for the U.S. Committee on Public Information (CPI) was part of what helped lead the US public to think that propaganda meant deceit and lies. That meaning still prevails today.

In the introduction to Crystallizing Public Opinion by Stuart Ewen (2011), Ewen observes that "In many ways, the experiences of the First World War challenged many mainstream intellectuals' faith in the possibility of direct democracy.[2] The propaganda efforts of the CPI reinforced a growing belief that ordinary men and women were incapable of rational thought. For democracy to work effectively, public opinion needed to be guided by what historian Robert Westbrook has characterized as 'enlightened and responsible elites.'"

As Bernays alludes to in Crystallizing Public Opinion, basic definitions can be basically impossible to articulate. Thus, what’s an ‘enlightened and responsible elite’ to one person can easily be an uninformed and irresponsible dolt to another.

Nuts and bolts: Crystallizing Public Opinion is a short, easy to read book (155 pages). This book review is based on the edition with an excellent 30 page introduction by Stuart Ewen (2011). For anyone interested in politics and the science of politics, this book is highly recommended. It provides an outstanding history and context for modern American politics and commerce in the words of a key influencer.



Footnote:
1. Lippmann was pivotal in convincing president Wilson to establish the Committee on Public Information, which rejected the term propaganda. The CPI considered its content to be educational and based on facts with no other argument involved. History has shown that self-delusion to be blatantly false. Lippmann worked with Bernays on the CPI.

2. It’s not clear if Ewen really means true direct democracy in the old Athens Greece sense or whether he refers to American indirect democracy.

B&B orig: 5/18/17

Book review: The Political Mind



CONTEXT: Dissident Politics advocates a pragmatic brand of politics that is focused on applying less biased versions of facts and logic in service to a competition-of-ideas-based vision of political morals and the public interest. The point was to see if it was possible to develop a plausible science-based ideology that is more rational and conscious reason-driven than existing ideologies. Conceptions of dominant American ideologies, e.g., liberalism, conservatism, socialism and capitalism, are based primarily on unconscious, reflexive and intuitive-emotional-moral perceptions of reality and thinking that distorts fact and logic. The pragmatic ideology concept arose mostly from personal observations of American politics and study of the biology of politics, mainly cognitive and social science research on politics and human cognition. Although the pragmatic ideology was internally consistent and logically defensible, cognitive and social science kept pointing to an astonishing weakness of objective fact and logic as (i) a persuasive force, and (ii) a rational core for any political ideology. That disconnect prompted more study of the modern cognitive and social science of politics. The Political Mind was part of that effort.

BOOK REVIEW: Cognitive linguist George Lakoff wrote The Political Mind: A Cognitive Scientist's Guide To Your Brain And Its Politics, which was published in 2008 and 2009 (Penguin Books, New York, NY). Lakoff's central hypothesis argues that reliance on "Old Enlightenment" (OE) visions of conscious reason (fact- and logic-based) is detrimental to the defense of democratic values.

Lakoff argues that OE incorrectly assumes that reason is, among other things, conscious, universal (same for everyone), logical (consistent), unemotional, self-interested and literal or disembodied, where mind logic fits world logic. Instead, reason is unconscious and emotion-dependent, inconsistent, embodied and not universal. He argues that unconscious thought itself is reflexive (automatic and not consciously controlled), while conscious thought is reflective (consciously controlled).

Lakoff's argument that reason is embodied, not disembodied, seems to cast doubt on pure logic as a persuasive source of moral authority, if one assumes that people's cognitive biology cannot be overcome.

Lakoff is a staunch liberal. He sees the rise of conservative messaging and political influence as a direct and profound threat to American democratic values and the moral mission of government, which is protecting and empowering the public. According to Lakoff, “the radical conservative political and economic agenda is putting public resources and government functions into private hands, while eliminating the capacity of government to protect and empower the public. . . . The Old Enlightenment reason approach not only fails, it wastes effort, time and money.” In other words, facts alone are ineffective.

He goes on to explain: “Politics is about moral values. . . . . Most of what we understand in public discourse is not in the words themselves, but in the unconscious understanding that we bring to the words. . . . . our systems of concepts are used to make sense of what is said overtly. . . . . The very use of the left-to-right scale metaphor serves to empower conservatives and marginalize progressives. . . . The left-to-right scale metaphor is not harmless. It is politically manipulated to the disadvantage of American democratic ideals.”

There is no ambiguity about Lakoff's politics. He explains at length the power of framing issues in progressive and conservative frames to influence progressive and conservative modes of thinking. His core argument is that when a progressive accepts a conservative frame of an issue, the progressive is at a disadvantage, or maybe even concedes the issue to the conservative point of view. Framing examples that Lakoff cites include viewing illegal immigration, for conservatives, as a matter of dealing with "illegal immigrants", while it ought to be progressively framed as a matter of illegal employers and/or consumers. Similarly, health care isn't a conservative matter of health care "insurance"; instead it's a progressive matter of government's central moral role in protecting and empowering its citizens.[1]

Based on the science, Lakoff argues that American politics amounts to a competition for minds based on messaging to or leveraging two fundamentally different progressive and conservative moral modes of thinking. Those thought modes are based, among other things, on different sets of moral beliefs and personal social identity. The core progressive moral value is empathy and what flows logically from it. As applied to government, Lakoff argues that empathy underpins democratic values of protection and empowerment of citizens. His vision of the conservative view is that fact and logic play a far less important role than is the case for progressives. That seems to imply, for whatever reasons, that relatively more reliance on fact- and logic-based conscious reason leads to better politics and outcomes than less reliance.

Where progressives fail is in their failure to abandon OE conceptions of reason, fact and logic and to embrace a New Enlightenment (NE) conception of reason that accounts for the cognitive biology of political and moral thought. Lakoff's vision of NE holds that it is rational (conscious), embodied, emotional, empathetic, metaphorical and only partly universal. NE reason (1) incorporates emotion that's structured by frames, metaphors, images and symbols, and (2) requires a new philosophy of morality and politics because the brain isn't neutral or a general purpose computer. Human cognition is severely limited in what it can make sense of. Much of what is perceived is filtered through frames, metaphors and symbols to simplify the cognitive load of making a complex world fit into a specific personal understanding of the world. In short, everyone's reality is different, in significant part because their morals are different.

Questions: Is Lakoff’s argument persuasive that “there are no moderates” and the only modes of political thinking that exist are either progressive or conservative for any given issue?* If that’s true, how can one account for the pragmatic, not progressive and not conservative mind set reflected by superforecasters that cognitive scientists have detected among a few otherwise normal people (maybe 0.1% to 0.01% of the adult human population)? Is B&B barking up the wrong tree by downplaying emotion and relying on the OE vision of reason, fact and logic, e.g., the evidence is that objective fact and logic are not effective persuaders? Should fact- and logic-based conscious reason in politics lead to better outcomes in the long run? If so, why, and if not, why not?

* Lakoff argues that people are rarely or never all progressive or conservative in thinking about all issues. For some issues progressive thinking dominates, while conservative thinking dominates for other issues.



Footnote:
1. Lakoff observes that about one-third of private health care cost is for profit and administration; Medicare spends 3% on administration and none on “profiteering”. He cites a short taped conversation between President Nixon and his aide John Ehrlichman regarding a new trend among health care insurers. The gist of the conversation:
Ehrlichman: Incentives favor less medical care; the less care they give, the more money they make.
Nixon: Fine.
E: The incentives run the right way.
N: (admiringly) Not bad.
Lakoff argues that here, Nixon was identifying with the conservative morals of individual responsibility (be prosperous) and making money any legal way, i.e., raising barriers to health care to increase profit. From that moral point of view, it was a great idea. The progressive moral of empathy and protection for consumers wasn’t part of the thinking. Lakoff argued that’s not a matter of callousness by Nixon, but instead it’s a matter of differing morals shaping unconscious thinking and beliefs.

B&B orig: 5/26/17