The 2021 book, An Introduction to the Cognitive Science of Religion: Connecting Evolution, Brain, Cognition, and Culture, is a college undergraduate textbook by Claire White. Chapter 8 is entitled Morality. As the book’s title indicates, research on morality and religion is multidisciplinary. My main interest is the science of morality as it relates to politics, and what progress the field has made in the ~25 years since I became aware of the high importance of morality.
From what I can tell, the field has not progressed much. What progress there is seems to be a modestly increased understanding among some researchers of how little they know compared to what they used to think they knew. But that is not all bad. Dispelling misunderstandings is necessary for researchers to progress by reassessing their research and data.
Some points White makes about current beliefs among experts:
- Morality is believed to have arisen from the human need to cooperate for survival, a human cognitive trait that is believed to predate religion. Children show signs of moral understanding and behavior at an early age, indicating that moral impulses are at least partly hardwired early on.
- White and experts are unable to define morality, which arguably (in my opinion) makes it an essentially contested concept: “Although scholars differ in how they conceptualize the term morality, it is used here in a broad sense to refer to standards or principles about right or wrong conduct. ... Scholars seem to circumvent the definitional problems of studying morality by investigating ‘prosociality’ to mean behavior that furthers the interests of a particular group. Yet scholars are seldom explicit about what, precisely, their use of the term prosociality designates. The lack of upfront conceptualizations is especially problematic because, depending on which definition is used, the same behavior can be labeled as prosocial or not. For instance, murder and even genocide can be viewed as prosocial according to the evolutionary conceptualization of the term because they [at least sometimes] facilitate success in intergroup competition.”
- Theists and non-theists show some overlap in moral beliefs, but religious social and moral influences sometimes make theist morality manifest differently from non-theist morality. Specifically, theists tend to be less trustful of members of other religions, and even more distrustful of non-theists. Theists tend to direct their moral impulses and behaviors toward members of their own group. That sometimes leads to social division. By contrast, non-theists tend to apply their moral values universally to all people. Presumably the non-theist moral mindset and behavior tends to be less socially divisive, but White does not comment specifically on that point in Chapter 8.
White discusses one bit of possible progress, namely a view by some experts that morality is not a unitary cognitive whole; instead, moral judgments and beliefs are fragmented into different moral domains or values. Responses to those values are triggered by different things and weighed differently by different individuals. This line of thinking goes back to 2004, when Moral Foundations Theory (MFT) was initially proposed. MFT hypothesized the existence of several basic moral domains or values: care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation, with liberty/oppression added later.
Values of equality and proportionality were added to MFT in 2023. Different people display different sensitivities to, or even degrees of rejection of, those moral values in their reasoning, beliefs and behaviors.
An overlapping variant of MFT that White calls Morality Fractionation has been proposed to broaden the MFT concept to include many more moral values or domains. MFT constitutes one specific implementation or example of the broader concept of morality fractionation. Both convey the general idea that distinct moral values, domains or mental modules exist.
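To make the fractionation idea a bit more concrete, here is a minimal toy sketch in Python of what “distinct values, weighted differently by different individuals” might look like. The foundation names follow the MFT list above; everything else (the moral_reaction function, the two hypothetical individuals, their weights, and the scenario scores) is invented purely for illustration and does not represent any published model, questionnaire, or dataset.

```python
# Toy sketch: "fractionated" moral values as separate, individually
# weighted foundations. Names follow the MFT list discussed above;
# all numbers are invented for illustration only.

FOUNDATIONS = [
    "care/harm",
    "fairness/cheating",
    "loyalty/betrayal",
    "authority/subversion",
    "sanctity/degradation",
    "liberty/oppression",
]

def moral_reaction(sensitivities, scenario):
    """Combine a person's per-foundation sensitivities with how strongly
    a scenario triggers each foundation. Collapsing this to one summed
    score is a gross simplification; the point is only that the same
    triggers can produce different reactions in different people."""
    return sum(sensitivities.get(f, 0.0) * scenario.get(f, 0.0)
               for f in FOUNDATIONS)

# Two hypothetical individuals with different sensitivity profiles.
person_a = {"care/harm": 0.9, "fairness/cheating": 0.8, "loyalty/betrayal": 0.2}
person_b = {"care/harm": 0.4, "loyalty/betrayal": 0.9, "authority/subversion": 0.8}

# A hypothetical scenario that mostly triggers loyalty and authority.
scenario = {"loyalty/betrayal": 1.0, "authority/subversion": 0.7}

print(moral_reaction(person_a, scenario))  # 0.2  -> weak reaction
print(moral_reaction(person_b, scenario))  # 1.46 -> much stronger reaction
```

Again, this is only a cartoon of the general idea, not a claim about how MFT or the fractionation hypothesis is actually operationalized in research.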
My suspicion is that some or many individual moral values or domains at least partly overlap with other moral values or domains. If so, sensitivity to, or rejection of, one moral value may be able to influence a person's reaction to the triggering of a different value. That is a very messy state of affairs if it is real.
Germaine’s commentary & hubris
The cognitive science of morality still struggles with simply describing and understanding what morality is. As best I can tell, cognitive science is nowhere close to having any authoritative, or at least near-universal, prescriptive theory of how people ought to make moral judgments, much less what those judgments should be under various circumstances. My suspicion is that this goal will turn out to be something that humans can never achieve. If nothing else, (1) essentially contested concepts, (2) differing personal and cultural norms, and (3) dark free speech all stand squarely in the way of a universal moral theory.
There are a few semi-universal beliefs that White mentions, e.g., don’t murder or steal, but in my opinion that is infantile in its shallowness. It is inadequate, to put it mildly. My sense of White, maybe significantly reflecting the mindsets of experts generally, is that her mind is trapped by her academic circumstances and the cramped, constraining sociology and history of the cognitive science of morality. Assuming most experts want to improve and/or sustain a happy and peaceful human condition, I suspect that the human history of morality blinds experts to what the most important moral values actually are in modern societies. The experts keep looking back to the apes and young children. Instead, they need to look at least as hard, probably a lot harder, at modern adults as individuals and in groups, tribes, nations and political mindsets. Yes, political mindsets. (I'll circle back to this assertion shortly.)
Despite science’s limited knowledge and excruciatingly slow progress, I find White’s overview of morality in Chapter 8 rather comforting. I strongly suspect that morality fractionation is on the right track and that MFT is a part of the story, maybe a big part. But I admit to having serious bias in favor of the more recent morality fractionation hypothesis. What bias?
Two kinds of bias. First, the same bias that I asserted (without evidence) most experts have, i.e., a general desire to improve and/or sustain a happy and peaceful human condition. I bet that if a survey were done, at least 95% of experts would agree with that assertion, larded with essentially contested concepts as it is. If that is true, and I bet it is, then there is no choice but to consider fundamentally different political mindsets as a central focus of the cognitive science of morality. What different political-moral mindsets? Pro-democracy and pro-authoritarian. Politics and morality cannot be separated. That is an inherent part of the human condition.
My second bias is that the moral foundations of pro-democracy and pro-authoritarian mindsets are fundamentally different. Generally speaking, and as supported by human history, including the modern American MAGA movement, democracy exists and sustains itself only on the basis of reasonable acceptance of facts, true truths, sound reasoning and reasonable compromise (polluted with biases and moral judgments as those factors may be). All major forms of authoritarianism I am aware of (autocracy, plutocracy and theocracy) oppose or reject reasonable compromise, facts, true truths, and sound reasoning, especially when they are inconvenient.
From that point of view, and in view of the morality fractionation hypothesis, there is good reason to think that (i) support for reasonable compromise, and (ii) fidelity to facts, true truths, and sound reasoning (i.e., being less irrational) are core moral political values, separately or overlapping.
Yeah, that is hubris, but I think it is basically correct.
My pragmatic rationalism anti-ideology ideology easily fits into the morality fractionation hypothesis. Fitting it into MFT is messier, but probably doable. It is possible that in time data from new research will collapse MFT and the morality fractionation hypothesis back into the earlier unitary morality hypothesis. But at present, that strikes me as unlikely.
For better or worse, we’re still awfully ignorant. But, we ought to have a better grasp of the cognitive science of morality in another 20-30 years. Of course, I said that in 2017 in my original review of S.M. Liao’s 2016 book, Moral Brains: The Neuroscience of Morality (review reposted here in 2019).
Is it just me, or is the science of morality in a slowed time warp? . . . . Is morality science something humans can even coherently study? . . . . . grumble, grumble . . . . . . 🤨