Humans idolize, suffer, rejoice, kill and die in the name of morality. We are a hypersocial species -- nearly everything we do involves other people. So it's not surprising that the rules governing our interactions -- what we owe to one another and how we ought to treat transgressors -- occupy a prime spot in the human psyche. Benjamin Voyer and Tor Tarantola, editors, Moral Psychology: A Multidisciplinary Guide, page 1
For most of its history, philosophical moral psychology has been in bad shape. People were asking the right questions, but their methods were questionable; rampant speculation was revised in light of pure guesswork; guesswork had to be amended on the account of arbitrary superstition; superstition was corrected by flimsy moralizing, and the whole thing was rounded off by a healthy dose of wishful thinking. Philosophical theories of human nature had to state how human beings ought to be, rather than how they actually are. .... It is not a good idea, generally speaking, to speculate about the nature of the moral mind without systematically investigating how the mind works. Why philosophers failed to appreciate this rather obvious truth is something I can only speculate about myself. Hanno Sauer, Moral Psychology: A Multidisciplinary Guide, page 3
Context
Current hypotheses about how the human brain-mind works when dealing with politics posit that we do not do much logic, but instead we reason. Reasoning can include some logic, but it is usually far more influenced by psychological and social factors, including morals, beliefs, ideology, religion, self and tribe identity, innate and learned biases such as motivated reasoning, social context, sex, race, language, life experiences and so forth. Most reasoning is unconscious, and the conscious mind defends what little we become consciously aware of. Conscious reasoning did not evolve to find truth or logic. It evolved to defend what the unconscious mind believes. Reasoning is what can lead some people to believe the Earth is flat, climate science is a hoax, aliens control our minds, and vaccines do not prevent or treat infections but instead cause mental illness and disease.
Some researchers now believe that morals are central to people's political beliefs and behaviors. Emotions and feelings are deeply entwined with reasoning, and they shape what we think we see and what we think, even when perceptions are false and thinking is flawed. For the most part, humans are not mostly rational creatures. Most of us, maybe about 98%, are much more intuitive, emotional, moral and social creatures than rational or logical ones. That is what we are from our evolutionary heritage.
If one accepts that as basically accurate, it raises the question of whether moral truths exist, what they are if they do exist, and how one can find them, assuming they can be found.
Chapter Review
The first chapter of the 2017 book Moral Psychology: A Multidisciplinary Guide (edited by Benjamin Voyer and Tor Tarantola) is entitled Between Facts and Norms: Ethics and Empirical Moral Psychology. Chapter 1 tries to pin down whether there are moral truths and where modern science stands in the quest for them. It was written by Hanno Sauer, a philosopher at Utrecht University in the Netherlands.
This book was published a year after S. M. Liao's excellent book Moral Brains: The Neuroscience of Morality, which I reviewed here. Like Moral Brains, Moral Psychology integrates philosophy with modern empirical science on morality and assesses where we stand in trying to study human morality empirically.
Also like Moral Brains, Moral Psychology is an academic book and not easy for a lay audience to understand. It is intended to breach the silos that researchers in different disciplines tend to be stuck in, e.g., sociology, evolutionary biology, neuroscience, anthropology, psychology, analytic philosophy, etc. The book attempts to familiarize scientists in different areas of morality research with the progress and technical language of related disciplines.
This line of research strikes me as one of the most complicated and subtle endeavors that humans can undertake. It still isn't clear whether humans will ever be able to figure morals out or find moral truths that are more or less universal. This chapter summarizes the approaches that scientists have employed to try to understand morals and moral thinking.
Sauer's chapter focuses on what he calls the gap. The gap is what separates philosophical accounts or theories of what moral judgment is from what empirical science understands us to be. We just cannot get from what is (what we are) to what ought to be (what a true moral value says we should be).[1] Sauer concedes that some modern philosophers now believe that empirical research on morality is doomed to fail because no one has ever been able to convincingly show how to get from what we are to what we ought to be. Philosophers are skeptical that empirical evidence of what we are reveals moral truths.
Neuroscientist Sam Harris argued in his 2010 book, The Moral Landscape: How Science Can Determine Human Values, that science will eventually be able to find universal moral values. Harris has been ferociously attacked for his belief and the thinness of the supporting data he relied on.
Because Sauer's summary of where the science stood as of 2017 is brief (18 pages) but dense, I will try to explain just two of the points he raises. I hope they give a glimpse of just how complex and tricky research on human morality is.
Point 1 - free will
Research from the 1980s found that, in laboratory experiments, humans make decisions unconsciously before we become consciously aware of having made them. We operate under the illusion that we consciously make decisions at the instant we become aware of them. We decide things about 0.4 to about 10 seconds before conscious awareness. That research has been repeated and verified dozens of times and is no longer questioned. It led many experts to conclude that humans have little or no free will, if one defines free will as something the conscious mind controls.
The moral implications of that were huge. The argument is that since we lack free will, we cannot be morally responsible for our bad acts. Thus, punishments for crimes arguably are misplaced because criminals have no moral culpability for their bad acts. Empirical evidence shows that people who do not believe in free will are more aggressive and tend to cheat and lie more than other people.
Other research indicated that (1) we often do not understand why we make moral decisions, and (2) we often make up reasons that in fact have no logical connection with the decision. That also supports the idea that we don't have much, or any, free will. Sauer comments that "people can have a sense of agency [moral control] when their agency couldn't possibly have made a difference and are more than happy to come up with reasons for their actions that couldn't possibly have a role in what they did."
Although the empirical evidence looks solid on its face, deeper thinking calls the 'no free will' interpretation of the data into question. The timing experiments only show when a decision was made, not what the decision was. Also, the lab decisions were trivial, e.g., push a button or not. More consequential decisions are often accompanied by some conscious thinking about what to do before doing it. Some evidence supports the idea that in the time between an unconscious decision and conscious action on it, the conscious mind can veto what the unconscious mind decided. Sauer says that we may have some conscious control, indicating that we have some free will: "An unfree will may not be so hard to swallow if we have at least a free unwill."
Point 2 - evolution
Some experts have argued that evolution dictates what is moral and what isn't. For the most part, we evolved to try to avoid pain, punish bad acts and cheaters, care for family, reciprocate favors and so forth. Therefore, those evolutionary traits define universal moral truths. That sounds reasonable. But is it?
Sauer says no: "Evolutionary pressures select for traits which are adaptive; but unlike in the nonmoral case, where false beliefs can get you killed, moral beliefs don't have to be true to allow you (and your genes) to survive." In other words, it would be pure chance if our moral beliefs coincided with objective moral truths. Evolution shapes how we make moral judgments, but moral judgments have no necessary connection to their objective truth. That's the gap again.
On top of that problem, there's modern society and technology to consider. Our moral mindsets evolved in very different times under very different conditions: "Our intuitive morality has been shaped to meet the demands of stable, intimate, small-scale tribal groups in the Pleistocene (starting about 2.6 million years ago and lasting until about 11,700 years ago). We are ill-equipped to deal with environments very unlike this one -- namely, the one we currently happen to live in."
Conclusion
This area of research is in its infancy. Researchers are just beginning to integrate information flowing from different disciplines into an understanding from which better informed theories of moral truth can flow. There is one point that Sauer makes that I have been harping on for years: people need a better understanding of themselves and of the human mind and condition. If we are self-aware, we can at least hope to tamp down some of the reality- and reason-distorting biases and heuristics[2] our minds use to simplify the world into something we believe we understand, whether that belief is true, false, ambiguous or mixed. Sauer argues that we can do that if "we know how, why, when and under what conditions they operate." Those biases include our moral beliefs and a host of other psychological factors.
Sauer concludes with this:
"In fact, we have no way of knowing, in general, what causes our thoughts and desires, and our folk theories of how our thinking works are often hopelessly inadequate. Empirical research is essential for this reflexive purpose, and ignoring or dismissing it is reckless and foolish."
Footnotes:
1. Candidates for moral truths include (1) not lying, not cheating and not stealing are good, (2) God says that X is good, or (3) Y is good for society. Despite that, lots of people are liars, cheaters or thieves, and people cannot agree on what they believe God says is good or on what is good for society. The property of being good cannot be reduced to other tangible properties or realities. Arguments that discrimination against women and selfishness are good because women have always been discriminated against and we evolved to be selfish are not logically established, due to the gap. What we are does not define what we should be.
2. This chart shows some of the biases and heuristics the human mind unconsciously uses to make the world understandable and acceptable. Those mental data-processing operations tend to distort reality and reasoning whenever facts or logic contradict a person's mindset or beliefs, morals, self-identity, tribe identity, etc. In essence, we unconsciously distort reality and reasoning, and we see a distorted reality that is better aligned with what our unconscious minds want to see.