Sunday, August 4, 2019
Michael Walzer on Inequality and Social Justice
Distributive Justice concerns the best way to allocate goods and services in a society or political community. Democracies, in principle, are egalitarian. But in what sense are all citizens entitled to goods and services on an equal basis? The first way of answering that question is called the principle of strict equality. Here it is held that citizens should have the same level of goods and services because they have equal moral value as members of the political group. The usual index for measuring this type of equality is income and wealth. Another measure of inequality is lack of equal opportunity (e.g., the opportunity to get a good education regardless of race or gender). But it is generally income and wealth disparities that are used as indicators of social and economic inequality, both in newspapers and in political theory.
Michael Walzer, in his book Spheres of Justice, doesn't think disparities in income and wealth are, in and of themselves, the causes of social inequalities, and so he defines the goal of distributive justice not as strict equality but as what he calls complex equality. Walzer says there's nothing wrong with some people being wealthier than others on the basis of competitive practices on the open market, as long as the resulting income and wealth disparities are compatible with social justice. How can a capitalist market be made compatible with social justice? By making sure that the marketplace remains only one social and political sphere of goods in society among several others of equal importance. The question isn't how to equalize (or nearly equalize) income and wealth, but rather how to render income and wealth inequalities harmless in terms of their effect on access to those goods our culture deems necessary to all members of the political community, i.e., what philosophers have often called the Common Good. He outlines eleven goods, which include membership (e.g., citizenship), needs (e.g., security and health), education, kinship and political power. We will look at one or two to get an idea of how this is supposed to work. Those goods that can be left to the marketplace are called commodities (and services).
Drawing on history, Walzer discusses the case of a railroad magnate, George Pullman, who built an entire town he named after himself: Pullman, Illinois. The town had factories, a library, medical facilities, etc. Housing was not for sale but rented. All plant workers had the option to live there. But Pullman was, essentially, the CEO of the town, making all decisions except those concerning public education. In classical economics, property or ownership goes together with sovereignty. But a "town" in the US of the 1880s (and still today, of course) was considered a public democratic entity like a democratic "Polis" or city-state in Ancient Greece, not a piece of property to be bought, traded and sold. As such, townships are defined as being beyond the reach of the marketplace. Indeed, the Illinois Supreme Court ruled that Pullman had to divest all but his business properties. Towns must be organized on the basis of democratic principles in the US. Political power is not distributed on the basis of ownership, but on merit as recognized in public elections. We don't end up with CEOs of towns but with elected mayors. All the legal protections of the state must apply to the town. No one can just carve out a township in the likeness of a feudal fiefdom, because "towns" are culturally defined as being democratic structures here. They are plugged into the democratic political community with its shared values, meanings and norms.
Cultural definitions of the Common Good also change over time. In the US there has been increased sensitivity to the need for provisions to meet the needs and interests of all members of the political community. For example, there was a time when protection against the ravages of fire and other forms of natural devastation was not guaranteed by the state. If you wanted protection against fire, you had to pay the fire brigade or else they might not put out a fire on your property. Similarly, police protection was minimal, and those who could afford to do so often hired security guards with broad rights to use weapons to protect clients. Indeed, the shootings that occurred at more than one workers' strike were carried out by private security forces such as the Pinkertons. Our citizenship needs now include the expectation of public fire departments and police departments. The law is presumed to be egalitarian in principle (if not always in practice). Law enforcement agencies and fire departments operate in a way that goes beyond the logic of the market: a way that addresses our needs as members of society. So fire and police protection has to be distributed without special consideration for the rich and powerful, in principle.
Public education emerged as a public good in the 19th century as well; its cultural meaning was changed from being a luxury to being a necessity, part of the Common Good. Walzer argues that today healthcare is defined culturally in much the same way that police, fire protection and public education were defined in the 19th century: as goods whose distribution should not be affected by the level of wealth or income any particular person or group has. The general principle of this "Complex Equality" (in which commodities are left to the market and culturally defined social goods must be distributed equally) is: "No social good X should be distributed to those possessing some other good Y for that reason (their possession of Y) and without regard to the meaning of X." So if X is public education and Y is money, I should not expect to get education just because I have money, and for that reason alone. The same should hold for access to healthcare, decent education, clean air and fresh water, and many other things that are valued in ways that transcend the logic of the market.
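The quoted rule can be read as a constraint on allocation procedures. Here is a minimal toy sketch in Python (entirely my own illustration of the rule as stated above, not anything from Walzer's text; the goods, numbers and rules are invented): access to a social good X, here education, should not be decided by a person's holdings of a different good Y, here money.

def market_rule(person):
    # Violates complex equality: the social good tracks money, a different good.
    return person["money"] > 50_000

def needs_rule(person):
    # Consistent with complex equality: allocation tracks the meaning of the
    # good itself (schooling for school-age members), not wealth.
    return person["age"] < 18

people = [{"money": 100_000, "age": 10}, {"money": 10_000, "age": 10}]

print([market_rule(p) for p in people])  # [True, False]: wealth decides access to X
print([needs_rule(p) for p in people])   # [True, True]: wealth is irrelevant to X

The contrast only shows where the two logics diverge: under the first rule, possession of Y (money) determines the distribution of X; under the second, only the culturally defined meaning of X does.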
While there will be income and wealth disparities, these should not interfere with the logic of the community, which is normative and transcends that of the market. In order to implement Complex Equality, it may be necessary to introduce progressive taxation, not because such redistribution is intrinsically right or fair, but only because it subserves the ends of distributive justice.
But just as cultural meanings and norms have changed in ways that favor the provision of education and healthcare, couldn't the norms swing in the other direction? Is shared meaning or presumed value consensus really a strong enough principle for ensuring the common good in society? If Social Darwinism or minimal-state libertarianism becomes fashionable in 10 years, and cultural meanings and norms change accordingly, then should we cease to provide equally high-quality education, police protection, etc.? Further, this culturally relative way of supporting social justice makes it hard to imagine what we could say to foreign countries should their norms be undemocratic. Indeed, Walzer rules out all authoritarian and totalitarian systems a priori, fully aware that on his own account they lack the cultural meaning systems required to address what we identify as gross inequality.
In a later book, Walzer will try to answer the critics who charge him with a deleterious form of cultural relativism. I will cover that in a follow-up post in the near future. For now, Walzer may at least have found a way to steer a middle course between Welfare-Statism and a situation where the logic of the market is extended to all spheres, even the ethical ones, thus making distributive justice problematic. He may also have steered a middle path between the unrealistic abstractions of much political philosophy and the view from the street. But you be the judge.
How to do Science???
Post by dcleve
This discussion is the third in a series of three related discussions; the first is The Münchhausen Trilemma, and the second is A Missing Foundation to Reason?.
The question of what science and empiricism should consist of, and what alternative views on this are credible, is pretty important to me, and I think of interest to this board. I try to dialog with other thinkers, and for me a crucial question is whether one can do science about metaphysics, and/or use science conclusions to derive metaphysical views. Can one use reasoning and evidence to evaluate what might or might not be true about the metaphysics of our world? Whether this is possible or not is, I think, a pretty significant question. I do this, using both reasoning and empiricism, and I consider this combined suite of tools to be metaphysical naturalism. The method I use for empiricism is the Indirect Realism which I consider common across all of science.
I undertook to investigate the credibility of the Indirect Realism model which I use, and what its possible alternatives are, and I will discuss what I think I discovered in my (possibly incomplete) research on how to do science. My focus here is primarily on dualism/physicalism, and will later extend to God/spirit questions, as these are the major questions I hope this discussion will support.
What I will try to describe is both a worldview and a methodology. The methodology is hypothetico-deductive empiricism. I consider this methodology to be widely generalized in human practice, and to be formalized in the sciences. There are several key features to this methodology (a toy code sketch of the resulting loop follows the list):
• Observations generate preliminary “facts”
• Speculation about those “facts” and the postulation of possible/plausible explanations
• Recognition that the truth/falsity of those explanations is not directly determinable (provided they satisfy logic and coherence criteria), and that there can be many such explanations of potentially significantly different forms (theory is underdetermined by evidence)
• Evaluation between possible explanations is done by testing, by making predictions with them, and seeing if those predictions work, or not.
• While an explanation may end up very, very well supported, it is still not certain, and never will be.
• “Facts” could be primary data, but need not be, and generally are not. Most observations are heavily theory-laden, and hence almost all “facts” are actually just well-supported explanations.
• Facts and explanations therefore build upon each other, creating a complex multi-tier structure of assumptions, upon assumptions, upon assumptions, many times over. Most questions of interest rely upon an extensive and often unidentifiably complex tier of these assumed explanations, which have proven highly reliable, but none of which are certain.
• Note that this makes most "facts" themselves fallible. Uncertainty and fallibilism are baked into this methodology. This methodology is, and has to be, robust enough to accommodate error.
• This methodology is “foundationalist” in structure, but does not rely on unquestionable first principles or observations. It is compatible with reductionism and also with emergence, or other alternatives.
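To make the structure of this loop concrete, here is a toy Python sketch (my own illustration, with invented data and candidate explanations, not a formal account of scientific method): several candidate explanations make predictions, the predictions are checked against observations, clear failures are set aside, and whatever survives is retained only provisionally.

# Invented observations: (x, measured y)
observations = [(0, 0.0), (1, 2.1), (2, 3.9), (3, 6.2)]

# Invented candidate explanations, each making a prediction for y given x
candidate_explanations = {
    "y = 2x":      lambda x: 2 * x,
    "y = x^2":     lambda x: x ** 2,
    "y = 2x + 10": lambda x: 2 * x + 10,
}

TOLERANCE = 0.5  # how far a prediction may miss before it counts as a failure

surviving = {}
for name, predict in candidate_explanations.items():
    # An explanation survives only if every prediction matches observation
    # within tolerance; one clear miss is enough to set it aside.
    if all(abs(predict(x) - y) <= TOLERANCE for x, y in observations):
        surviving[name] = predict

print("Provisionally retained:", list(surviving))  # ['y = 2x']
# Survival is not proof: a future observation could still refute "y = 2x",
# and with fewer observations more than one candidate would have survived
# (underdetermination). Certainty is never reached.

Nothing in the sketch certifies the survivor as true; it only records that it has not yet failed, which is the fallibilist point of the bullets above.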
The worldview is:
• realism – there is an “out there”.
• It is indirect realism, in that we don’t have access to most of the reality of what is “out there” and must infer/postulate it, with our “explanations”.
• These explanations, as one applies successive processes of test, refutation, and revision, are gradually bringing us closer to a “true” understanding of what is “out there.”
• We do have limited direct access, and that access is our consciousness.
• Consciousness gives us knowledge of essential qualia, and provides us with a starting point for reasoning.
• Despite being “direct,” these too are fallible inputs: we often reason incorrectly, and we can be mistaken about qualia.
• Direct access to our qualia, and to our reasoning, is further complicated by a feature of human minds: while consciousness is critical when learning a subject, once a problem is mastered consciously, we embed that mastery in unconscious processes and are no longer aware of how we solve it. Our neurology is such that many of the models we initially develop consciously shift to become embedded as unconscious brain functions. So our “perceptions” of things like a spherical shape, functional objects like chairs, etc., are not qualia, but are post-processed integrations of qualia data into higher-level constructs. This post-processing is done by our unconscious, and the experience of these model-constructed perceptions is often overlaid upon, or supplants, the actual experience of qualia. This confuses our ability to sort out what is or isn’t “foundational” through introspection.
• This is true of multiple key complex “explanations” that our thinking relies upon. As toddlers we develop a theory of self, a theory of the world, a theory of other minds, a set of corrections to our intrinsic reasoning, and a language ability. All of these are networks of complex explanations and working models, which we may have been aware of developing as toddlers, but which have since been driven into our unconscious operating assumption set, and the memory of our constructing them is generally lost. This radically obscures our ability to recognize our qualia through introspection.
As initially articulated here, this model of the empirical methodology and of the world could lead to an underestimate of the complexity of thinking, and of the scientific process. Explanations do not just rely upon data; they come in bundled collections of interlocking explanations, which often have to be evaluated as a collective rather than individually. Those networks can also often be easily “tweaked” with secondary assumptions that affect neither predictive power nor the recognizability of the core assumptions when a contrary observation is discovered, so the importance of “critical refuting tests” can be overstated.
Both of these weaknesses (looking at questions in too much isolation, and overstating what could be shown by critical tests) were present in Popper’s early articulation of many of these ideas. These weaknesses are correctable, with a variety of tweaks to “explanations” and “falsification”. Treating explanations as a collective, with the collective including both primary and secondary hypotheses, is one method. Another is to treat explanations as a family of closely related views, not as distinct individuals. A third is to treat a family of approaches to explanations as a common “research programme”, and to look not to explicit falsifications but to exploratory utility in research, or to cumulative explanatory failings, as support or refutation. And tests can be treated not as individually definitive, but as contributors to either consilience or incoherence, as measures of the goodness of an explanation family. These alternatives use somewhat different terms and methodologies to evaluate science claims, but all follow variants of hypothetico-deductive/falsification-based thinking.
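As a rough illustration of that last alternative (my own toy sketch, not drawn from Popper, Lakatos, or any other specific author; the family names and test records are invented): rather than treating one failed test as a decisive refutation, one can track families of related explanations and compare their cumulative records of predictive success, a crude stand-in for consilience.

from collections import defaultdict

# Hypothetical test outcomes: (explanation_family, prediction_succeeded)
test_results = [
    ("family_A", True), ("family_A", True), ("family_A", False),
    ("family_B", True), ("family_B", False), ("family_B", False),
]

record = defaultdict(lambda: {"passed": 0, "failed": 0})
for family, passed in test_results:
    record[family]["passed" if passed else "failed"] += 1

def consilience_score(rec):
    # Crude measure: fraction of predictions that worked out.
    total = rec["passed"] + rec["failed"]
    return rec["passed"] / total if total else 0.0

ranked = sorted(record, key=lambda f: consilience_score(record[f]), reverse=True)
print(ranked)  # ['family_A', 'family_B']: A is better supported, neither is proven

A single failure does not eliminate family_A here; what matters is the comparative, cumulative record, which is the spirit of the "research programme" and consilience alternatives described above.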
There are a variety of significant implications, both for science, and for overall worldview, that the above construct supports.
One of the most noteworthy is the near-universality of science. Empiricism, after all, is not limited to science. We use empirical principles when learning most subjects – such as how to kick a ball, or how to fletch an arrow. We also use variants of the hypothetico-deductive/falsification process in most aspects of reasoning. Science is then embedded in a broader view of how we humans think and interact with the world in general, and is in a continuum with the rest of practical life. This leads to no SUBJECTS being excludable from science. One of the major questions in philosophy of science has been the “demarcation problem” (where the boundary of science lies), and this POV does not delimit the boundaries of science by subject at all.
Instead, methodology is crucial. WITHIN any arbitrary subject field, one can make almost any kind of idea or explanation untestable, if one crafts it to be so. It is this lack of testability (and therefore refutability) of explanations that makes an idea non-scientific. And, given the universality of the basic methodology to all empiricism and reasoning, lack of testability then becomes a fundamental sin in any field. NO “irrefutable” explanation would then be valid in ANY empirical OR reasoning question, as testing and falsifiability are universal requirements for valid thinking in or outside science.
This was the issue that first drove Popper to try to define the demarcation: he found that both Freudian psychology and Marxist historicism claimed to be scientific, but because they were irrefutable, he realized they could not be science, or valid in any way. Subsequent thinking about both disciplines is that each COULD be articulated in ways that ARE falsifiable, but that the advocates of both chose instead to craft them as unfalsifiably flexible claims. So once more, it is not the subject, or the type of hypothesis, that is non-scientific, but instead the way it is structured by its advocates. This point clarifies a significant issue relative to empiricism and the need for falsifiability. Both Marxist and Freudian thinking claimed to do testing. BUT their tests were all confirmations, because the way they structured their theories, there COULD only be confirmatory tests! They were referencing a Humean model of empiricism, where truth is established by confirmatory tests, and one gets the greatest “confirmations” with an explanation that is compatible with all outcomes – i.e., irrefutable. This reinforces Popper’s insight: falsifiability is needed for actual science. Confirmations invite confirmation bias, which in its purest form is unfalsifiable pseudoscience.
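A tiny sketch of the contrast Popper was pointing at (my own illustration; the claims and data are invented): a risky claim can fail a test, while a claim crafted to be compatible with every outcome passes all tests and therefore tells us nothing.

def falsifiable_claim(x, y):
    # Risky prediction: y must be close to 2x. An observation can refute this.
    return abs(y - 2 * x) < 0.5

def unfalsifiable_claim(x, y):
    # "Explains" every possible outcome, so no observation can refute it.
    return True

observations = [(1, 2.1), (2, 7.3)]
print([falsifiable_claim(x, y) for x, y in observations])    # [True, False]: informative
print([unfalsifiable_claim(x, y) for x, y in observations])  # [True, True]: tells us nothing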
The boundary between science and craftsmanship, or science and philosophy, is a fuzzy one. Science is characterized by formalization of the basic methodologies used in its field, and formalization is a matter of degree. Sciences have historically developed out of other, larger fields, and the maturation of “formalism” for a particular science is an incremental development. Therefore, the boundary of science cannot be readily specified by field, or formalism, or maturity. The only well-defined boundary is with pseudoscience and irrefutability.
This model also has a bearing on a variety of significant metaphysical questions around the boundaries of science.
• A variety of claims have been made that science must assume physicalism. But in this model, physicalism is a complex explanation, built up from a network of assumptions, all of which were developed using the empirical process. I.e., physicalism is an EXPLANATION, and MUST be testable if it is to be valid. And testing physicalism cannot presuppose physicalism, hence no – science CANNOT presuppose physicalism, and such a claim qualifies as the most rejected category of claim: pseudoscience.
• Going even further than the point above that physicalism cannot be assumed: this model is in several ways intrinsically contrary to physicalism. Physicalism, after all, is inferred from more fundamental information and models, and these models are by definition NOT physicalist. Hence, for physicalism to be “true,” it has to be inferred from prior models which would then need to be re-interpreted, requiring its founding justifications to be either heavily corrected or rejected as fundamentally mistaken when seen in physicalist terms. The physicalist worldview is then in conflict with its own supporting rationale.
• Strong opinions have been expressed on both sides of whether morality or ethics can be studied (or derived) as a science. This view is supportive of the idea in concept, as all possible subjects are capable of becoming sciences. But the process of maturation and formalization of a field of study must proceed to a significant degree before that field can become a science. The question then is not whether this COULD become a science – to that the answer is “yes” – but whether the concepts and methodologies of moral thinking are mature and settled to the degree of being a “science.” To that I consider the answer to be a pretty definitive “no”!
• There is a claim that studies of non-physical fields are NOT science, by definition from the first point above. This would include parapsychology, astrology, and non-traditional medicines. By the terms of this model of science, this claim is false. All of these COULD be legitimate subjects of science. To be so, they would need to be characterized in refutable terms, and the study of them advanced to the point that a methodology and concepts can be at least partially formalized. If those fields are not science today, they can be legitimately studied, with a goal of turning them into a science.
• There is a claim that Gods, spirits, or dualism cannot be studied by science. This claim is sometimes justified by the first claim above, that science must be physical, but is also sometimes justified by the claim that none of Gods, spirits, or dualism can be characterized in a refutable fashion. This claim is also false, per the model I have outlined. These subjects are not intrinsically untestable. God claims CAN be testable – Epicurus specified one such test in the Problem of Evil. The issue is that most advocates of God and spirit claims have deliberately crafted them to be untestable. It is the advocates who are at fault here, not the basic ideas they are working with.
In evaluating alternatives to the view I have outlined, I tried looking at a variety of sources for plausible alternatives. One set of references provided me with critiques of realism. Here is a link to the SEP’s “Challenges to Metaphysical Realism”: https://plato.stanford.edu/entries/realism-sem-challenge/ . I also posted a question to Philosophy Stack Exchange about what the reasons are to question indirect realism, and the most useful answer pointed to Sellars’ “Empiricism and the Philosophy of Mind.”
I did not consider any of these references to provide a particularly useful alternative to realism. The SEP discussion is primarily semantically based, and I don’t see semantic arguments as nearly powerful enough to base any kind of metaphysics on. How, for example, could philosophers’ intuitions about whether Oscar on Earth and Toscar on Twin Earth have the same or different knowledge of H2O-water or XYZ-water provide the basis to draw ANY conclusion about reality or metaphysics? And while describing the Great Deceiver hypothesis in terms of a “brain in a vat” is logically self-contradictory, one should not draw any broad conclusion from this, when one can just as easily describe it in terms of an “evil demon,” which is not contradictory at all. And Sellars’ argument against foundationalism is similarly “semantic,” as it relies upon some linguistic forms being necessary for “knowledge” while others are insufficient. If one does not speak English, or accept his definitions, or accept his usage claim, then his conclusion that one cannot do foundationalism is simply unsupported. Also – bizarrely – both Sellars and Putnam are realists, yet they are the primary source material cited by anti-realists in these references!
My takeaway from these efforts to find a credible argument for anti-realism: the only case of consequence was the one made by poster Geoffrey Thomas in response to my Stack Exchange question, who argued that we cannot actually distinguish between indirect realism and instrumentalism.
I also reviewed the Internet Encyclopedia of Philosophy, which in its Objects of Perception entry https://www.iep.utm.edu/perc-obj/ outlines a variety of alternatives to the indirect realism I described above, at least in understanding perception. These are: direct realism, phenomenalism, intentionalism or representationalism, disjunctivism, and cognitive externalism. This page also listed three main objections to indirect realism, which were:
i. Dualism. Many see a problem with respect to the metaphysics of sense data. Sense data are seen as inner objects, objects that among other things are colored. Such entities, however, are incompatible with a materialist view of the mind. When I look at the coffee cup there is not a material candidate for the yellow object at which I am looking. Crudely: there is nothing in the brain that is yellow. Sense data, then, do not seem to be acceptable on a materialist account of the mind, and thus, the yellow object that I am now perceiving must be located not in the material world but in the immaterial mind. Indirect realism is committed to a dualist picture within which there is an ontology of non-physical objects alongside that of the physical.
ii. Adverbialism. Some see the argument from illusion as begging the question. It is simply assumed, without argument, that in the non-veridical case I am aware of some thing that has the property that the stick appears to me to have. It is assumed that some object must be bent. One can, however, reject this assumption: I only seem to see a bent pencil; there is nothing there in the world or in my mind that is actually bent. Only if you already countenance such entities as sense data will you take the step from “something appears F to you” to “there is an object that really is F.” Such an objection to indirect realism is forwarded by adverbialists. We can illustrate their claim by turning to other everyday linguistic constructions, examples in which such ontological assumptions are not made. “David Beckham has a beautiful free kick” does not imply that he is the possessor of a certain kind of object – a kick – something that he could perhaps give away or sell in the way that he can his beautiful car. Rather, we take this to mean that he takes free kicks beautifully. When one gives a mean-eye, one looks meanly at somebody else; one does not offer them an actual eye of some kind. Similarly, then, when one perceives yellow one is sensing in a yellow manner, or yellowly. Our perception should be described in terms of adverbial modifications of the various verbs characteristic of perception, rather than in terms of objects to which our perceptual acts are directed.
iii. The Veil of Perception. Indirect realism invokes the veil of perception. All we actually perceive is the veil that covers the world, a veil that consists of our sense data. What, then, justifies our belief that there is a world beyond that veil? In drawing the focus of our perception away from the world and onto inner items, we are threatened by wholesale skepticism. Since we can only directly perceive our sense data, all our beliefs about the external world beyond may be false. There may not actually be any coffee cups or olive oil tins in the world, merely sense data in my mind. However, for this to be a strong objection to indirect realism, it would have to be the case that direct realism was in a better position with respect to skepticism, but it is not clear that this is so. The direct realist does not claim that his perceptions are immune to error, simply that when one correctly perceives the world, one does so directly and not via an intermediary. Thus, things may not always be the way that they appear to be, and therefore, there is (arguably) room for the sceptic to question one-by-one the veracity of all our perceptual beliefs.
Objections i and iii constitute examples of fallacious motivated reasoning. That one WANTS materialism to be true, WANTS certainty, and WANTS to reject radical skepticism as definitively refuted are not actually valid reasons to reject indirect realism. Objection ii presupposes adverbialism, whose weaknesses I will discuss later.
There were two things that struck me in particular in this IEP discussion. First – as a pragmatic empiricist, I would really like to see a discussion of whether, if one applies indirect realism to perception, one can make predictions and pass test cases with it. And I know of three such tests – each of which has shown dramatic success. If indirect realism is true, then babies should develop the processing capabilities to build up their internal models over time, while under direct realism this should be “direct”. And babies DO build up this capability over time, with clearly observed tiers of models! Also, one should be able to sometimes see base qualia, and sometimes the digested “perceptions” with complex concepts pre-embedded, under different circumstances. And we also see this. (As an aside – the inability of our introspection to clearly reveal qualia is often cited as an argument against indirect realism. But this is no objection, as this model predicts that qualia will be only occasionally perceivable, and that they will be commingled with perception of various tiers of post-processed models and concepts.) And we should also be able – once we start dissecting the neurology – to see how our neural structure sorts and binds to arrive at the interim processing states (face detection, smile/frown, etc.) that build up the complex concepts that a well-trained neurology provides us instead of, or in addition to, qualia. And we have seen these neural segmentation and binning processes in action. These are EMPIRICAL SUCCESSES, where modern neuroscience, perceptual psychology, and child development all support the indirect realism model. When one is discussing the validity of a model about how our world works, this should be critical information! Yet this discussion mentions only one of these in passing, and dismisses it as a “weak” argument for indirect realism!
Also, the rhetoric about indirect realism not being “naturalized” strikes me as particularly silly. All that is meant is that it is not easily integrated with materialism, which has nothing to do with methodological naturalism! As none of these alternate approaches to perception involve tossing out indirect realism in physics, chemistry or biology, indirect realism is a common feature of methodological naturalism as applied to every other field. Darwin used indirect realism to infer the theory of evolution, the valence theory of chemical bonding was developed under an indirect realism approach to valences, and the quark theory of subatomic particles is also an indirect realism theory. Science, and methodological naturalism, would still rely upon the indirect realism inference process should one accept any of these alternative views of perception! So what these alternatives call for is a SPECIAL PLEADING EXCEPTION for perception from this otherwise universal methodological naturalism. I.e., it is these alternate theories of perception that are NON-NATURALIZED; only the indirect realism model is naturalized! Basically, the entire reason these alternatives are being considered at all for perception is that most of the adherents of the other views think materialism is refuted if one applies indirect realism to perception too!
Of the alternatives to indirect realism, direct realism is the most commonly held among the public at large. This view is often called common sense realism. It holds that things like chairs, the redness of apples, and the beauty of a sunset are really out there, and are directly perceived by us. Common sense realism is in explicit conflict with science: chairs are not a part of physics but are basically an invented purpose for objects that conveniently have a useful shape to sit on; apples are not red, which is easily illustrated by looking at an apple under low light or colored light; and the “beauty” aspect of a sunset is created by us and our psychological conception of beauty. In addition to science refuting direct realism, so does imagination. The stop sign in my mind’s eye right now is perceived, yet it is not real. Perception is therefore NOT direct! I do not consider direct realism to be at all credible.
Meanwhile, this reference argues that direct realism provides no more certainty against radical skepticism than indirect realism does, AND that there are materialist variants of indirect realism (one of them from Sellars, no less), so neither of the preferences motivating the rejection of indirect realism is even justified.
Phenomenalism, the idea that perceptions are all there is, and that ALL physical objects are just mental artifacts, at least passes some of the first-order refutation tests that direct realism fails. But when one tries to make predictions with it – e.g., that since we seem to experience a shared world, our phenomena should be shared – these predictions of phenomenal universality fail pretty much utterly. “Redness” is not perceived by the color blind, and the beauty of sunsets has a variable phenomenology. Phenomenalism was a popular view for decades, but it has almost disappeared due to failed predictions.
Adverbialism and intentionalism are both part of what I consider a grossly mistaken linguistic turn in philosophy in the middle of the 20th century, in which what one can say about the world was considered, somehow, to define how the world really is. Language as a precondition for perception can be fairly easily refuted by test, because most perception is non-verbal – as you can easily illustrate. Look up from your screen and just look around. THEN specify, in words, EVERYTHING you have just seen! How many pages of type would be required to describe even a single perspective from a stationary POV? And did you just perceive all of this? I hope you realize the answer is an unquestionable “yes”! Yet how much of that scene translated to words in your head while you were looking? 1%? Less? 0%!!!? A linguistically defined theory of perception is simply wrong.
Disjunctivism holds that “real” perceptions are different in kind from illusory perceptions (like my imagined stop sign), even if we cannot tell them apart. As such, it is an untestable assertion of dogma, and is pseudoscience, and simply wrong.
Of the alternatives to Indirect Realism, the one with the most current vigor is cognitive externalism. There are a variety of very interesting ideas in cognition that are based on aggressive approaches to externalism, and a hint of this can be drawn from the IEP page. A link that I unfortunately lost provided what I found to be a very useful understanding of what cognitive externalists were doing. It described them as starting philosophy “in the middle,” with humans interacting with the world. Humans, world, interaction, and cognition become starting premises from this “middle” point, and one can then build up to a theory of science, and down to a fundamental metaphysics, from there. And because of the extended nature of that middle-state interaction, this approach does not neglect the weak boundaries of human, world, cognition, etc., the way foundationalist or reductionist thinking does.
I applaud the insights that extended-cognition thinking offers about how many of our mental artifacts, such as language, are highly interactive, and I agree with its general anti-reductionist attitude. However, the “start in the middle” approach appears once more to be based on motivated reasoning. Starting from the bottom tends toward dualism (or reductive physicalism, or idealist phenomenalism). Starting from the middle – IF one specifies the middle entities one starts with “properly” – can lead to extended non-reductive physicalism, as the extended-cognition advocates mostly conclude. But if, rather than “humans, cognition, world, and interaction,” one started with “world, self, other minds, reasoning, and interaction” as one’s initial pragmatic realities, then one would plausibly arrive instead at a non-reductive Interactive Triplism, rather than non-reductive materialism. So while extended cognition leads to some useful insights about cognition itself, and strengthens the case for non-reductive worldviews, the approach used to argue for it once more looks like an unjustified “wish it were so” effort to find a special-pleading rationalization to defend physicalism.
In addition to the online encyclopedias, I also reviewed two recent, highly regarded introductory books on philosophy of science, to see if they offered some additional insight into the sorts of questions I am interested in on metaphysics, and how to infer what might be real using reasoning and evidence. These were Samir Okasha’s Philosophy of Science: A Very Short Introduction and Peter Godfrey-Smith’s Theory and Reality. Okasha’s is much shorter and a quicker read than Godfrey-Smith’s, and gives a good overview of the history and current debates in philosophy of science. Godfrey-Smith covers more subjects, and in more depth, and is also very clearly written.
Neither book proved to be of much use in clarifying the issues I have been trying to address here. Okasha DID have one of his chapters dedicated to realism/anti-realism. But that chapter was focused on realism vs. instrumentalism – and both are consistent with the hypothetico-deductive falsificationism I have outlined here – they just differ in the degree to which one can apply trust to the conclusions drawn. Godfrey-Smith, in his longer work, dedicated multiple chapters to realism and a variety of alternatives. However, he was more of an advocate than Okasha, and the POV he was advocating was a version of realism in which he supported all of: common sense realism, scientific realism, empiricism, methodological naturalism, and metaphysical naturalism. In the process, he papered over the conflicts between these views – and as the details of these conflicts are particularly significant for the issues I am interested in, his otherwise excellent reference was not of much use in this project.
However, I can use these identified shortcomings in Godfrey-Smith to elaborate on these issues myself, in ways that may help highlight issues of interest.
I mentioned earlier the conflict between scientific realism and common sense realism. Common sense realism holds that our medium-scale knowledge and experiences reflect a “real” world. This is a world in which apples exist, and are red and solid; wet things are shiny; one can be virtuous or morally flawed; someone can be a friend; and calendars specify time. Scientific realism is often presented as physical reductionism, in which apples are just an assemblage of elementary particles, mostly filled with void and fields in between; color is just a sensation ascribed to detecting a certain wavelength of reflected photons; shininess is the way coherently reflected light has a sharp angular peak and rapid drop-off; abstract objects like virtues and morality do not exist; and consciousness and any emotive feelings we have are an irrelevant byproduct of chemical reactions. Science need not be reductionist, and the dismissals of morality, emotions and abstract objects could be taken out of the above list, which would decrease the mismatch between common sense and scientific realism, but still leave a huge gap. I consider science to be committed to the non-reductive scientific realism I just described, which requires tossing much of common sense realism.
A reference which gets into this issue indirectly is Physicalism by Daniel Stoljar. Stoljar devotes most of this book to trying out, then refuting as inappropriate, one definition of physicalism after another. Ultimately, Stoljar rejects physicalism as an inappropriate remnant view left over from the 19th century, when the matter we knew of was all similar to the macro-scale objects we manipulate in everyday life; physicalism is best understood as the thesis that all matter is like macro-scale solid matter. He concludes in his final chapter that physics has revealed the world to work so differently from our macro-scale intuitions that physicalism is simply false, and that the efforts to preserve the doctrine by recasting it to include the bizarre discoveries of modern physics have robbed the term of any useful content. This may not be a consensus view of physicalism, but if one substitutes “common sense realism” in place of physicalism in this summary, there would likely be few philosophers or physicists who would argue with it. Common sense realism is falsified by modern physics, and Godfrey-Smith’s efforts to paper this over do a disservice to this question.
The other two conflicts of note are between metaphysical naturalism – which is basically the assertion of physicalism as a presupposition – and both methodological naturalism and empiricism. Empiricism starts with observations, which, foundationally, are our qualia. But the foundationalist empirical methodology, when applied to perception, tends to strongly support a dualist view, contrary to physicalism. And the presupposition of a particular metaphysical conclusion entails an unfalsifiable approach to doing science relative to metaphysics, which would make metaphysical naturalism a pseudoscience view, per the boundary definitions of methodological naturalism.
Based on my understanding of these points, one can be a scientific realist, an empiricist, and a methodological naturalist, with no conflicts. But if one accepts these POVs then one cannot be a common sense realist, nor may one ASSUME metaphysical naturalism. Physicalism could be a CONCLUSION from an investigation, but it cannot be a starting presumption.
What do you all think? Have I spelled out a consensus view of how science can and should work, and have I thoroughly and fairly explored the alternatives?
Well, this is a 9.5-page OP – I hope you folks brought your reading glasses today!
B&B orig: 7/30/19
This discussion is third in a series of three related discussions, the first is The Münchhausen Trilemma, and the second is A Missing Foundation to Reason?.
I think the question of what science and empiricism should consist of, and what alternate views are credible on this, is pretty important to me, and of interest to this board. I try dialog with other thinkers, and for me a crucial question is whether one can do science about, and/or use science conclusions, to derive metaphysical views. Can one use reasoning, and evidence, to evaluate what might or might not be true about the metaphysics of our world? Whether this is possible, or not, I think is a pretty significant question. I do this, and do so using both reasoning, and empiricism, and I consider this combined suite of tools to be metaphysical naturalism. The method I use for empiricism is the Indirect Realism which I consider common across all of science.
I undertook to investigate the credibility of the Indirect Realism model which I use, and what its possible alternatives are, and I will discuss what I think I discovered in my (possibly incomplete) research on how to do science. My focus tends to be primarily on dualism/physicalism, and will later extend to God/Spirit questions, as these are the major questions I am hoping to support the discussion of!
What I will try to describe is both a worldview, and a methodology. The methodology is hypothec-deductive empiricism. I consider this methodology to be widely generalized in human practice, and to be formalized in the sciences. There are several key features to this methodology:
• Observations generate preliminary “facts”
• Speculation about those “facts” and the postulation of possible/plausible explanations
• Recognition that the truth/falsity of those explanations is not directly determinable (provided they satisfy logic and coherence criteria), and that there can be many such explanations of potentially significantly different forms (theory is underdetermined by evidence) • Evaluation between possible explanations is done by testing, by making predictions with them, and seeing if those predictions work, or not.
• While an explanation may end up very, very very well supported, it is still not certain, and never will be.
• “Facts” could be primary data, but need not be, and generally are not. Most observations are heavily theory-laden, and hence almost all “facts” are actually just well-supported explanations.
• Facts and explanations therefore build upon each other, creating a complex multi-tier structure of assumptions, upon assumptions, upon assumptions, upon assumptions, many times over. Most questions of interest rely upon a extensive and often unidentifiably complex tier of these assumed explanations, which have proven highly reliable, but none of which are certain.
• Note that this makes most “facts” themselves fallible. Uncertainty and fallibilism are baked in to this methodology. This methodology is, and has to be, robust to accommodate error.
• This methodology is “foundationalist” in structure, but does not rely on unquestionable first principles or observations. It is compatible with reductionism and also with emergence, or other alternatives.
The worldview is:
• realism – there is an “out there”.
• It is indirect realism, in that we don’t have access to most of the reality of what is “out there” and must infer/postulate it, with our “explanations”.
• These explanations, as one applies successive processes of test, refutation, and revision, are gradually bringing us closer to a “true” understanding of what is “out there.”
• We do have limited direct access, and that access is our consciousness.
• Consciousness gives us knowledge of essential qualia, and provides us with a starting point for reasoning.
• Despite being “direct” these too are fallible inputs, and we often reason incorrectly, and can be mistaken about qualia.
• Direct access to our qualia, and our reasoning, is further complicated by a feature of human minds – that while consciousness is critical when learning a subject, once a problem is mastered consciously, we embed that mastery in unconscious processes, and are no longer aware of how we solve it. Our neurology is such that many of the models we initially develop consciously, shift to become embedded as unconscious brain functions. So our “perceptions”, of things like a spherical shape, functional objects like chairs, etc are not qualia, but are post-processed integrations of qualia data into higher level constructs. This post-processing is done by our unconscious, and experience of these model-constructed perceptions is often overlaid upon, or supplants, the actual experience of qualia. This confuses our ability to sort what is or isn’t “foundational” through introspection.
• This is true of multiple key complex “explanations” that our thinking relies upon. As toddlers we develop a theory of self, a theory of the world, a theory of other minds, a set of corrections to our intrinsic reasoning, and a language ability. All of these are networks of complex explanations and working models, which we may have been aware of developing while we were toddlers, but have since then been driven into our unconscious operating assumption set, and the memory of our constructing them is generally lost. This radically obscures our ability to recognize our qualia through introspection.
As initially articulated here, this model of the empirical methodology and of the world could lead to an underestimate of the complexity of thinking, and of the scientific process. Explanations do not just rely upon data, but they come in bundled collections of interlocking explanations, which often have to be evaluated as a collective, rather than each being evaluated individually. Those networks can also often be easily “tweaked’ with secondary assumptions that do not affect either predictive power, nor the recognizability of the core assumptions when a contrary observation is discovered such that the importance of “critical refuting tests” can be overstated.
Both of these weaknesses (looking at questions in to much isolation, and overstating what could be shown by critical tests)were present in Popper’s early articulation of many of these ideas. These weaknesses are correctable, and can be corrected with a variety of tweaks to “explanations” and “falsification”. Treating explanations as a collective, with the collective including many of both primary and secondary hypotheses is one method. Another is to treat explanations as a family of closely related views, not as distinct individuals. A third is to treat a family of approaches to explanations as a common “research programme”, and look not to explicit falsifications, but to exploratory utility in research, or cumulative explanatory failings as support or refutation. And tests can be treated as not individually definitive, but as contributors to either consilience, or of decoherence, as measures of goodness of an explanation family. These alternatives use somewhat different terms and methodologies to evaluate science claims, but follow variants of hypothetico-deductive/falsification based thinking.
There are a variety of significant implications, both for science, and for overall worldview, that the above construct supports.
One of the most noteworthy is the near-universality of science. Empiricism, after all, is not limited to science. We use empirical principles when learning most subjects – such as how to kick a ball, or how to fletch an arrow. We also use variants of the hypothetic/deductive/falsification process in most aspects of reasoning. Science is then embedded in a broader view of how we humans think and interact with the world in general, and is in a continuum with the rest of practical life. This leads to no SUBJECTS being excludable from science. One of the major questions in science has been the “boundary problem” and this POV does not delimit the boundaries of science by subject at all.
Instead, methodology is crucial. WITHIN any arbitrary subject field, one can make almost any kind of idea or explanation untestable, if one crafts it to be so. It is this lack of testability (and therefore refutability) of explanations that makes an idea non-scientific. And, given the universality of the basic methodology to all empiricism and reasoning – lack of testability then becomes a fundamental sin in any field. NO “irrefutable” explanation would then be valid in ANY empirical OR reasoning question, as testing and falsifiability are universal requirements for valid thinking in or outside science.
This was an issue that first drove Popper to try to define the demarcation – when he found both Freudian psychology, and Marxist historicism, to both claim to be scientific, but because they were irrefutable, he realized they could not be science, or valid in any way. Subsequent thinking about of both disciplines is that each COULD be articulated in ways that they ARE falsifiable, but that the advocates of both chose to instead craft them as unfalsifiably flexible claims. So once more, it is not the subject , or the type of hypothesis, that is non-scientific, but instead the way it is structured by its advocates. This point clarifies a significant issue relative to empiricism, and the need for falsifiability. Both Marxist and Freudian thinking claimed to do testing. BUT, their tests were all confirmations, as the way they structured their theories, there COULD only be confirmatory tests! They were referencing a Humean model of empiricism, where truth is established by confirmatory tests, and one gets the greatest “confirmations” with a explanation that is compatible with all outcomes – IE irrefutable. This reinforces Popper’s insight - fasifiability is needed for actual science. Confirmations invite confirmation bias, which in its purest form is unfalsifiable pseudoscience.
The boundary between science and craftsmanship, or science and philosophy is a fuzzy one. Science is characterized by formalization of the basic methodologies used in its field, and formalization is an analog process. And sciences have historically developed out of other larger fields, and the maturation of “formalism” for a particular science is an incremental development. Therefore, the boundary of science cannot be readily specified by field, or formalism, or maturity. The only well defined boundary is with pseudo science and irrefutability.
This model also has a bearing on a variety of significant metaphysical questions around the boundaries of science.
• A variety of claims have been made that science must assume physicalism. But in this model, physicalism is a complex explanation, which is built up from a network of assumptions, all of which were developed using the empirical process. IE physicalism is an EXPLANATION, and MUST be testable, if it is to be valid. And testing physicalism cannot presuppose physicalism, hence no – science CANNOT presuppose physicalism, and such a claim qualifies as the most rejected category of claim: pseudo science.
• Even further than the point above about how physicalism can not be assumed – this model is in several ways intrinsically contrary to physicalism. Physicalism after all is inferred from more fundamental information and models, and these model are by definition NOT physicalist. Hence, for physicalism to be “true” it has to be inferred from prior models which would then need to be re-interpreted, requiring its founding justifications to have either major correction or rejection as fundamentally mistaken, when seen in physicalist terms. The physicalist worldview then is in conflict with its own supporting rationale.
• There are strong opinions which have been expressed on both sides of the concept of studying (or deriving) morality or ethics as a science question. This view is supportive of the idea in concept, as all possible subjects are capable of becoming sciences. But the process of maturation and formalization of a field of study must proceed to a significant degree before a field can become a science. The question then becomes, not whether this COULD become a science – to that the answer is “yes”. But instead the question is whether the concepts, and methodologies of developing moral thinking are mature and settled to the degree of being a “science”? To which I consider the answer to be a pretty definitive “no”!
• There is a claim that studies of non-physical fields are NOT science, by definition from the first point above. This would include parapsychology, astrology, and non-traditional medicines. By the terms of this model of science, this claim is false. All of these COULD be legitimate subjects of science. To be so, they would need to be characterized in refutable terms, and the study of them advanced to the point that a methodology and concepts can be at least partially formalized. If those fields are not science today, they can be legitimately studied, with a goal of turning them into a science.
• There is a claim that Gods, spirits, or dualism cannot be studied by science. This claim is sometimes justified by the first claim above, that science must be physical, but is also sometimes justified by the claim that none of Gods, spirits, nor dualism can be characterized in a refutable fashion. This claim is also false, per the model I have outlined. These subjects are not intrinsically untestable. God claims CAN be testable – Epicurus specified one such test in the Problem of Evil. The issue, is that most advocates of God and spirit claims, have deliberately crafted them to be untestable. It is the advocates at fault here, not the basic ideas they are working with.
In evaluating alternatives to this view I have outlined, I tried looking at a variety of sources for plausible alternatives. One set of references provided me with critiques of realism. Here is a link the SEP’s “Challenges to Metaphysical Realism” https://plato.stanford.edu/entries/realism-sem-challenge/ . I also posted a question to Philosophy Stack Exchange about what the reasons were to question indirect realism, and the most useful answer pointed to Sellars’ “Empiricism and the Philosophy of Mind.”
I did not consider any of these references to provide a particularly useful alternative to realism. The SEP discussion is primarily semantically based, and I don’t see semantics arguments as nearly powerful enough to base any kind of metaphysics on. How, for example, could philosophers intuitions about whether Oscar on earth and Toscar on Twin-earth have the same or different knowledge of H2O-water or XYZ-water, provide the basis to draw ANY conclusion about reality or metaphysics? And while describing the Great Deceiver hypothesis in terms of “brain in a vat” is logically self-contradictory, one should not draw any broad conclusion from this, when one can just as easily describe it in terms of an “evil demon” which is not contradictory at all. And Sellars’ argument against foundationalism is similarly “semantic”, as it relies upon some linguistic forms being necessary for “knowledge” while others are insufficient. If one does not speak English, or accept his definitions, or accept his usage claim, then his conclusion that one cannot do foundationalism – is simply unsupported. Also – bizarrely -- both Sellars and Putnam are realists, yet they are the primary source material cited by anti-realists in these references!
My takeaway from these efforts to find a credible argument for anti-realism was – the only case of consequence was that made by poster Geoffrey Thomas in response to my Stack Exchange question, who argued that we cannot actually distinguish between indirect realism and instrumentalism.
I also reviewed the Internet Encyclopedia of Philosophy, which in its Object of Perception entry https://www.iep.utm.edu/perc-obj/ outlines a variety of alternatives to the indirect realism I described above, at least in understanding perception. These are: direct realism, phenomenalism, Intentionality or representationalism, disjuctivism, and cognitive externalism. This page also listed three main objections to indirect realism, which were:
i. Dualism Many see a problem with respect to the metaphysics of sense data. Sense data are seen as inner objects, objects that among other things are colored. Such entities, however, are incompatible with a materialist view of the mind. When I look at the coffee cup there is not a material candidate for the yellow object at which I am looking. Crudely: there is nothing in the brain that is yellow. Sense data, then, do not seem to be acceptable on a materialist account of the mind, and thus, the yellow object that I am now perceiving must be located not in the material world but in the immaterial mind. Indirect realism is committed to a dualist picture within which there is an ontology of non-physical objects alongside that of the physical.
ii. Adverbialism Some see the argument from illusion as begging the question. It is simply assumed, without argument, that in the non-veridical case I am aware of some thing that has the property that the stick appears to me to have. It is assumed that some object must be bent. One can, however, reject this assumption: I only seem to see a bent pencil; there is nothing there in the world or in my mind that is actually bent. Only if you already countenance such entities as sense data will you take the step from something appears F to you to there is an object that really is F. Such an objection to indirect realism is forwarded by adverbialists. We can illustrate their claim by turning to other everyday linguistic constructions, examples in which such ontological assumptions are not made. “David Beckham has a beautiful free kick” does not imply that he is the possessor of a certain kind of object -- a kick -- something that he could perhaps give away or sell in the way that he can his beautiful car. Rather, we take this to mean that he takes free kicks beautifully. When one gives a mean-eye, one looks meanly at somebody else; one does not offer them an actual eye of some kind. Similarly, then, when one perceives yellow one is sensing in a yellow manner, or yellowly. Our perception should be described in terms of adverbial modifications of the various verbs characteristic of perception, rather than in terms of objects to which our perceptual acts are directed.
iii. The Veil of Perception: Indirect realism invokes the veil of perception. All we actually perceive is the veil that covers the world, a veil that consists of our sense data. What, then, justifies our belief that there is a world beyond that veil? In drawing the focus of our perception away from the world and onto inner items, we are threatened by wholesale skepticism. Since we can only directly perceive our sense data, all our beliefs about the external world beyond may be false. There may not actually be any coffee cups or olive oil tins in the world, merely sense data in my mind. However, for this to be a strong objection to indirect realism, it would have to be the case that direct realism was in a better position with respect to skepticism, but it is not clear that this is so. The direct realist does not claim that his perceptions are immune to error, simply that when one correctly perceives the world, one does so directly and not via an intermediary. Thus, things may not always be the way that they appear to be, and therefore, there is (arguably) room for the sceptic to question one-by-one the veracity of all our perceptual beliefs.
Objections i and iii are examples of fallacious motivated reasoning. That one WANTS materialism to be true, WANTS certainty, and WANTS to reject radical skepticism as definitively refuted – these are not actually valid reasons to reject indirect realism. Objection ii presupposes adverbialism, whose weaknesses I will discuss later.
There were two things that struck me in particular in this IEP discussion. First – as a pragmatic empiricist, I would really like to see a discussion of whether, if one applies indirect realism to perception, one can make predictions and pass test cases with it. And I know of three such tests – each of which has shown dramatic success. If indirect realism is true, then babies should develop the processing capabilities to build up their internal models over time, while under direct realism this should be “direct”. And babies DO build up this capability over time, with clearly observed tiers of models! Also, one should be able to sometimes see base qualia, and sometimes the digested “perceptions” with complex concepts pre-embedded, under different circumstances. And we also see this. (As an aside – the inability of our introspection to clearly reveal qualia is often cited as an argument against indirect realism. But this is no objection, as this model predicts that qualia will be only occasionally perceivable, and that they will be commingled with perception of various tiers of post-processed models and concepts.) And once we start dissecting the neurology, we should also be able to see how our neural structure sorts and binds to arrive at the interim processing states (face detection, smile/frown, etc.) that build up the complex concepts that a well-trained neurology also provides us, instead of or in addition to qualia. And we have seen these neural segmentation and binning processes in action. These are EMPIRICAL SUCCESSES, where modern neuroscience, perceptual psychology, and child development all support the indirect realism model. When one is discussing the validity of a model about how our world works, this should be critical information! Yet this discussion only mentions one of these in passing, and dismisses it as a “weak” argument for indirect realism!
Also, the rhetoric about indirect realism not being “naturalized” strikes me as particularly silly. All that is meant is that it is not easily integrated with materialism, which has nothing to do with methodological naturalism! As none of these alternate approaches to perception involve tossing out indirect realism in physics, chemistry or biology, indirect realism is a common feature of methodological naturalism as applied to every other field. Darwin used indirect realism to infer the theory of evolution, the valence theory of chemical bonding was developed under an indirect realism approach to valences, and the quark theory of subatomic particles is also an indirect realism theory. Science, and methodological naturalism, would still rely upon the indirect realism inference process should one accept any of these alternative views of perception! So, what these alternatives call for is a SPECIAL PLEADING EXCEPTION for perception from this otherwise universal methodological naturalism. IE these alternate theories of perception are NON-NATURALIZED; only the indirect realism model is naturalized! Basically, the entire reason these alternatives are being considered at all for perception is that most of the adherents of the other views think materialism is refuted if one applies indirect realism to perception too!
Of the alternatives to indirect realism, direct realism is the most commonly held by the public at large. This view is often called common-sense realism. It holds that things like chairs, the redness of apples, and the beauty of a sunset are really out there, and are directly perceived by us. Common-sense realism is in explicit conflict with science: chairs are not a part of physics but are basically an invented purpose for objects that conveniently have a useful shape to sit on; apples are not red, as is easily illustrated by looking at an apple under low light or colored light; and the “beauty” of a sunset is created by us and our psychological conception of beauty. Science is not the only thing that refutes direct realism; imagination does too. The stop sign in my mind’s eye right now is perceived, yet it is not real. Perception is therefore NOT direct! I do not consider direct realism to be at all credible.
Meanwhile, this reference argues that direct realism provides no more certainty against radical skepticism than indirect realism does, AND that there are materialist variants of indirect realism (one of them from Sellars no less), so neither of the preferences motivating the rejection of indirect realism is even justified.
Phenomenalism, the idea that perceptions are all there is and ALL physical objects are just mental artifacts, at least passes some of the first-order refutation tests that direct realism fails. But when one tries to make predictions with it – IE that since we seem to experience a shared world, our phenomena should be shared – these predictions of phenomenal universality fail pretty much utterly. “Redness” is not perceived by the color-blind, and the beauty of sunsets has a variable phenomenology. Phenomenalism was a popular view for decades, but it has almost disappeared due to failed predictions.
Adverbialism and intentionalism are both part of what I consider a grossly mistaken linguistic turn in philosophy in the middle of the 20th century, in which what one can say about the world was considered, somehow, to define how the world really is. Language as a precondition for perception can be fairly easily refuted by test. Most perception is non-verbal – as you can easily demonstrate for yourself. Look up from your screen, and just look around. THEN specify, in words, EVERYTHING you have just seen! How many pages of type would be required to describe even a single perspective from a stationary POV? And did you just perceive this? I hope you realize the answer is an unquestionable “yes”! Yet how much of that scene translated to words in your head while you were looking? 1%? Less? 0%!!!? A linguistically defined theory of perception is simply wrong.
Disjunctivism premises that “real” perceptions are different from illusory perceptions (like my imagined stop sign), even if we cannot tell them apart. As such, it is an untestable assertion of dogma, and is pseudoscience, and simply wrong.
Of the alternatives to indirect realism, the one with the most current vigor is cognitive externalism. There are a variety of very interesting ideas in cognition that are based on aggressive approaches to externalism, and a hint of this can be drawn from this IEP page. A link that I have unfortunately lost provided what I found to be a very useful understanding of what cognitive externalists are doing. It described them as starting philosophy “in the middle”, with humans interacting with the world. Humans, world, interaction, and cognition become starting premises from this “middle” point, and one can then build up to a theory of science, and down to a fundamental metaphysics, from there. And because of the extended nature of that middle-state interaction, this approach acknowledges the weak boundaries between human, world, cognition, etc., unlike foundationalist or reductionist thinking.
I applaud the insights that extended cognition thinking offers about how many of our mental artifacts, such as language, are highly interactive, and I agree with its general anti-reductionist attitude. However, the “start in the middle” approach appears once more to be based on motivated reasoning. Starting from the bottom tends toward dualism (or reductive physicalism, or idealist phenomenalism). Starting from the middle – IF one specifies the middle entities one starts with “properly” – can lead to extended non-reductive physicalism, as the extended cognition advocates mostly do. But if, rather than “humans, cognition, world, and interaction”, one started with “world, self, other minds, reasoning, and interaction” as one’s initial pragmatic realities, then one would plausibly instead arrive at non-reductive InteractiveTriplism rather than non-reductive materialism. So while extended cognition leads to some useful insights about cognition itself, and strengthens the case for non-reductive worldviews, the approach used to argue for it once more looks like an unjustified “wish it were so” effort to find a special pleading rationalization to defend physicalism.
In addition to the online encyclopedias, I also reviewed two recent, highly regarded introductory books on the philosophy of science, to see if they offered additional insights into the sorts of questions I am interested in on metaphysics, and how to infer what might be real using reasoning and evidence. These were Samir Okasha’s Philosophy of Science: A Very Short Introduction and Peter Godfrey-Smith’s Theory and Reality. Okasha’s is much shorter and a quicker read than Godfrey-Smith’s, and gives a good overview of the history and current debates in philosophy of science. Godfrey-Smith covers more subjects, and in more depth, and is also very clearly written.
Neither book proved to be of much use in clarifying the issues I have been trying to address here. Okasha DID dedicate one of his chapters to realism/anti-realism. But that chapter was focused on realism vs. instrumentalism – and both are consistent with the hypothetico-deductive falsificationism I have outlined here; they just differ in the degree of trust one can place in the conclusions drawn. Godfrey-Smith, in his longer work, dedicated multiple chapters to realism and a variety of alternatives. However, he was more of an advocate than Okasha, and the POV he was advocating was a version of realism in which he supported all of: common sense realism, scientific realism, empiricism, methodological naturalism, and metaphysical naturalism. In the process, he papered over the conflicts between these views – and as the details of these conflicts are particularly significant for the issues I am interested in, his otherwise excellent reference was not of much use in this project.
However, I can use these identified shortcomings in Godfrey-Smith to elaborate on these issues myself, in ways that may help highlight issues of interest.
I mentioned earlier the conflict between scientific realism and common sense realism. Common sense realism holds that our medium-scale knowledge and experiences reflect a “real” world. This is a world in which apples exist, are red, and are solid; in which wet things are shiny; in which one can be virtuous or morally flawed; in which someone can be a friend; and in which calendars specify time. Scientific realism is often presented as physical reductionism – in which apples are just an assemblage of elementary particles, mostly filled with void and fields in between; color is just a sensation ascribed to detecting a certain wavelength of reflected photons; shininess is just the way coherently reflected light has a sharp angular peak and rapid drop-off; abstract objects like virtues and morality do not exist; and consciousness and any emotive feelings we have are an irrelevant byproduct of chemical reactions. Science need not be reductionist, and the dismissals of morality, emotions and abstract objects could be taken out of the above list, which would decrease the mismatch between common sense and scientific realism, but still leave a huge gap. I consider science to be committed to the non-reductive scientific realism I just described, which requires tossing much of common sense realism.
A reference which gets into this issue indirectly is Physicalism by Daniel Stoljar. Stoljar devotes most of this book to trying out, then refuting as inappropriate, one definition of physicalism after another. Ultimately, Stoljar rejects physicalism as a remnant view left over from the 19th century, when the matter we knew was all similar to the macro-scale objects we manipulate in everyday life. On this reading, physicalism is best understood as the thesis that all matter is like macro-scale solid matter. He concludes in his final chapter that physics has revealed the world to work so differently from our macro-scale intuitions that physicalism is simply false, and that the efforts to preserve the doctrine by recasting it to include the bizarre discoveries of modern physics have robbed the term of any useful content. This claim may not be the consensus view of physicalism, but if one substitutes “common sense realism” for physicalism in this summary, there would likely be few philosophers or physicists who would argue with it. Common sense realism is falsified by modern physics – and Godfrey-Smith’s efforts to paper this over do a disservice to this question.
The other two conflicts of note are between metaphysical naturalism – which is basically the assertion of physicalism as a presupposition – and, respectively, methodological naturalism and empiricism. Empiricism starts with observations, which, foundationally, are our qualia. But the foundationalist empirical methodology, when applied to perception, tends to strongly support a dualist view – contrary to physicalism. And the presupposition of a particular metaphysical conclusion entails an unfalsifiable approach to doing science relative to metaphysics, which would make metaphysical naturalism a pseudoscience view per the boundary definitions of methodological naturalism.
Based on my understanding of these points, one can be a scientific realist, an empiricist, and a methodological naturalist, with no conflicts. But if one accepts these POVs then one cannot be a common sense realist, nor may one ASSUME metaphysical naturalism. Physicalism could be a CONCLUSION from an investigation, but it cannot be a starting presumption.
What do you all think? Have I spelled out a consensus view of how science can and should work, and have I thoroughly and fairly explored the alternatives?
Well, this is a 9.5-page OP – I hope you folks brought your reading glasses today!
B&B orig: 7/30/19
A Missing Foundation to Reason?
Post by dcleve
This is an extension of my earlier discussion on the Munchausen Trilemma, and the difficulty of justifying any belief. Here, I offer a discussion on the same theme, with a look at classical reasoning, and whether, or why, we can or should trust its validity.
A key starting point in my discussion is informed by the thinking about mathematics, and Euclidean geometry in particular, over the last several centuries. For most of the history of thought, mathematics and geometry were treated as undeniable – they were just the way things were. Several centuries ago, however, mathematicians discovered they could make different postulates and derive completely different math systems. This brought the “necessity” of our math into question, but whether this was substantive or just a peculiar aberration was not settled. UNTIL – non-Euclidean geometry ended up being what fit our best model of physics.
This changed decisively how math is perceived. It is no longer considered undeniable, or a logical necessity. Instead, math is treated as an almost arbitrary formalism, which may or may not correspond with how the world seems to work.
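To make the point about alternative postulates concrete, here is a minimal sketch of my own (using the standard Girard's theorem for spherical triangles, not anything drawn from the post's sources) showing how dropping Euclid's parallel postulate changes a supposedly "necessary" truth: on a curved surface, a triangle's angles no longer sum to 180 degrees.

import math

# Girard's theorem: for a triangle drawn on a sphere of radius R,
# (sum of interior angles) = pi + (triangle area) / R^2.
# The Euclidean "angles sum to 180 degrees" only holds when the area term
# vanishes, i.e., in the flat, zero-curvature case.
def spherical_angle_sum(triangle_area, radius=1.0):
    return math.pi + triangle_area / radius**2

# Example: the "octant" triangle covering one eighth of a unit sphere
# (equator to the north pole, between two meridians 90 degrees apart).
octant_area = (4 * math.pi) / 8
print(math.degrees(spherical_angle_sum(octant_area)))  # ~270 degrees, not 180

General relativity describes gravity with exactly this kind of curved, non-Euclidean geometry, which is the sense in which non-Euclidean geometry "ended up being what fit our best model of physics."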
Less widely recognized is that logic is also a system of formalisms, and is plausibly likewise subject to arbitrary alternatives.
This becomes clearer, when one realizes that set theory is a branch of mathematics, AND that set theory is a key feature of formal logic.
I suggest that the traditional view of logic as the “laws of thought,” “the rules of right reasoning,” or “the principles of valid argumentation” https://www.britannica.com/topic/philosophy-of-logic is incorrect, and that instead one could postulate an almost infinite number of variant logics. This is a hypothesis that people have explored, and they HAVE come up with multitudes of self-consistent logic types, which would produce different truth outcomes from each other if applied to a problem.
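As a concrete toy illustration of "different truth outcomes" (my own sketch, not drawn from the linked references): in Lukasiewicz's three-valued logic, which adds an "indeterminate" value alongside true and false, the classical tautology "p or not p" stops being a tautology.

# Toy comparison of classical two-valued logic with Lukasiewicz's
# three-valued logic (L3). Truth values: 0 = false, 1 = true, and in L3
# also 0.5 = indeterminate. Negation is 1 - v; "or" takes the maximum.

def neg(v):
    return 1 - v

def disj(a, b):
    return max(a, b)

def excluded_middle_holds(truth_values):
    """Check whether 'p or not p' evaluates to fully true (1) for every value of p."""
    return all(disj(p, neg(p)) == 1 for p in truth_values)

print(excluded_middle_holds([0, 1]))       # classical logic: True
print(excluded_middle_holds([0, 0.5, 1]))  # L3: False (at p = 0.5, 'p or not p' is only 0.5)

Both systems are internally self-consistent; they simply disagree about which statements come out true, which is the sense in which alternate logics would produce different truth outcomes if applied to a problem.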
I further suggest that we humans seem to have an inborn basic reasoning skill: https://www.scientificamerican.com/article/babies-think-logically-before-they-can-talk/ http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.335.7003&rep=rep1&type=pdf
This basic reasoning is mostly effective, but has some major shortcomings: https://www.edge.org/conversation/daniel_kahneman-the-marvels-and-the-flaws-of-intuitive-thinking-edge-master-class-2011
When we use our inborn basic reasoning to critique itself, and then correct the discovered shortcomings, I think we basically end up with formal Aristotelian logic. What my suggestion holds, then, is that reasoning is radically contingent, and that the reasoning we have ended up with was the result of evolutionary tuning for effectiveness. This would make reasoning, and the rules by which we think, an empirical discovery, justified solely by its Darwinian success. This pragmatic, success-oriented, Darwinian justification for science and empiricism has generally been taken as DIFFERENT from the justification for accepting reasoning. The thinking I advance here is that they are both solely pragmatically justified.
The problem I discuss, and possible answers to that problem, are closely related to an interesting essay by one of the more philosophically insightful physicists, Lee Smolin. https://arxiv.org/pdf/1506.03733.pdf Smolin's focus is on rejecting any variant of platonic idealism, or any reality to Popper's World 3. Smolin accepts my premise with respect to math, and by identifying logic as a subset of math, he agrees with the thrust of my discussion. He treats logic and math as underlying features of physics, which could have been otherwise. How logic and math are created by physical substance, he does not know – that would become a further project for physics to explore. I offer Smolin's essay not as something I endorse, but as an indication that this is a subject, and a question, which our exploration of the relation between physics and math/logic is forcing upon us.
My second image is the cover of a book I have not read, but one which illustrates the potential for alternate logics. A portion of its abstract:
"Science tells us what is true; that is science's prerogative. But the universe has beauty and goodness as well as truth. How reconcile and unify? The pancalistic answer is that the good and the true is so because it is beautiful. The final court of appeal is aesthetic. Nothing can be true without being beautiful, nor anything that is in any high sense good. The ascription of beauty, a reasoned, criticised, thought-out ascription of æsthetic quality, is the final form of our thought about nature, man, the world, the all."

I know of at least one field that is a valid subject of study, and falls outside empiricism or reasoning, and that is aesthetics. That one can use aesthetics as a truth reference, and source of logic, which would be orthogonal to science, is a consequence of these observations. That someone tried to do this in a major philosophical work is unsurprising to me, but may be a surprise to some of the other posters here.
B&B orig: 6/12/19
The Münchhausen Trilemma
Post by dcleve
I am interested in a reasoning problem, which recurs repeatedly across multiple areas of my interest, and would like the members of this board’s thinking on this issue.
It is a standard requirement, widely accepted by those who are "reasonable", that one should only hold beliefs that are sufficiently justified. This is the Principle of Sufficient Reason (PSR). What counts as sufficient per the PSR is not specified, but it is generally taken to be less rigorous than logical proof. Anti-rationalists and anti-realists have taken aim at the PSR, and challenged whether it is ever satisfiable even at levels well below proof. One of the problems that the justification of beliefs runs into is that there appears to be no valid way to terminate justifications. The name for this critique is the Münchhausen Trilemma, or sometimes the Agrippa Trilemma.
The core problem for justifications is that one can always ask how the justification itself is justified. There are three basic answers identified in the Trilemma: an infinite series of justifications, circular justifications, or holding that some claims or information do not need justification. Because an infinite series of justifications has never been, and never could be, carried out by anyone, this solution is not achievable by anyone making a claim – and if it is required, then nothing satisfies the PSR. Declaring that some data or statements are undeniable and basic, and do not need to satisfy the PSR, is to abandon the PSR. And circularity is considered a definitive refutation of reasoning, per formal logic. The name of the trilemma mocks circularity, as Baron von Münchhausen was fabled to have pulled himself and his horse out of a quagmire by lifting up on his own hair.
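To make the structure of the trilemma concrete, here is a toy sketch of my own (under the simplifying assumption that justification can be modeled as "each claim points to the claim that justifies it"): chasing any chain of justifications can only end in one of the three ways the trilemma names.

def classify_justification(claim, justified_by, limit=1000):
    """Follow the chain of justifications from `claim` and report how it terminates."""
    seen = set()
    current = claim
    for _ in range(limit):
        if current in seen:
            return "circular justification"
        seen.add(current)
        if current not in justified_by:
            return "unjustified stopping point (a claim taken as basic)"
        current = justified_by[current]
    return "no end found within the limit (regress)"

# Hypothetical example chains:
chains = {
    "A": "B", "B": "C",             # C has no justification of its own
    "X": "Y", "Y": "Z", "Z": "X",   # X, Y and Z justify each other in a circle
}
print(classify_justification("A", chains))  # unjustified stopping point
print(classify_justification("X", chains))  # circular justification

Notably, a genuinely infinite chain cannot even be written down in a finite table like this, which mirrors the point that an infinite series of justifications can never actually be completed.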
While the Trilemma does not generally play a major role in politics or biology, it has regularly appeared as a problem bedeviling the advocates of one POV or another in some related fields that are discussed on this board: Cosmology; Theology (particularly Cosmological arguments); Morality; and the basis of science, knowledge in general, and reasoning. I will offer a few examples of “solutions” that have been proposed.
Most theology tends to treat deities as a Special Pleading exemption to the PSR – holding that they have some special feature that makes their existence unquestionable, IE a basic truth or fact. This sort of argument, of course, is not convincing at all to those who do not accept that particular theology. Another approach is to hold that a deity is eternal, and hence basically satisfies the infinite justification chain. That one can reasonably ask "why should one believe this exemption?", or "why did that particular eternal entity exist?", is a problem for theologians. Theologians often at least give SOME reasons for these assumptions – Aquinas's five proofs of God, for example, are an effort to at least partially satisfy the PSR.
Cosmologists have generally taken the same approaches as theologians – some postulating that the universe is an infinite series (bouncing universe, steady state universe, eternal inflation), thus satisfying the infinite regression; others holding that the existence of A universe is a basic and undeniable fact (a special pleading exemption from the PSR).
Hawking, in various writings, offered a variety of alternative claims, putting a lot of effort into trying to find ways around the trilemma. A few examples: in A Brief History of Time he argued that a closed object in space-time need not have a cause (IE if the universe winked out of existence after existing for 60 billion years, so that nothing is there at the 62-billion-year point, its transitory 60-billion-year existence would basically get a "no harm, no foul" exemption). In this argument he was drawing on the analogy of virtual particles – which both exist and don't, as they wink out of existence before they can do anything. However, the analogy does not work, in that the asserted existence of virtual particles itself requires justification (and has been justified), and one can ask for the justifications for the appearance, sustained existence, and then disappearance of his universe, AND for the justification of his exemption.
In Black Holes and Baby Universes, he argued that at its origin the universe was so small that the time and space dimensions became commingled, hence the exact time of the universe's origin is slightly indeterminate – and then claimed that anything with an indeterminate origin in time need not have a specified cause (an exemption to the PSR). WHY this would be a valid exemption is not clear, at least to me. And why it would not apply generally to everything (as per the Heisenberg uncertainty principle, the exact time of all events, including all origins, is somewhat indeterminate), and would therefore be a wholesale repudiation of the PSR in all applications, is unclear to me as well.
Many of the Greek thinkers considered geometry, mathematics, and reasoning to be unquestionable, and the rationalist program in philosophy has sought to ground all knowledge in these sorts of rational truths. This is to hold that some basic logical facts are unquestionable, and need not satisfy the PSR. Following them, Frege, then Russell and Whitehead, tried to derive all of mathematics from formal logic. They failed, and Gödel showed why, with his incompleteness theorem. Meanwhile, the development of non-Euclidean geometry, and of non-standard logics, has undercut any claim that a particular logic or mathematics is "basic" or undeniable. Not only has other knowledge not been shown to be derivable from logic, but by my understanding of the state of the field, logic itself is now subject to the trilemma.
Science has also struggled with the Trilemma. A number of approaches can be summarized: Descartes famously declared his selfhood, his reasoning, and God to be undeniable, and built up a worldview from these three foundational exceptions to the PSR. Phenomenalism treated sensation as an undeniable basic. The Logical Positivists treated reasoning as undeniable, and scientific induction as a close enough approximation to reasoning. And naïve realism holds that the external world is undeniable.
In opposition to these exemptions to the PSR, which came primarily from the first half of the 20th century, most philosophy of science over the last half century plus has taken a primarily circular approach to justification. Quine argued for a radical wholism in which all of science is self-supporting. The later Wittgenstein agreed that all scientific propositions are questionable – but held that one can't question all of them at the same time. EO Wilson's criterion of consilience for accepting a claim is wholistic. And the ultimate justification for accepting science and empiricism as a truth method is the circular empiricist argument that empiricism works well to gain knowledge!
The principle behind all of these is that if one makes the circle of a circular argument large enough, then at some point it is no longer a refuting fallacy to be circular. The baron may not have been able to pull on his own hair to get himself out of the mud, but if he pulled on his horse's mane, the horse lifted its head and pushed against the baron, lifting him in the stirrups, and then by gripping the horse with his thighs the baron may have been able to lift them out. All it took was a four-step circle, not a two-step one ;-).
I am not convinced that any of these efforts to evade the Trilemma, in any of these fields, have been successful.
I welcome the insights of fellow posters on this question.
B&B orig: 4/7/19
America's Broken Politics: Moscow Mitch
America's downward spiral in political rhetoric includes labeling people and political figures with various names. For unknown reasons, the 'Moscow Mitch' label applied to Senate majority leader McConnell has set him off. He claims he has also been called a Russian asset, unpatriotic, un-American, etc., which is probably true. This ~3½ minute video discusses the origin of the label as coming from former Republican congressman Joe Scarborough on MSNBC's Morning Joe show. McConnell blames left wing media for the outrageous label, and finds it deplorable that political rhetoric has sunk to such a low level.
This 5½ minute video shows some of the public in Kentucky being unhappy with McConnell and calling him Moscow Mitch.
A local newspaper in Kentucky reported that McConnell touts "his record reshaping the federal judiciary and how he "saved the Supreme Court for a generation" by blocking President Barack Obama's pick in 2016. He bragged about his reputation as the "Grim Reaper" for killing the progressive measures coming out of the Democratic-controlled House." That kind of rhetoric sounds rather in-your-face, so maybe it is not surprising that he is being targeted by harsh rhetoric. What goes around sometimes comes back around.
The Los Angeles Times commented: "Last month the Democratic-controlled House approved the Securing America’s Federal Elections (SAFE) Act, which requires that states use “individual, durable, voter-verified” paper ballots during federal elections. The House also has appropriated an additional $600 million in aid to the states to enhance election security, a recognition that more federal assistance is needed to help update archaic election systems.
But the Republican majority in the Senate continues to block action on the SAFE Act and other legislation inspired by Russia’s interference, including proposals to require candidates to report offers of information from foreign countries."
It would seem that there is nothing wrong or partisan about Congress trying to defend elections and requiring candidates to report offers of information from foreign countries, which is something the Trump campaign refused to do in 2016. Presumably, Trump will again refuse to do that if the Russians or anyone else offers to help his campaign in the 2020 elections, legal or not.
Given McConnell's actions and rhetoric in the Senate, or more precisely, his open pride in being the Grim Reaper, are labels such as Moscow Mitch unfair? Does it matter that, by his silence, McConnell condones name calling, including racist comments, by some populists, GOP politicians and especially Trump? Is the name calling more socially harmful than helpful? Under current political circumstances, is there another, less emotional, path for American politics to follow? Or, is this rancor and hate the only plausible way forward at present?
B&B orig: 8/4/19
Friday, August 2, 2019
Opinion vs. Libel
A Washington Post article describes court action in a $250 million libel suit a Kentucky high school student brought against the Washington Post. This illustrates how fuzzy the line between libel and opinion can be.
U.S. District Judge William O. Bertelsman ruled that seven Post articles and three of its tweets bearing on Nicholas Sandmann — who was part of a group of Catholic students from Kentucky who came to Washington to march against abortion — were protected by the First Amendment. In analyzing the 33 statements over which Sandmann sued, the judge found none of them defamatory; instead, the vast majority constituted opinion, he said.
“Few principles of law are as well-established as the rule that statements of opinion are not actionable in libel actions,” Bertelsman wrote, adding that the rule is based on First Amendment guarantees of freedom of speech. “The statements that Sandmann challenges constitute protected opinions that may not form the basis for a defamation claim.”
Sandmann’s parents, who brought the suit on their son’s behalf, said they would appeal. “I believe fighting for justice for my son and family is of vital national importance,” Ted Sandmann said in a statement. “If what was done to Nicholas is not legally actionable, then no one is safe.”
In his suit, Nicholas Sandmann claimed that the “gist” of The Post’s first article, on Jan. 19, was that he “assaulted” or “physically intimidated Phillips” and “engaged in racist conduct” and taunts.
“But,” the judge wrote, “this is not supported by the plain language in the article, which states no such thing.”
What is the standard for libel?: WaPo writes that “the judge’s opinion cited case law noting that statements must be ‘more than annoying, offensive or embarrassing’. They must expose the allegedly libeled party to public hatred, ridicule and contempt, among other damaging elements. ‘And while unfortunate, it is further irrelevant that Sandmann was scorned on social media’, the judge wrote.”
One can see at least one basis for an appeal in the judge’s comment that Sandmann was scorned on social media. These days, polarized people tend to react or overreact on social media. That arguably amounts to exposure to public hatred, ridicule and/or contempt. The Supreme Court, being the polarized political beast it is now, could rule 5-4 against the WaPo, arguing that the scorn Sandmann was subject to on social media amounted to libel.
The WaPo’s defense, i.e., everything it reported was either true, was opinion and/or was not directed at Sandmann, will be seen through the eyes of partisan Supreme Court judges. Being human, judges cannot help but see truth and non-truth through their own personal filters. When political partisans are picked for judges, one gets partisan results based on partisan versions of facts, non-facts, truth and non-truth.
Just how thick or thin is the ice that America's free press skates on? Maybe this case will shed some light on that in a couple of years. If President Trump has his way, the WaPo would lose and be sued into oblivion. Will Trump judges feel the same way? That’s the interesting question.
B&B orig: 7/27/19