Post by dcleve
This discussion is the third in a series of three related discussions; the first is The Münchhausen Trilemma, and the second is A Missing Foundation to Reason?.
The question of what science and empiricism should consist of, and what alternate views on this are credible, is pretty important to me, and I think of interest to this board. I try to dialogue with other thinkers, and for me a crucial question is whether one can do science about, and/or use science conclusions to derive, metaphysical views. Can one use reasoning and evidence to evaluate what might or might not be true about the metaphysics of our world? Whether this is possible or not is, I think, a pretty significant question. I do this, using both reasoning and empiricism, and I consider this combined suite of tools to be metaphysical naturalism. The method I use for empiricism is the Indirect Realism which I consider common across all of science.
I undertook to investigate the credibility of the Indirect Realism model which I use, and what its possible alternatives are, and I will discuss what I think I discovered in my (possibly incomplete) research on how to do science. My focus is primarily on dualism/physicalism, and will later extend to God/Spirit questions, as these are the major questions I am hoping to support discussion of!
What I will try to describe is both a worldview, and a methodology. The methodology is hypothetico-deductive empiricism. I consider this methodology to be widely generalized in human practice, and to be formalized in the sciences. There are several key features to this methodology (a toy sketch in code follows the list):
• Observations generate preliminary “facts”
• Speculation about those “facts” and the postulation of possible/plausible explanations
• Recognition that the truth/falsity of those explanations is not directly determinable (provided they satisfy logic and coherence criteria), and that there can be many such explanations of potentially significantly different forms (theory is underdetermined by evidence)
• Evaluation between possible explanations is done by testing, by making predictions with them, and seeing if those predictions work, or not.
• While an explanation may end up very, very well supported, it is still not certain, and never will be.
• “Facts” could be primary data, but need not be, and generally are not. Most observations are heavily theory-laden, and hence almost all “facts” are actually just well-supported explanations.
• Facts and explanations therefore build upon each other, creating a complex multi-tier structure of assumptions upon assumptions, many times over. Most questions of interest rely upon an extensive and often unidentifiably complex tier of these assumed explanations, which have proven highly reliable, but none of which are certain.
• Note that this makes most “facts” themselves fallible. Uncertainty and fallibilism are baked into this methodology. This methodology is, and has to be, robust enough to accommodate error.
• This methodology is “foundationalist” in structure, but does not rely on unquestionable first principles or observations. It is compatible with reductionism and also with emergence, or other alternatives.
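To make the loop above concrete, here is a minimal toy sketch in Python – entirely my own invented illustration, with made-up data and made-up candidate “explanations”, not anything from the sources discussed here – of how observation, underdetermination, testing, and fallible retention fit together:

```python
# A toy sketch of the hypothetico-deductive loop described in the list above.
# All data and candidate "explanations" are invented for illustration.

# Preliminary "facts": observed (input, output) pairs.
observations = [(1, 2.0), (2, 4.1), (3, 6.0)]

# Two candidate explanations fit the same sparse data equally well:
# theory is underdetermined by evidence.
candidates = {
    "doubling":   lambda x: 2 * x,
    "saturating": lambda x: min(2 * x, 6.0),  # levels off above x = 3
}

def survives(predict, data, tolerance=0.2):
    """An explanation survives testing if every prediction lands near the data."""
    return all(abs(predict(x) - y) <= tolerance for x, y in data)

surviving = [name for name, f in candidates.items() if survives(f, observations)]
print(surviving)  # ['doubling', 'saturating'] -- both survive; neither is proven

# A new observation refutes one survivor; the other remains well supported,
# but is still not certain, and never will be.
observations.append((4, 8.0))
surviving = [name for name, f in candidates.items() if survives(f, observations)]
print(surviving)  # ['doubling']
```

The point of the sketch is structural: nothing in it ever proves “doubling”; it merely survives every test so far, and the next observation could still refute it.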
The worldview is:
• Realism – there is an “out there”.
• It is indirect realism, in that we don’t have access to most of the reality of what is “out there” and must infer/postulate it, with our “explanations”.
• These explanations, as one applies successive processes of test, refutation, and revision, are gradually bringing us closer to a “true” understanding of what is “out there.”
• We do have limited direct access, and that access is our consciousness.
• Consciousness gives us knowledge of essential qualia, and provides us with a starting point for reasoning.
• Despite being “direct” these too are fallible inputs, and we often reason incorrectly, and can be mistaken about qualia.
• Direct access to our qualia, and to our reasoning, is further complicated by a feature of human minds: while consciousness is critical when learning a subject, once a problem is mastered consciously, we embed that mastery in unconscious processes, and are no longer aware of how we solve it. Our neurology is such that many of the models we initially develop consciously shift to become embedded as unconscious brain functions. So our “perceptions” of things like a spherical shape, functional objects like chairs, etc., are not qualia, but are post-processed integrations of qualia data into higher-level constructs. This post-processing is done by our unconscious, and the experience of these model-constructed perceptions is often overlaid upon, or supplants, the actual experience of qualia. This confuses our ability to sort out what is or isn't “foundational” through introspection.
• This is true of multiple key complex “explanations” that our thinking relies upon. As toddlers we develop a theory of self, a theory of the world, a theory of other minds, a set of corrections to our intrinsic reasoning, and a language ability. All of these are networks of complex explanations and working models, which we may have been aware of developing while we were toddlers, but have since then been driven into our unconscious operating assumption set, and the memory of our constructing them is generally lost. This radically obscures our ability to recognize our qualia through introspection.
As initially articulated here, this model of the empirical methodology and of the world could lead to an underestimate of the complexity of thinking, and of the scientific process. Explanations do not just rely upon data; they come in bundled collections of interlocking explanations, which often have to be evaluated as a collective, rather than each being evaluated individually. Those networks can also often be easily “tweaked” with secondary assumptions that affect neither predictive power nor the recognizability of the core assumptions when a contrary observation is discovered, such that the importance of “critical refuting tests” can be overstated.
Both of these weaknesses (looking at questions in too much isolation, and overstating what could be shown by critical tests) were present in Popper's early articulation of many of these ideas. These weaknesses are correctable, with a variety of tweaks to “explanations” and “falsification”. Treating explanations as a collective, with the collective including both primary and secondary hypotheses, is one method. Another is to treat explanations as a family of closely related views, not as distinct individuals. A third is to treat a family of approaches to explanations as a common “research programme”, and look not to explicit falsifications, but to exploratory utility in research, or to cumulative explanatory failings, as support or refutation. And tests can be treated not as individually definitive, but as contributors to either consilience or decoherence, as measures of goodness of an explanation family. These alternatives use somewhat different terms and methodologies to evaluate science claims, but all follow variants of hypothetico-deductive/falsification-based thinking. (A toy sketch of this cumulative-scoring idea follows.)
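Here is a small sketch – again my own invented illustration, with hypothetical test records – of the difference this makes: no single test outcome decides anything, and explanation families are compared by cumulative track record rather than refuted by one failure:

```python
# Toy illustration of "consilience" scoring for explanation families.
# Hypothetical test records: True = successful prediction, False = failure.
test_records = {
    "family_A": [True, True, True, False, True],    # one anomaly, broad success
    "family_B": [True, False, False, False, True],  # cumulative explanatory failings
}

def consilience_score(results):
    """Net support: successes add, failures subtract; no single result decides."""
    return sum(+1 if ok else -1 for ok in results)

for family, results in sorted(test_records.items(),
                              key=lambda kv: consilience_score(kv[1]),
                              reverse=True):
    print(family, consilience_score(results))
# family_A  3  -- retained as the better-supported research programme
# family_B -1  -- degenerating, but not "refuted" by any one test
```

Note that family_A survives despite a failed test – the anomaly lowers its score without killing it – which is exactly the correction to naive single-test falsificationism being described above.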
There are a variety of significant implications, both for science, and for overall worldview, that the above construct supports.
One of the most noteworthy is the near-universality of science. Empiricism, after all, is not limited to science. We use empirical principles when learning most subjects – such as how to kick a ball, or how to fletch an arrow. We also use variants of the hypothetico-deductive/falsification process in most aspects of reasoning. Science is then embedded in a broader view of how we humans think and interact with the world in general, and is on a continuum with the rest of practical life. This leads to no SUBJECTS being excludable from science. One of the major questions in science has been the “demarcation problem”, and this POV does not delimit the boundaries of science by subject at all.
Instead, methodology is crucial. WITHIN any arbitrary subject field, one can make almost any kind of idea or explanation untestable, if one crafts it to be so. It is this lack of testability (and therefore refutability) of explanations that makes an idea non-scientific. And, given the universality of the basic methodology to all empiricism and reasoning – lack of testability then becomes a fundamental sin in any field. NO “irrefutable” explanation would then be valid in ANY empirical OR reasoning question, as testing and falsifiability are universal requirements for valid thinking in or outside science.
This was the issue that first drove Popper to try to define the demarcation – he found that both Freudian psychology and Marxist historicism claimed to be scientific, but because they were irrefutable, he realized they could not be science, or valid in any way. Subsequent thinking about both disciplines is that each COULD be articulated in ways that ARE falsifiable, but the advocates of both chose instead to craft them as unfalsifiably flexible claims. So once more, it is not the subject, or the type of hypothesis, that is non-scientific, but instead the way it is structured by its advocates. This point clarifies a significant issue relative to empiricism, and the need for falsifiability. Both Marxist and Freudian thinking claimed to do testing. BUT their tests were all confirmations, as the way they structured their theories, there COULD only be confirmatory tests! They were referencing a Humean model of empiricism, where truth is established by confirmatory tests, and one gets the greatest “confirmations” with an explanation that is compatible with all outcomes – IE irrefutable. This reinforces Popper's insight – falsifiability is needed for actual science. Confirmations invite confirmation bias, which in its purest form is unfalsifiable pseudoscience.
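One standard way to make this point formal – my gloss, drawing on the Bayesian odds form of confirmation, not something from Popper or from the sources discussed here – is that an explanation crafted to be compatible with every possible outcome can never gain support from any test:

```latex
% Posterior odds on hypothesis H after evidence E (odds form of Bayes' theorem):
\[
  \frac{P(H \mid E)}{P(\lnot H \mid E)}
  = \frac{P(E \mid H)}{P(E \mid \lnot H)} \times \frac{P(H)}{P(\lnot H)}
\]
% An explanation crafted to be compatible with every possible outcome
% forbids nothing, so it assigns each outcome roughly the probability
% it would have anyway: P(E|H) is close to P(E|not-H). The likelihood
% ratio then sits near 1 for every E, and no test, however often
% "passed", ever moves the posterior odds.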
The boundary between science and craftsmanship, or science and philosophy, is a fuzzy one. Science is characterized by formalization of the basic methodologies used in its field, and formalization is a matter of degree. And sciences have historically developed out of other, larger fields, so the maturation of “formalism” for a particular science is an incremental development. Therefore the boundary of science cannot be readily specified by field, or formalism, or maturity. The only well-defined boundary is with pseudoscience and irrefutability.
This model also has a bearing on a variety of significant metaphysical questions around the boundaries of science.
• A variety of claims have been made that science must assume physicalism. But in this model, physicalism is a complex explanation, built up from a network of assumptions, all of which were developed using the empirical process. IE physicalism is an EXPLANATION, and MUST be testable if it is to be valid. And testing physicalism cannot presuppose physicalism – hence no, science CANNOT presuppose physicalism, and such a claim qualifies as the most rejected category of claim: pseudoscience.
• Even beyond the point above that physicalism cannot be assumed – this model is in several ways intrinsically contrary to physicalism. Physicalism, after all, is inferred from more fundamental information and models, and these models are by definition NOT physicalist. Hence, for physicalism to be “true”, the prior models it was inferred from would need to be re-interpreted, requiring its founding justifications to undergo either major correction or rejection as fundamentally mistaken, when seen in physicalist terms. The physicalist worldview is then in conflict with its own supporting rationale.
• Strong opinions have been expressed on both sides of whether morality or ethics can be studied (or derived) as a science question. This view is supportive of the idea in concept, as all possible subjects are capable of becoming sciences. But the process of maturation and formalization of a field of study must proceed to a significant degree before a field can become a science. The question, then, is not whether this COULD become a science – to that the answer is “yes” – but whether the concepts and methodologies of developing moral thinking are mature and settled to the degree of being a “science”. To which I consider the answer to be a pretty definitive “no”!
• There is a claim that studies of non-physical subjects are NOT science, by definition, per the first point above. This would include parapsychology, astrology, and non-traditional medicines. By the terms of this model of science, this claim is false. All of these COULD be legitimate subjects of science. To be so, they would need to be characterized in refutable terms, and the study of them advanced to the point that a methodology and concepts can be at least partially formalized. If those fields are not science today, they can be legitimately studied, with the goal of turning them into a science.
• There is a claim that Gods, spirits, or dualism cannot be studied by science. This claim is sometimes justified by the first claim above, that science must be physical, but also sometimes by the claim that none of Gods, spirits, nor dualism can be characterized in a refutable fashion. This claim is also false, per the model I have outlined. These subjects are not intrinsically untestable. God claims CAN be testable – Epicurus specified one such test in the Problem of Evil. The issue is that most advocates of God and spirit claims have deliberately crafted them to be untestable. It is the advocates at fault here, not the basic ideas they are working with.
In evaluating alternatives to the view I have outlined, I tried looking at a variety of sources for plausible alternatives. One set of references provided me with critiques of realism. Here is a link to the SEP's “Challenges to Metaphysical Realism”: https://plato.stanford.edu/entries/realism-sem-challenge/ . I also posted a question to Philosophy Stack Exchange about what the reasons were to question indirect realism, and the most useful answer pointed to Sellars' “Empiricism and the Philosophy of Mind.”
I did not consider any of these references to provide a particularly useful alternative to realism. The SEP discussion is primarily semantically based, and I don't see semantic arguments as nearly powerful enough to base any kind of metaphysics on. How, for example, could philosophers' intuitions about whether Oscar on Earth and Toscar on Twin-Earth have the same or different knowledge of H2O-water or XYZ-water provide the basis to draw ANY conclusion about reality or metaphysics? And while describing the Great Deceiver hypothesis in terms of a “brain in a vat” is logically self-contradictory, one should not draw any broad conclusion from this, when one can just as easily describe it in terms of an “evil demon”, which is not contradictory at all. And Sellars' argument against foundationalism is similarly “semantic”, as it relies upon some linguistic forms being necessary for “knowledge” while others are insufficient. If one does not speak English, or accept his definitions, or accept his usage claim, then his conclusion that one cannot do foundationalism is simply unsupported. Also – bizarrely – both Sellars and Putnam are realists, yet they are the primary source material cited by anti-realists in these references!
My takeaway from these efforts to find a credible argument for anti-realism was that the only case of consequence was the one made by poster Geoffrey Thomas in response to my Stack Exchange question, who argued that we cannot actually distinguish between indirect realism and instrumentalism.
I also reviewed the Internet Encyclopedia of Philosophy, whose Objects of Perception entry https://www.iep.utm.edu/perc-obj/ outlines a variety of alternatives to the indirect realism I described above, at least in understanding perception. These are: direct realism, phenomenalism, intentionality or representationalism, disjunctivism, and cognitive externalism. This page also lists three main objections to indirect realism, which were:
i. Dualism
Many see a problem with respect to the metaphysics of sense data. Sense data are seen as inner objects, objects that among other things are colored. Such entities, however, are incompatible with a materialist view of the mind. When I look at the coffee cup there is not a material candidate for the yellow object at which I am looking. Crudely: there is nothing in the brain that is yellow. Sense data, then, do not seem to be acceptable on a materialist account of the mind, and thus, the yellow object that I am now perceiving must be located not in the material world but in the immaterial mind. Indirect realism is committed to a dualist picture within which there is an ontology of non-physical objects alongside that of the physical.
ii. Adverbialism
Some see the argument from illusion as begging the question. It is simply assumed, without argument, that in the non-veridical case I am aware of some thing that has the property that the stick appears to me to have. It is assumed that some object must be bent. One can, however, reject this assumption: I only seem to see a bent pencil; there is nothing there in the world or in my mind that is actually bent. Only if you already countenance such entities as sense data will you take the step from something appears F to you to there is an object that really is F. Such an objection to indirect realism is forwarded by adverbialists. We can illustrate their claim by turning to other everyday linguistic constructions, examples in which such ontological assumptions are not made. “David Beckham has a beautiful free kick” does not imply that he is the possessor of a certain kind of object -- a kick -- something that he could perhaps give away or sell in the way that he can his beautiful car. Rather, we take this to mean that he takes free kicks beautifully. When one gives a mean-eye, one looks meanly at somebody else; one does not offer them an actual eye of some kind. Similarly, then, when one perceives yellow one is sensing in a yellow manner, or yellowly. Our perception should be described in terms of adverbial modifications of the various verbs characteristic of perception, rather than in terms of objects to which our perceptual acts are directed.
iii. The Veil of Perception
Indirect realism invokes the veil of perception. All we actually perceive is the veil that covers the world, a veil that consists of our sense data. What, then, justifies our belief that there is a world beyond that veil? In drawing the focus of our perception away from the world and onto inner items, we are threatened by wholesale skepticism. Since we can only directly perceive our sense data, all our beliefs about the external world beyond may be false. There may not actually be any coffee cups or olive oil tins in the world, merely sense data in my mind. However, for this to be a strong objection to indirect realism, it would have to be the case that direct realism was in a better position with respect to skepticism, but it is not clear that this is so. The direct realist does not claim that his perceptions are immune to error, simply that when one correctly perceives the world, one does so directly and not via an intermediary. Thus, things may not always be the way that they appear to be, and therefore, there is (arguably) room for the sceptic to question one-by-one the veracity of all our perceptual beliefs.
Objections i and iii are examples of fallacious motivated reasoning. That one WANTS materialism to be true, WANTS certainty, and WANTS to reject radical skepticism as definitively refuted – these are not actually valid reasons to reject indirect realism. Objection ii presupposes adverbialism, whose weaknesses I will discuss later.
There were two things that struck me in particular in this IEP discussion. First – as a pragmatic empiricist, I would really like to see a discussion of whether, if one applies indirect realism to perception, one can make predictions and pass test cases with it. And I know of three such tests – each of which has shown dramatic success:
• If indirect realism is true, then babies should develop the processing capabilities to build up their internal models over time, while in direct realism this should be “direct”. And babies DO build up this capability over time, with clearly observed tiers of models!
• One should also be able to sometimes see base qualia, and sometimes the digested “perceptions” with complex concepts pre-embedded, under different circumstances. And we also see this. (As an aside – the inability of our introspection to clearly reveal qualia is often cited as an argument against indirect realism. But this is no objection, as this model predicts that qualia will be only occasionally perceivable, and that they will be comingled with the perception of various tiers of post-processed models and concepts.)
• And once we start dissecting the neurology, we should be able to see how our neural structure sorts and binds to arrive at the interim processing states (face detection, smile/frown, etc.) that build up the complex concepts a well-trained neurology provides us, instead of or in addition to qualia. And we have seen these neural segmentation and binning processes in action.
These are EMPIRICAL SUCCESSES, where modern neuroscience, perceptual psychology, and child development all support the indirect realism model. When one is discussing the validity of a model about how our world works, this should be critical information! Yet this discussion mentions only one of these in passing, and dismisses it as a “weak” argument for indirect realism!
Also, the rhetoric of indirect realism not being “naturalized” strikes me as particularly silly. All that is meant is that it is not easily integrated with materialism, which has nothing to do with methodological naturalism! As none of these alternate approaches to perception involve tossing out indirect realism in physics, chemistry, or biology, indirect realism is a common feature of methodological naturalism as applied to every other field. Darwin used indirect realism to infer the theory of evolution, the valence theory of chemical bonding was developed under an indirect realism approach to valences, and the quark theory of subatomic particles is also an indirect realism theory. Science, and methodological naturalism, would still rely upon the indirect realism inference process should one accept any of these alternative views of perception! So what these alternatives call for is a SPECIAL PLEADING EXCEPTION for perception, from this otherwise universal methodological naturalism. IE these alternate theories of perception are NON-NATURALIZED; only the indirect realism model is naturalized! Basically, the entire reason these alternatives are being considered at all for perception is that most of the adherents of the other views think materialism is refuted if one applies indirect realism to perception too!
Of the alternatives to indirect realism, direct realism is the most commonly held among the public at large. This view is often called common-sense realism. It holds that things like chairs, the redness of apples, and the beauty of a sunset are really out there, and directly perceived by us. Common-sense realism is in explicit conflict with science: chairs are not a part of physics, but are basically an invented purpose for objects that conveniently have a useful shape to sit on; apples are not red, as is easily illustrated by looking at an apple under low light or colored light; and the “beauty” aspect of a sunset is created by us and our psychological conception of beauty. In addition to science refuting direct realism, so does imagination. The stop sign in my mind's eye right now is perceived, yet it is not real. Perception is therefore NOT direct! I do not consider direct realism to be at all credible.
Meanwhile, this reference argues that direct realism provides no more certainty against radical skepticism than indirect realism does, AND that there are materialist variants of indirect realism (one of them from Sellars, no less), so neither of the preferences motivating the rejection of indirect realism is even justified.
Phenomenalism – the idea that perceptions are all there is, and ALL physical objects are just mental artifacts – at least passes some of the first-order refutation tests that direct realism fails. But when one tries to make predictions with it – IE that since we seem to experience a shared world, our phenomena should be shared – these predictions of phenomenal universality fail pretty much utterly. “Redness” is not perceived by the color-blind, and the beauty of sunsets has a variable phenomenology. Phenomenalism was a popular view for decades, but it has almost disappeared due to failed predictions.
Adverbialism and intentionalism are both part of what I consider a grossly mistaken linguistic turn in philosophy in the middle of the 20th century, in which what one can say about the world was considered, somehow, to define how the world really is. Language as a precondition for perception can be fairly easily refuted by test, because most perception is non-verbal – as you can easily illustrate. Look up from your screen, and just look around. THEN specify, in words, EVERYTHING you have just seen! How many pages of type would be required to describe even a single perspective from a stationary POV? And did you just perceive all of this? I hope you realize the answer is an unquestionable “yes”! Yet how much of that scene translated to words in your head while you were looking? 1%? Less? 0%!!!? A linguistically defined theory of perception is simply wrong.
Disjunctivism posits that “real” perceptions are different in kind from illusory perceptions (like my imagined stop sign), even if we cannot tell them apart. As such, it is an untestable assertion of dogma, and is pseudoscience, and simply wrong.
Of the alternatives to indirect realism, the one with the most current vigor is cognitive externalism. There are a variety of very interesting ideas in cognition that are based on aggressive approaches to externalism, and a hint of this can be drawn from this IEP page. A link that I have unfortunately lost provided what I found to be a very useful understanding of what cognitive externalists were doing. It described them as starting philosophy “in the middle”, with humans interacting with the world. Humans, world, interaction, and cognition become starting premises from this “middle” point, and one can then build up to a theory of science, and down to a fundamental metaphysics, from there. But because of the extended nature of that middle-state interaction, this approach does not neglect the weak boundaries of human, world, cognition, etc., the way foundationalist or reductionist thinking does.
I applaud the insights that extended cognition thinking offers about how many of our mental artifacts, such as language, are highly interactive, and I agree with its general anti-reductionist attitudes. However, the “start in the middle” approach appears once more to be based on motivated reasoning. Starting from the bottom tends toward dualism (or reductive physicalism, or idealist phenomenalism). Starting from the middle – IF one specifies the middle entities one starts with “properly” – can lead to extended non-reductive physicalism, as the extended cognition advocates mostly conclude. But if, rather than “humans, cognition, world, and interaction”, one started with “world, self, other minds, reasoning, and interaction” as one's initial pragmatic realities, then one would plausibly instead arrive at non-reductive InteractiveTriplism, rather than non-reductive materialism. So while extended cognition leads to some useful insights about cognition itself, and strengthens the case for non-reductive worldviews, the approach used to argue for it once more looks like an unjustified “wish it were so” effort to find a special pleading rationalization to defend physicalism.
In addition to the online encyclopedias, I also reviewed two recent, highly regarded introductory books on the philosophy of science, to see if they offered additional insights into the sorts of questions I am interested in on metaphysics, and how to infer what might be real using reasoning and evidence. These were Samir Okasha's Philosophy of Science: A Very Short Introduction, and Peter Godfrey-Smith's Theory and Reality. Okasha's is much shorter, a quicker read, and gives a good overview of the history and current debates in philosophy of science. Godfrey-Smith covers more subjects, in more depth, and is also very clearly written.
Neither book proved to be of much use in clarifying the issues I have been trying to address here. Okasha DID dedicate one of his chapters to realism/anti-realism. But that chapter focused on realism vs. instrumentalism – and both are consistent with the hypothetico-deductive falsificationism I have outlined here; they just differ in the degree of trust one can place in the conclusions drawn. Godfrey-Smith, in his longer work, dedicated multiple chapters to realism and a variety of alternatives. However, he was more of an advocate than Okasha, and the POV he advocated was a version of realism in which he supported all of: common-sense realism, scientific realism, empiricism, methodological naturalism, and metaphysical naturalism. In the process, he papered over the conflicts between these views – and as the details of these conflicts are particularly significant for the issues I am interested in, his otherwise excellent reference was not of much use in this project.
However, I can use these identified shortcomings in Godfrey-Smith to elaborate on these issues myself, in ways that may help highlight issues of interest.
I mentioned earlier the conflict between scientific realism and common-sense realism. Common-sense realism holds that our medium-scale knowledge and experiences reflect a “real” world. This is a world in which apples exist, and are red and solid; wet things are shiny; one can be virtuous, or morally flawed; someone can be a friend; and calendars specify time. Scientific realism is often presented as physical reductionism – in which apples are just an assemblage of elementary particles, mostly filled with void and fields in between; color is just a sensation ascribed to detecting a certain wavelength of reflected photons; shininess is the way coherently reflected light has a sharp angular peak and rapid drop-off; abstract objects like virtues and morality do not exist; and consciousness and any emotive feelings we have are an irrelevant byproduct of chemical reactions. Science need not be reductionist, and the dismissals of morality, emotions, and abstract objects could be taken out of the above list, which would decrease the mismatch between common-sense and scientific realism, but still leave a huge gap. I consider science to be committed to the non-reductive scientific realism I just described, which requires tossing much of common-sense realism.
A reference which gets into this issue indirectly is Physicalism by Daniel Stoljar. Stoljar devotes most of the book to trying out, then refuting as inappropriate, one definition of physicalism after another. Ultimately, Stoljar rejects physicalism as an inappropriate remnant view left over from the 19th century, when all the matter we knew of was similar to the macro-scale objects we manipulate in everyday life; physicalism is best understood as the thesis that all matter is like macro-scale solid matter. He concludes in his final chapter that physics has revealed the world to work so differently from our macro-scale intuitions that physicalism is simply false, and that the efforts to preserve the doctrine by recasting it to include the bizarre discoveries of modern physics have robbed the term of any useful content. This may not be a consensus view of physicalism, but if one substitutes “common-sense realism” for physicalism in this summary, there would likely be few philosophers or physicists who would argue with it. Common-sense realism is falsified by modern physics – and Godfrey-Smith's efforts to paper this over do a disservice to this question.
The other two conflicts of note are between metaphysical naturalism – which is basically the assertion of physicalism as a presupposition – and both methodological naturalism and empiricism. Empiricism starts with observations, which, foundationally, are our qualia. But the foundationalist empirical methodology, when applied to perception, tends to strongly support a dualist view – contrary to physicalism. And the presupposition of a particular metaphysical conclusion entails an unfalsifiable approach to doing science relative to metaphysics, which would make metaphysical naturalism a pseudoscience view, per the boundary definitions of methodological naturalism.
Based on my understanding of these points, one can be a scientific realist, an empiricist, and a methodological naturalist, with no conflicts. But if one accepts these POVs, then one cannot be a common-sense realist, nor may one ASSUME metaphysical naturalism. Physicalism could be a CONCLUSION from an investigation, but it cannot be a starting presumption.
What do you all think? Have I spelled out a consensus view of how science can and should work, and have I thoroughly and fairly explored the alternatives?
Well, this is a 9.5-page OP – I hope you folks brought your reading glasses today!
B&B orig: 7/30/19