CHAPTER TEN
Theory and Reality
Evolution is only a theory. It is not a fact.
—State of Oklahoma, 2003, House Bill 1504 (Isaak, 2005)
JUST A THEORY
“But isn’t evolution just a theory?” It is not unusual to hear such a question when the subject of evolution arises. It is an interesting question, as much is implied behind it. First, of course, it is largely intended as a rhetorical question and hence not a real question but an implicit statement: “Evolution is just a theory.” If we remove the word just it becomes a relatively trivial statement, as it is common knowledge that evolution is a biological theory. “Just” is clearly an important word in this statement, then. It implies a lack of something, a “mere-ness,” being less than in some way. Less than what? When prodded, the answer provided is that it is merely a theory rather than a “fact.” The word fact carries a lot of weight and holds an elevated sense of epistemic significance, whereas the word theory for many people seems to hold little epistemic significance. Part of the problem is the equivocal nature of the term theory in colloquial English. The term is used in various fields and studies and likely takes on a distinct definition within each. For some reason the epistemically weaker of these senses seem to stick with many in the nonscientific world, leaving them to conceive of scientific theory as nothing more than speculation, a guess, an unsupported “just-so” story.
On the other hand, “facts” are seen by many as epistemically superior, as true in an absolute sense. Of course, many do not recognize that “fact” is an equivocal term in our language as well. One interesting difference in meaning is that at times it is understood, as alluded to earlier, as referring to that which is true and actually exists. At other times it can be understood to refer to what is merely alleged to be true and to exist but in actuality may or may not. Even if we univocally accept the former sense as the meaning of “fact,” other problems arise. To say that a fact is true or refers to what actually exists is merely an analytic truth; it is true by definition. What such a claim elides is the appropriate or adequate means of determining what does and what does not qualify as a fact. Without such an understanding, the accusation that a theory is just a theory and not a fact is vacuous.
But not only vacuous—such an accusation is also misguided, as it appears to misconceive or misinterpret the meaning of terms like fact and theory within the contexts of scientific investigation and scientific knowledge. Within the context of science, those things usually referred to as “facts” are actually not very interesting. “Facts” are typically observations. They are particular and limited: On such and such a date at such and such a time water boiled at 100°C at sea level. A fact such as this tells us nothing about water in general or how it will behave at other times in other places. Nor does it tell us about nature in general, the process of evaporation, the relation of heat and air pressure, and so forth. These more general subjects are the fruitful and interesting aspects of scientific knowledge and investigation. Merely recording a particular fact is far less interesting and fruitful. A fact is at best, then, a building block. It has value within the context of other facts, reasoning processes, and theory. Yet by itself it has little significance.
A theory, on the other hand, is the boldest, most powerful statement made in science. Theories take us beyond the particular to the general, beyond empirical phenomena to underlying processes, from a plurality of experiences to a unified understanding. They make sense of so-called facts. They are the expressions of what most people think of when they think of science. They provide access to that supposedly arcane, almost mystical world that science is to many lay people. Some might say theories are what science is. Yet, the difficulties some students might have with “theory” are not wholly unfounded. Within science and the philosophy of science, theory is not a completely univocal or unproblematic concept. It both carries certain inherent problems and raises further problems within the study and practice of science. What follows, then, is a close look at the concept of theory as it is used in science and the problems and questions it raises.
THE EPISTEMOLOGY OF SCIENTIFIC THEORIZING
Another misconception that leads many to underestimate the epistemic value of scientific theories is the confusion and conflation of the concepts of theory and hypothesis. Typically, a hypothesis is put forward by a scientist as a likely answer to a question about the nature of the world. According to Peirce (1955), hypotheses are inferred through the employment of abductive reasoning. Once posited as a likely answer, then, a hypothesis is tested: phenomena that would occur given the truth of the hypothesis are deductively inferred. Whether those phenomena occur is empirically observed and recorded. From these observations, further inferences can be made. One occurrence of an expected phenomenon does not transform a hypothesis into a theory. There is a problem of precision here, because one cannot specify how many affirmative experimental results move a hypothesis to the realm of theory—due to the problems and limitations of inductive reasoning as outlined in Chapter 8. Yet, we can leave it vague and say that repeated affirmative results eventually transform a hypothesis into a theory. Once at that point, the theory takes on a more powerful epistemic authority. Yet, again due to the limits of inductive reasoning, it never has absolute epistemic authority. This limited epistemology leads to further problems and questions, problems and questions that extend some of the questions regarding unobservable entities from the previous chapter and lead from problems of epistemology to problems of metaphysics.
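One simple way to picture this ever-incomplete confirmation is probabilistically. The short sketch below is an illustration only: the prior and the likelihood values are assumed for the example, not drawn from any actual scientific case. Each affirmative experimental result updates confidence in a hypothesis via Bayes’ theorem, and confidence climbs with repeated confirmation yet never reaches 1—a toy model of why inductive support, however extensive, never yields absolute epistemic authority.

```python
# A toy model of repeated confirmation as Bayesian updating.
# The numbers are illustrative assumptions, not scientific data.

def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """One application of Bayes' theorem: returns P(H | E)."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

confidence = 0.5  # initial credence in the hypothesis H
for trial in range(1, 11):
    # Assume each predicted phenomenon is likely if H is true (0.9)
    # and unlikely, though possible, if H is false (0.2).
    confidence = update(confidence, 0.9, 0.2)
    print(f"after affirmative result {trial}: P(H) = {confidence:.6f}")

# The printed credences climb toward, but never reach, 1.0: no finite
# run of affirmative results confers absolute epistemic authority.
```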
As the boldest statements of science, theories can be seen to do the hard work of science. It is with theories that we find the purposes of description, prediction, and explanation (noted in Chapter 7) enacted and fulfilled. Prediction was more broadly covered in Chapter 8. Explanation is covered in Chapter 11. Hence, for the rest of this chapter, we focus on theoretical description.
THE METAPHYSICS OF SCIENCE
One of the commonly accepted purposes of science is to describe the world, to tell us what is really there. On the one hand, this function seems rather simple. By simple empirical observation, the world can be described. Yet, descriptions of the unobservable world have been central to scientific knowledge since the beginning of modern science and have progressively taken a larger role ever since—from Newtonian gravity to viruses, electrons, and the Higgs boson (the so-called God particle). The unobservable entities referred to in the previous chapter are also often referred to as theoretical entities, as their existence is predicated on the acceptance of a certain theory or theory structure. Deny the theory, deny the entity. But of course, as noted again and again here, modern science is based on an empirical outlook. To posit the existence of that which is not empirically verifiable seems at odds with a strict empiricism. This tension has led to a number of metaphysical views on this supposed descriptive function of science. These views can largely be placed into one of two general camps: scientific realism and scientific antirealism. We will look at these views in general, the reasons for and against them, and a few more specific views within these two general camps.
SCIENTIFIC REALISM
The term scientific realism seems straightforward enough but turns out to be more complicated than it might at first seem. What it means first is that scientific theories are true. This is an epistemic claim and thus one about statements: the statement of an accepted scientific theory is “true.” Those scare quotes suggest that we need a closer look at this concept of truth. Philosophers have proposed various concepts of “truth.” The one typically implied in scientific realism is the correspondence theory of truth, which means that truth is defined as a correspondence between the meaning of a statement and a state of the world. If a statement accurately reflects (corresponds to) a state of the world, then it is a true statement. Thus, if the statement of a theory corresponds to a state of the world, it too is true. If the germ theory of disease, which states that certain illnesses are caused by microscopic organisms, is true, then the world must be such that certain illnesses indeed are caused by microscopic organisms. This correspondence then leads to two other assertions as part of the thesis of scientific realism: scientific theories correctly describe what observable and unobservable things there are, and scientific theories correctly describe how those things are related. In other words, if genetic theory is correct, there are indeed structures called “genes,” which determine physical and behavioral traits of living things and are responsible for the inheritance of these traits from previous generations. Further, if genetic theory is correct it also accurately describes the relations between genes, DNA, and other biochemical structures. If atomic theory is correct, then matter is in fact composed of tiny entities called molecules, which are composed of more basic entities called atoms, which are composed of other, more basic entities, including protons, neutrons, and electrons. If gravitational theory is correct, then there truly is a force among material objects that explains why unsupported objects fall to the earth and the fall of these objects can be predicted by the inverse square law of gravity.
But the complications do not end there. As there are so many theories and so many have fallen by the wayside throughout history, the scientific realist cannot accept every theory as fulfilling all these metaphysical functions. Only the strongest or most mature theories can be accepted as fulfilling these metaphysical commitments. Of course, as noted earlier, the division between hypothesis and theory is unclear as it is; to try to divide immature and mature theories simply furthers such obscurity. Further, the claims of scientific realism may not be consistent with those of what might be called “commonsense realism.” Commonsense realism (or naive realism) might be described as the way in which most people (especially those who happen to be nonphilosophers and nonscientists) understand the world to be. According to commonsense realism, there is a world existing “out there” independently of any person’s mind, perceptions, or beliefs that is at least roughly equivalent to our understanding of how the world is. Some individuals might misperceive or misunderstand the world from time to time, but generally we have direct access to the world through our senses and our minds. So we can know that the world is as we understand it to be. The problem is that many of modern science’s claims have turned out to be inconsistent with “common sense.” Prior to Copernicus and Galileo, common sense told people that the sun revolved around the Earth. Even as the scientific world began to accept the heliocentric view, much of the nonscientific world still accepted geocentrism, because it seems like the sun revolves around the Earth. To assert the other way around seems counterintuitive. Yet, by our point in history heliocentrism (at least as far as our solar system goes) has been absorbed into what we accept as common sense. But scientific realism would still affirm claims that would appear counterintuitive to commonsense realism. Common sense tells me, for example, that the desk on which my laptop presently rests is a solid object. Modern physics, however, tells me that it is mostly empty space and the apparent solidity of the desk is due to forces between atoms and between subatomic particles. These counterintuitive claims may also contribute to many people’s distrust of science and low estimation of “scientific theory.” It is still possible, however, that counterintuitive claims of science might be absorbed into commonsense realism, resolving the tension with but a short cultural lag.
No-Miracles Argument and Continua
The most common argument for the thesis of scientific realism is what is called the “no-miracles argument” (Hacking, 1983; Smart, 2008). This argument points to the obvious and prodigious successes of science: from technological achievements (flight, space exploration, the eradication of smallpox and near eradication of many other diseases, etc.) to the predictive powers of science to the basic ability to manipulate nature. According to this argument, the best explanation for why science has been so successful is that it is, in addition to being predictively and manipulatively powerful, descriptively accurate about both the observable and the unobservable world. The implication of the name of the argument is that if science did not accurately describe the world, its successes would seem to be a miracle. If science were wrong about the way the world is, it would not be as successful as it clearly is. This argument is a form of abductive reasoning or “inference to the best explanation.” That is, of the possible explanations for the success of science, the claim that it accurately describes the world (the thesis of scientific realism) seems the most plausible, that is, the best explanation. Advocates for this argument attempt to strengthen it by claiming that scientists themselves use this form of argument to justify the truth and accuracy of their own individual theories. Thus, the use of this argument regarding science in general reflects this more piecemeal use by scientists themselves. However, that reasoning may be its own undoing. It becomes a question-begging or circular argument. One cannot support a reasoning process with the same reasoning process, merely at a different level of analysis. Of course, this question-begging reasoning was offered merely as extra support for the argument. The no-miracles argument can still be addressed on its own. On its own, its status as an abductive argument must be taken into account. Abductive reasoning is presumptive; it infers to the best or most plausible explanation. And just as occurs with its use in hypothesis creation, such presumption requires further support and investigation. In other words, to further establish the abductive conclusion of scientific realism, such reasoning must occur again and again. That is, the success of science must continue, and must continue to imply scientific realism, leaving us with a thesis supported by abductive reasoning that is itself supported through inductive reasoning—and leaving the critic of scientific realism with much room for doubt.
Another argument in favor of scientific realism raises the question of the distinction between observables and unobservables explored in the previous chapter (Maxwell, 1962). This argument challenges this supposed distinction and affirms rather a continuum between that which can and that which cannot be observed. As we move from small, difficult-to-see items, such as a grain of salt, to microscopic entities such as bacteria, to smaller and smaller entities, there is no principled point at which one can assert a clear distinction between that which is observable and that which is unobservable (i.e., theoretical). Any line one draws would be arbitrary, not based on a principled difference. The limitation is not an ontological one in which the supposed unobservable is of questionable existence, but a problem of human perception. If we had evolved with different sensory organs, we might see the world differently, see things we do not now see, such as viruses, electrons, and so forth. We may have evolved to see x-rays or hear the infrasonic calls of elephants. In such cases, the world would appear quite different to us (and the word infrasonic would be defined somewhat differently), but the world would actually be the same. It does not make sense to limit reality by the limitations of human sense organs.
This line of reasoning also leads to questioning skepticism about observation prosthetics like microscopes, telescopes, and electron microscopes. A continuum can be noted in such cases also: from the simple magnifying glass (the images seen through which are not typically doubted), to the microscope, to the electron microscope, to the cloud chamber. Again, the argument asserts no clear line between detecting what is observable and really there and detecting what is merely theoretical and possibly not there: if we accept the reality of the magnifying glass image, we must accept these technologically produced images all the way down the line. The problem of basing these arguments on the imputation of continua rather than a clear, principled distinction is that our world is rife with such continua, yet we apply practical, meaningful distinctions nonetheless. The growth from childhood to adulthood is a continuum, yet the distinction between child and adult is a useful and meaningful one. Even the journey from life to death may be seen as a continuum, yet that distinction clearly is meaningful. Although in these examples there may be some marginal cases (adolescents, brain-dead persons, and vegetative patients), these marginal cases do not destroy the distinction that is so useful and meaningful in the great majority of cases.
Structural Realism and Entity Realism
One form of scientific realism that has received much attention over the last couple of decades is structural realism (Worrall, 1989). This form focuses less on the existence of things or entities and more on the existence of the relations between things, especially as expressed in equations like the inverse square law of gravitation. According to the view of structural realism, often in science theoretical entities do not “survive” changes in theory due to new data and new experimentation, but the structural relations between them do. Thus, it is epistemically more justified to assert the existence of these structural relations rather than the supposed entities between which they are at any point in time said to exist. There is some practical support for this view, as such relations (especially in the form of mathematical equations) have become central to many sciences, as well as the primary tools of further investigation. However, this support might also be a weakness, as this view might seem better suited for sciences expressed in terms of mathematics (such as physics or chemistry) than for less mathematically oriented sciences (such as biology). Furthermore, this view might be weakened by examples of scientific progress in which not just theoretical entities but theoretical structure changes as well. In many cases, theoretical structure may endure through changes in theoretical entities, but if theoretical structure also changes in response to new evidence and new theories, the reality of structure may become just as uncertain as that of entities.
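To see what “structure” means here, consider the example just mentioned stated as an equation (a standard formulation of Newton’s law of universal gravitation, supplied for illustration):

$$F = G\,\frac{m_1 m_2}{r^2}$$

where $F$ is the attractive force between two bodies, $m_1$ and $m_2$ are their masses, $r$ is the distance between them, and $G$ is the gravitational constant. The structural realist’s wager is that a relation of this form—force diminishing with the square of the distance—is a better candidate for enduring reality than the entities (bodies acted on by a gravitational “force”) in terms of which the relation happens to be interpreted at a given time.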
Whereas structural realism posits the existence of theoretical structures rather than theoretical entities, the theory of entity realism takes the reverse position. This theory is most identified with the work of philosopher Ian Hacking (1983, 1982/2000). Hacking does not appeal to the no-miracles argument but takes a more pragmatist position—in the philosophical sense of that word. Our belief in the existence of theoretical entities should be based not on logical inference but on scientists’ use of these entities as instruments and tools: “engineering, not theory, is the proof of scientific realism about entities” (Hacking, 1982/2000, p. 199). Hacking (1982/2000) uses the example of an electron gun named PEGGY II. This instrument of physics uses “various well understood causal properties of electrons to interfere in other more hypothetical parts of nature” (Hacking, 1982/2000, p. 193). It sprays electrons in order to uncover new phenomena. Hacking argues that this standard of belief in the reality of things is employed in the realm of observable entities also. We believe in the existence of observable entities not simply because we see them (directly) but “because of what we do with them, what we do to them, and what they do to us” (Hacking, 1982/2000, p. 193). They have practical effects on our lives and projects. Analogously, because electrons have such effects also, through the use of such devices as PEGGY II, we can similarly believe in their existence. One limitation is that though we might be able to say we can believe in the existence of unobservables like electrons, we cannot say what they are like. We can note their “well understood causal properties” but nothing about the nature of the electrons (or other unobservables) themselves. But is this not part of what we usually mean when we note the existence of an entity, some understanding of the essence of the thing? Also, are there entities we cannot manipulate, use as tools and instruments toward the acquisition of further knowledge? It seems this understanding of realism would leave those out as a matter of principle, though they may well exist.
ANTIREALISM
The no-miracles argument and the specific theories of realism still leave room for doubt—room for positions that might be called antirealism, which hold that science is unable to describe the world accurately or that we are unable to have confidence that science is ever able to do so. First, let us survey a couple of general arguments raised against scientific realism, particularly in response to the no-miracles argument. The no-miracles argument points to the success of science as evidence that scientific theory accurately describes the world. The problem is that so many theories and so many theoretical entities have fallen into the dustbin of scientific history that it seems quite possible that theories and theoretical entities accepted today may as well. This argument, called the pessimistic meta-induction, is an induction in that it infers from the past “mistakes” of science to the claims of today. It is a meta-induction because it is an inference about the specific inferences of science itself regarding theories and theoretical entities. And it is pessimistic because it raises doubt about the truth of theories and existence of theoretical entities largely held to be true or existing at the present time. Another general argument against scientific realism is based on the underdetermination thesis, which says that due to the inductive methods of science the evidence for any theory does not fully determine (prove) the truth of that theory. This leaves open the possibility of the same phenomenon being explained by more than one theory, in which both (or all) theories are supported by empirical evidence of equal quality and quantity. This may sound odd, but consider an analogy from criminal law. A person is murdered. Shot once. The prosecutor has two suspects. There is not enough evidence to demonstrate that one rather than the other committed the crime. It is theoretically possible that the prosecutor could choose to simultaneously but separately prosecute both suspects and in fact convict both suspects. Given the epistemic standards of criminal law, the prosecutor need only prove guilt “beyond a reasonable doubt.” It is possible that such a level of evidence could be brought against both suspects. The contention here is not one of conspiracy: that one actually shot the gun and the other was involved in some way. The contention, a rather impossible one, is that in the first suspect’s trial, this person pulled the trigger and killed the victim, and in the second suspect’s trial, this person pulled the trigger and killed the victim. Now, certainly it is not true that both suspects performed the same act, but truth in that sense is not what criminal trials are about. They are about “truth” as defined by the epistemic standard of “beyond a reasonable doubt.” And it is possible that there is enough evidence to prove the guilt of each and both of these suspects to that standard. Similarly, in science it is possible that there may be more than one theory related to a specific phenomenon, and that each theory can be proven to the epistemic standards of science to an equal degree, leaving us with possibly inconsistent claims about what reality is.
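The same point can be made outside the courtroom with a toy curve-fitting case. In the sketch below (the functions and data points are illustrative assumptions, not the text’s), two incompatible “theories” agree on every observation gathered so far yet disagree about cases not yet observed; the evidence alone underdetermines the choice between them.

```python
# Underdetermination in miniature: two rival "theories" fit the same
# observations exactly but diverge where no data have been collected.
import math

def theory_a(x: float) -> float:
    """First candidate theory: a simple linear law."""
    return 2.0 * x

def theory_b(x: float) -> float:
    """Rival theory: agrees with theory_a at every integer x, since
    sin(pi * n) = 0 for whole numbers n, but differs elsewhere."""
    return 2.0 * x + math.sin(math.pi * x)

# Suppose every measurement made so far happens to fall at an integer x.
observations = [(float(x), 2.0 * x) for x in range(6)]

for x, y in observations:
    assert math.isclose(theory_a(x), y)                # fits the data
    assert math.isclose(theory_b(x), y, abs_tol=1e-9)  # fits equally well

# Both theories "save the phenomena" observed so far, yet make
# incompatible claims about an unobserved point:
print(theory_a(2.5), theory_b(2.5))  # 5.0 versus (approximately) 6.0
```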
Attempts to get past the underdetermination thesis typically appeal to standards other than evidence for choosing between theories to find the “correct” one, factors that may make one theory appear to be more plausible. One of these standards might be the coherence of the theories themselves; one of the theories may appear more coherent than the other. A less-coherent theory may have aspects of it that are not as yet fully explained or understood. It may not all quite fit together yet and may seem less plausible. A similar standard is that the better theory will be more coherent with existing theories. If we presume that accepted knowledge and theory comprise a solid knowledge base, the competing theory that fits better with this knowledge base will seem more plausible. A third standard is the employment of Occam’s razor, the principle that the simplest explanation is usually the best, or that we should not needlessly multiply entities (also known as the principle of parsimony). The theory that explains the phenomenon with the fewest theoretical processes, steps, or entities would seem more plausible. As commonly invoked as many of these plausibility standards are, the general problem with them is that plausibility is a weak claim to knowledge and truth. Also, each has been contravened at some point in the history of science. Regarding the coherence of a theory, it is entirely possible that a seemingly incoherent theory can become more coherent following future study and discovery. Regarding the coherence of a theory with accepted theories, we have to assume that our present theoretical knowledge base is correct, ignoring the possibility of a future revolution (paradigm shift) that might overturn this knowledge base. And with Occam’s razor, the simpler theory may in fact neglect important aspects of explanation that the more complex theory takes into account, which could then be the more accurate, truer theory. Ultimately, none of these are the kinds of rational, empirical standards that are typical of scientific justification. They appear to be much weaker standards: rules of thumb or possibly standards based on mere preference.
THE PROBLEMS OF EMPIRICISM
Locke, Berkeley, and Hume
It might seem a matter of common sense that an empirical position on epistemology would automatically produce a realist position on metaphysics. Yet, things are not that simple, and common sense may once again lead us astray. Consider again the problems raised in the previous chapter regarding the observation of unobservable entities and empiricism. Those problems extend to these questions of metaphysics. If we look back to the classical empiricists and the more general question of metaphysical realism (as opposed to the narrower question of scientific realism), we see an interesting variety of views. John Locke held a form of naive realism. Epistemologically, his empiricism led him to a representational theory of knowledge. This means that what we know directly are “ideas” in the mind. The sensible world impresses itself on our minds through the senses, and it is these impressions and ideas that we “know.” These ideas, because they were impressed on our minds by the external world, represent this external world. Thus, we know the ideas that are the contents of our mind; these ideas represent the real world; transitively, we have knowledge of the real world. The problem is, as you should be able to predict from reading the previous chapter on observation, that this position merely presumes that our ideas are accurate representations of some real, existing objects external to our minds.
George Berkeley, on the other hand, rejected this presumption. We know, directly, the ideas in our minds, but we cannot know that these ideas are representations of some other things outside our minds. All we have, according to Berkeley, are our ideas. There is nothing outside our minds. According to Berkeley, then, there is no such thing as matter. Everything that exists is mental: either an idea or an entity that has ideas. Yet, how is it that we seem to have similar ideas? If we all look at a tree, we all see a tree. If there is no material tree causing this idea in each of us, how is it we all have the same idea of a tree? These ideas, says Berkeley, must come from God. According to Berkeley, although the tree is not real as a material object, it is still real. It is real as an idea provided to us by God. The interesting thing here is that he argues away the material world, leaving us with a form of metaphysical idealism: all that exists are mental entities like ideas and minds. Like many philosophers of his era, Berkeley was reacting largely to new developments in science. His religious orientation (in keeping with the theistic thesis here, he was also an Anglican bishop) did not sway him from seeing and accepting the importance of the new sciences. By arguing away matter and replacing it with divinely produced ideas, he was able to keep the theories and laws discovered by science (relations between entities) without the existence of matter (material entities). In addition, he made room for God. As science progressed through the Enlightenment, it may have appeared to some as replacing God. Much of what was once explained by God came to be explained by physics and other modern sciences. Mechanistic physics and qualities inhering in matter, such as inertia, seemed to push God out of the picture. By removing matter and replacing it with God-produced ideas, Berkeley brought God back to the center of the modern worldview. Hence, Berkeley does describe a scientific realism of a sort, but it is not a realism that includes a concept of matter as most positions of scientific realism would.
David Hume was far more equivocal on this question than either Locke or Berkeley. He did not uncritically accept the existence of matter, as Locke did, nor did he flatly deny such existence, as Berkeley did. The skepticism Hume demonstrated regarding the use of induction, as outlined in Chapter 8, also led him to be philosophically skeptical of the material world while recognizing the practical necessity of belief in it. Ultimately, Hume did not address these questions of metaphysics much. One must extrapolate from his writings to reach any such conclusions. His skepticism about induction and causality can imply a skepticism about what exists—particularly as one might understand causality as a real force in the universe existent between actual entities. Therefore, although an empiricist, and arguably the most consistent empiricist of the British empiricist school, Hume was most likely agnostic regarding the question of material realism. Metaphysical questions, in general, were not questions he had much concern for. Empirically, we can assert and verify claims regarding our sensations, but to posit a world beyond and independent of our sensations begins to extend one’s knowledge claims past the empirical and into the speculative, metaphysical realm.
Logical Positivism, Phenomenalism, and Instrumentalism
The logical positivists, reflecting the influence of the classical empiricists and of Hume especially, similarly had little regard for metaphysical questions, even those regarding scientific realism. Many logical positivists espoused a view known as phenomenalism (not to be confused with phenomenology), an extreme empiricist view holding that claims about external objects are really not about external objects at all but about our perceptions (the phenomena that exist within our minds). Phenomenalism does not actively deny the existence of an independent, external world but, much like Hume, asserts that all we can make claims about are those sensations we directly perceive. To go beyond that is to go beyond what is empirically verifiable.
Another view that questions scientific realism, similar to that of the logical positivists on this issue, is instrumentalism. According to instrumentalism, theories are not to be evaluated on the basis of their description of reality but on their usefulness as tools (instruments), especially as tools for predicting phenomena in the natural world. Whether theories accurately describe the world as it is, according to instrumentalism, is simply beside the point. If a theory accurately predicts phenomena regarding some aspect of the world, then that is justification for accepting the theory—as useful and validated, if not as “true.” The logical positivists accept much of this but do not qualify as instrumentalists because they recognize more than a predictive function of theories, a function that will be covered later in this chapter. Both positivists and instrumentalists interpreted theoretical claims in a somewhat metaphorical sense. As the existence of theoretical entities like electrons can be neither verified nor completely refuted, statements about electrons are really shorthand, metaphorical statements about the observable effects we theoretically attribute to electrons.
CONSTRUCTIVE EMPIRICISM
One of the more interesting antirealist positions is the view developed by Bas van Fraassen (1980), which he calls constructive empiricism. Note again that we have a form of empiricism that does not lead directly to a form of realism. This demonstrates the important distinction between epistemology and metaphysics, and the presumptions many lay people make about their sense impressions. This view is empiricist in the usual sense of the word: knowledge is gained through sensory experience. It is constructive in the sense that the world that we can know through science is that world that we “construct” through empirical study. But what does it mean to “construct” our world?
Constructing in this sense is an oblique reference to epistemological and metaphysical views influenced by Immanuel Kant. Kant attempted to reconcile the epistemological views of the continental rationalists and the British empiricists by inferring a collection of innate structures of the mind (what he called concepts or categories) through which sense impressions are “filtered” to construct knowledge. He borrowed but amended the rationalist thesis that the mind contains innate ideas from birth. Rather than ideas (e.g., God, the noncontradiction principle), though, the mind is composed of these innate structures, such as causality, unity, and plurality, that allow us to make sense of the sense impressions we experience. But these structures do not have content like ideas; they are formal in nature. From the empiricists he took the idea that without sense experience there is no knowledge—or more specifically, no content to knowledge. That content comes from sense experience. This constructed knowledge corresponds to what Kant calls the phenomenal world. This is the world that we can know through our phenomenal experience. The complication is that there is also another world according to Kant. This is the world beyond our sensations, the world as it is “in itself.” If you hold a pencil in your hand, you can list a host of sensations by which you would say you know the pencil: the yellow color, the smoothness of the paint, the sponginess of the eraser, and so forth. As the empiricists pointed out, these are all phenomena that we experience directly within us. And this is so under Kant’s analysis also, as we structure these phenomena into coherent knowledge through the innate concepts of the mind. This other world, called the noumenal world, is the world as it exists in itself beyond and independent of our perceptions of it. We cannot know this world, particularly as Kant understands what it means to “know.” However, when it comes to the phenomenal world, because the innate structures of the mind are a priori and universal (they do not vary from person to person), we can achieve certainty (especially through the rigor of empirical science) about our knowledge of this world. This is the sense, then, for Kant, that we “construct” the world we experience and our knowledge of it.
Van Fraassen does not attempt to deduce innate structures of the mind or concern himself with a proposed world beyond our sensations. He does, however, accept only a world we can construct through our perceptions of it. He rejects the views of logical positivism and instrumentalism, which, in their denial of or agnosticism about the correspondence of theories to unobservable entities, accept theories only as metaphorical or symbolic of observation statements. He accepts theoretical statements as they stand but does not hold them as “truth-claims.” Put simply (perhaps too simply), when it comes to theories he replaces “truth” with the concept of “empirical adequacy.” To assert the truth of a theory would imply adequate knowledge of a correspondence between the statements of the theory (both observation and theoretical) and a world beyond and independent of our senses. As knowledge is limited by sense experience, such assertion of truth is either fruitless or meaningless. Empirical adequacy, on the other hand, is not asserted as a truth claim but “displayed” (van Fraassen, 1980). It is displayed through empirical investigation. A theory is empirically adequate if “what it says about the observable things and events in this world are true—exactly if it ‘saves the phenomena’ ” (van Fraassen, 1980, p. 12). As an empiricist he allows truth to be predicated of observable things and events, if statements about such things correspond to our observations. A theory must correspond to what we can observe—in the past, the present, and the future. This is what is meant by “saving the phenomena.” Unexpected empirical results or other empirical data that do not fit into the theory raise problems and prevent empirical adequacy. If the theory “saves the phenomena,” then what it says about the unobservable world could be true. Epistemically, that is as much as can be hoped for.
Van Fraassen’s view depends on a distinction between observable and unobservable entities. Because theories refer to unobservable entities, they are not susceptible to determinations of truth, but merely of empirical adequacy. Theories also refer to observable entities, of which truth claims can be made and on which the empirical adequacy of the theory as a whole depends. But, as noted earlier, some philosophers have challenged this distinction, arguing in favor of a continuum between the observable and unobservable that makes any distinctive line drawn between them arbitrarily placed and thus not a rational, justifiable distinction (Maxwell, 1962). Van Fraassen responds to these critics that “observable” is a “vague predicate.” Our world and our language are rife with vague predicates: attributed qualities that, in many cases, are clearly and easily predicated but not so easily in some marginal cases. “Bald” is commonly raised as an example of a vague predicate. The term is often applied to men who are not completely devoid of head hair. Yet how much lack of head hair, how much exposed scalp must there be to appropriately apply the term? There are some men who are clearly and unequivocally bald and some clearly not bald. There are also marginal cases, but these marginal cases do not invalidate the general application of the concept. Even asserting a continuum between the observable and the unobservable, there is a clear difference between observing the moon through a telescope and “observing” subatomic particles in a cloud chamber. Somewhere between those extremes, van Fraassen argues, there is a distinction even if we cannot nonarbitrarily indicate where. Contrary to Maxwell, the fact that human sense organs limit our capacities to sense the world around us is ontologically relevant, according to van Fraassen. For the only world we can assert as “there” is the world we construct out of our sense experiences. He makes a much closer connection (as Kant did) between epistemology and metaphysics than Maxwell (1962) seemed to.
POSTMODERNISM AND REALITY
Perhaps the most radical and controversial metaphysical position on science and scientific theory is social constructionism (sometimes social constructivism). This idea grew out of the broader movement of postmodernism, in philosophy and sociology especially. Postmodernism is a contemporary skeptical movement that challenges modern claims to knowledge, especially knowledge as part of a unified system like science. Postmodernism was given a strong voice in the sociology of science with the advent of the Strong Program in the sociology of knowledge, initiated by sociologists David Bloor (1976) and Barry Barnes (1974) at the University of Edinburgh. In the philosophy of science, Thomas Kuhn was a major influence on postmodern thought—whether he would admit to such an influence or not. His paradigm-oriented analysis of science and scientific change downplayed, if not totally dismissed, the role of rational deliberation and calculation in the acceptance of scientific theory, including the replacement of old theories by new. This analysis suggested the type of epistemic relativism indicative of postmodern thought. Postmodern thought has spread through the fields of the humanities and social sciences since the 1970s but has had much less influence in the natural sciences. Because one of its central tenets tends to be relativism (epistemic and sometimes moral), it has met with much resistance from more traditional and more conservative researchers and theorists, who often find postmodern thought sloppy and vacuous compared to the precision and rigor of analytic philosophy and empirical science.
Social constructionism is a common view found among postmodernists of various academic fields and backgrounds, and one that takes Kant’s sense of constructed knowledge and world as its jumping-off point. But like van Fraassen, social constructionists depart from Kant in refusing to make any claims regarding innate, universal categories of the mind and in disregarding any ideas of a noumenal world beyond the senses. Unlike van Fraassen, social constructionists do not elevate sense experience as the ultimate standard of knowledge. There is among postmodernists and social constructionists a recognition of the problems of observation noted by the likes of Hanson (1958/2000) and recognition of the influence of nonrational (social, cultural, and psychological) factors as illustrated by the likes of Kuhn (1962). The claim of social constructionists is that many entities that have been uncritically accepted in their essence may not really have the essence traditionally associated with them. These supposed essences are influenced strongly by social forces, which skew the way we see and understand the world. Oftentimes these reputed essences function to retain status quo power relations in society. For example, philosopher Michel Foucault famously analyzed such concepts as madness (1961/1988), illness (1963/1994a), homosexuality (1976/1990), and even the basic structure of knowledge (1966/1994b). According to Foucault, these are concepts constructed through discourse, writing, power, and other social and cultural practices, rather than “natural kinds.” Interestingly, just as more conservative commentators resisted relativistic thinking for moral and political reasons, social constructionists often also had moral and political motivations. By demonstrating that previously believed natural entities and concepts are in fact constructed—often by those in power to retain power—and by revealing this constructedness (deconstructing supposed essences), new paths of being are opened up. People are not restricted by false essences meant to define them in a narrow sense. This allows us to question our presuppositions about who is and is not insane or ill and how we treat such people, as well as what homosexuality means and what it means to apply labels of sexuality.
Many, especially critics of postmodernism and social constructionism, assume that to affirm an entity or concept as a “social construction” is to deny its reality. This idea of social constructionism spread through academic disciplines from the 1970s through the 1990s such that it took on the appearance of a fad. And certainly critics like Alan Sokal and Jean Bricmont (1998) made such charges. Nearly everything imaginable was described as a social construction, from gender to nature to quarks to postmodernism itself (Hacking, 1999). Throughout this plethora of applications, though, there was much equivocation. Social construction did not always go “all the way down.” That is, some concepts of social constructionism allowed for the possibility of some independent existence onto which we apply social interpretation. If we change this interpretation we essentially change our concept, though some reality beyond it remains. Or, as Hacking (1999) demonstrates, a concept like child abuse may be based on real, independently existing behaviors. These are behaviors that we aggregate and collect within a humanly constructed concept like child abuse. Indeed, even when social construction goes “all the way down,” simply because something is constructed does not mean that it does not exist. The fact that it is constructed implies that it exists. The difference between more traditional views about naturally occurring, independently existing entities and social constructionism is that under social constructionism entities are conceived as far more plastic and malleable. Even though homosexuality may be a social construction, that does not mean that there are no homosexuals as that term is understood. There is an echo of Berkeley in this: just because our perceptions are not based on sensations of physical matter does not make them unreal. But because homosexuality is a construction of our culture and society, not God or nature, we can change our ideas surrounding it, our treatment of homosexuals, our presumptions about what kinds of people homosexuals are.
In a famous or infamous example of social constructionist analysis, sociologists of science Bruno Latour and Steve Woolgar (1986) observed the laboratory at the Salk Institute. Taking jobs as lab technicians, they acted as anthropologists observing a foreign culture. They noted the negotiations, power relations, rhetoric, and other nonrational factors that went into knowledge construction. The knowledge produced in this lab (and by extension, in labs in general), they attempted to illustrate, was not a result of simple empirical and logical investigation and inference. Rather, it was the result of many psychological, cultural, and social factors that earlier philosophers of science (e.g., the logical positivists and Karl Popper) defined as distinct from science as a concept and science as a practice. Yet, what they intended to show was that these nonrational factors are and have been integral to the creation of what we call scientific knowledge. Furthermore, as social constructionists, they held that such laboratories do not take a reality that is simply sitting there, passive (a collection of brute facts), and draw knowledge out of it. They described, rather, such laboratories as factories in which raw data goes in at one end and an artifact called scientific knowledge comes out at the other end. In Latour and Woolgar’s (1979/2000) terms, reality is the consequence of this process, not the cause of it (p. 204). Perhaps the most interesting aspect of this view of knowledge construction (called Actor-Network Theory) is the diminished role of intentional human action. Humans (scientist-humans, lab-tech-humans, etc.) are reduced to their function within the lab. Even when Latour and Woolgar write of the negotiation and resolution of disputes among scientists that are inevitable parts of the knowledge construction process, this seemingly human behavior is described more in terms of robotic action. All the humans fill their roles like the parts of a machine and out comes knowledge. This diminishment of the human, rational contribution has certainly been met with much resistance and criticism (e.g., Amsterdamska, 1990).