I've just finished reading Karl Popper's The Logic of Scientific Discovery (including all of the "starred" appendices), so now is as good a time as any to write about two responses to Popper's ideas that have been on my mind lately. Let me start by giving a caricature of Popper's doctrine of falsificationism. The basic idea is that scientific theories are bold conjectures about the physical world, which must then be subjected to stringent empirical tests. Any theory which fails such a test must be discarded, and a new bold conjecture must be put forth. This view of scientific methodology avoids many of the pitfalls of inductivist thinking (such as the verificationist approach of the logical positivists), but it suffers from its own set of problems. I'll list some of the more glaring ones here. First of all, it would be unwise to throw out a theory which has failed an empirical test unless there is a comparably good theory available that has yet to fail any empirical tests. In other words, just because a theory is "wrong" doesn't mean it is not better than nothing. Secondly, any empirical test tests not just a single theory but rather a whole set of theories (which may or may not be related to each other), along with various other assumptions and approximations that must be used to relate the raw empirical data to the prediction made by the theory. If the prediction fails to match the empirical data, it is not at all clear that the fault lies with the theory that is supposedly being tested. Finally (for this essay - this is by no means a comprehensive list of problems with Popper's doctrine), naive falsificationism contradicts the history of science. Every theory that we hold today has been "falsified" at some point, and it seems likely that any future theory will suffer the same fate. Clearly we are not willing to throw out all of our current science because a few contradictions have been found.
One thing that was interesting for me in reading The Logic of Scientific Discovery is that Popper seems to have been quite aware of most, if not all, of these issues. In fact, he says very little about falsification itself. His main concern is to define falsifiability, and to show how the falsifiability of a theory can be used as a criterion for demarcation between science and pseudoscience. It seems clear from reading the book that Popper understood that falsification was not a simple matter. Regardless of what Popper thought, I'd like to analyze a basic assumption that I think would have to underlie the naive falsificationist view (if anyone ever actually held this view). The assumption is that there is a universally correct theory, a theory whose predictions would be correct in all circumstances. If this were true, then any theory which made a prediction that failed would clearly not be that universally correct theory, and could thus rationally be discarded in hopes of reaching that "final theory." (Of course, this assumes that our empirical test was valid and that all the information that was used, along with our theory, to make the prediction was accurate.) I have argued in a previous essay that such a final theory is impossible. All theories are, I believe, approximations, and all approximations are valid only in certain situations.
I would like to discuss two attempts to remedy some of the problems with naive falsificationism. (I'm sure there have been attempts other than these, and certainly many philosophers of science have been quite content to discard Popper's ideas altogether, but I'm going to focus on these two attempts to retain Popper's basic approach.) The first approach I want to discuss is well known: Imre Lakatos' "methodology of scientific research programmes." The second is, I think, less well known: Thomas Brody's concept of the "scope" of scientific theories. Actually, I am personally more familiar with Brody's ideas because I have read his work, while I have only read descriptions of Lakatos' ideas written by others (I'll fix that soon, I hope - I have at least a few of Lakatos' key papers in my library). I'll describe Lakatos' idea in the remainder of this essay. In my next essay I'll describe Brody's approach and try to analyze the similarities and differences between these two views.
Lakatos conceives of research programmes as conglomerations of theories and assumptions. As noted above, no empirical test can probe a single theory in isolation; it tests a whole group of theories (this is often referred to as the "Duhem-Quine thesis"). Lakatos realized that, in the face of this problem, scientists must decide which piece of the collection of theories and assumptions relevant to the empirical test gets falsified and which pieces are still considered valid. Lakatos proposes that each research programme has a "hard core" that is considered almost sacred. The hard core consists of the fundamental theories that scientists will simply refuse to abandon, even in the face of an apparent falsification. Surrounding this hard core of theories is the "protective belt." The protective belt consists of lesser theories and a wide variety of assumptions to which scientists feel less compelled to cling. If a prediction fails to accord with the empirical test, then this failure will be blamed on some element of the protective belt rather than on one of the theories that comprise the hard core. Thus the protective belt serves to shield the hard core from falsification. (Note that this is exactly contrary to what Popper says we ought to do with our theories, since he claims we should NEVER shield them from falsification but rather subject them to the most extreme tests.) This is somewhat similar to Poincaré's distinction between principles and laws. Principles, for Poincaré, are true by convention and cannot be subjected to empirical tests, while laws are empirically testable.
Lakatos' approach does appear to solve the problems with naive falsificationism that were mentioned above. It explains why, for example, we did not abandon Newtonian mechanics and gravitation when the predicted location of the planet Uranus was found to disagree with observations. Newton's theories were part of the hard core (at the time), while our knowledge of the objects that populate the solar system was part of the protective belt. Scientists were more willing to postulate an unknown planet than they were to modify or discard Newtonian theory. This turned out to be a wise choice, and the discovery of Neptune soon followed. Nevertheless, there are some problems with Lakatos' ideas as well. It seems very problematic that we should protect our theories from falsification. Indeed, how can theories that reside in the hard core ever be found false (if indeed they are false)? I suppose one can claim that repeated falsifications, even when the protective belt has been modified to account for past problems, might begin to weaken the hard core, and eventually some portions of the hard core might "slip" into the protective belt. Perhaps the rise of quantum mechanics can be viewed in this way, although this view does not seem to apply to the development of relativity (Michelson and Morley provided the repeated falsification, but it seems that Einstein did not even consider their result in developing his special theory).
There is one issue I have wondered about, and which Lakatos may have addressed although I am unaware of it. The issue is this: must the hard core and protective belt be fixed at any given time? Or can theories and assumptions move from the hard core to the protective belt, and vice versa, depending on what kind of prediction we are making? I discussed in my previous essay how the validity of approximations depends on the use to which they are put. This seems relevant to the idea of defining a hard core as well. In certain contexts we might feel that a theory is to be regarded as unquestionably true, while in another context we might consider the theory subject to falsification. One could look at Newtonian mechanics that way. I think any physicist would be unwilling to accept that a cannonball moved in a way that contradicted Newtonian mechanics. On the other hand, the same physicist would quite willingly admit that Newtonian mechanics is falsified by the motion of an electron. Now if one takes the view that science should strive for a final theory that works in all possible contexts, then this idea wouldn't make much sense. If falsification in any context implies total falsification (i.e. we declare the theory "wrong" and discard it), then it would not make sense to consider a theory falsifiable in certain situations but not in others. However, if we accept that scientific theories must always be approximate and provisional, then one might be willing to move a theory from the hard core to the protective belt (or vice versa) based on the context of the prediction.
I'll leave my discussion of Lakatos' research programmes for now, because I think Brody's concept of scope can illuminate some of these issues. I'll tackle that in my next essay.