Saturday, September 29, 2007

Physics is a Liberal Art!

In my introductory essay for this blog, I argued that physics is a liberal art. I'd like to spend a little time making a stronger case for that argument. It seems to have become commonplace for people to equate the liberal arts to the humanities (and perhaps even to only certain disciplines within the humanities). The seven traditional liberal arts were rhetoric, grammar, logic, geometry, arithmetic, music, and astronomy. Of these it is easy to associate rhetoric and grammar with, say, a major in English (though English professors would cringe at the idea that they primarily teach rhetoric and grammar - of course, their focus is on literary criticism). Similarly, one can associate logic with philosophy and music with the fine arts in general. So there is no doubt that there is a big overlap between the traditional liberal arts and the humanities. But what about geometry, arithmetic, and astronomy? I see little choice but to equate these with the modern study of mathematics and science. Granted, a modern mathematics major will spend little time studying geometry (and hopefully none studying arithmetic, including "college algebra", which they should already know), just as English majors don't spend much time on grammar. Still, there can be no doubt that mathematics and science were very much a part of the traditional liberal arts.

Of course, one can argue that the term simply means something different now. But what did the term mean in classical and medieval times? It referred to areas of knowledge that were appropriate for free men, as opposed to more applied areas of knowledge that might be appropriate for slaves or serfs. So if we take the term to mean the same thing today (knowledge appropriate for free persons), then what should the liberal arts be in today's context? I have no easy answer for that, but I am absolutely certain that science must be a part of it. A free person in modern society must have a basic understanding of the methods of science, and at least some rudimentary scientific content knowledge. Why? Because science and its by-products pervade every aspect of modern society. Science drives our economies and has helped produce a worldview that is conducive to the modern democratic state (i.e. with the concept of universal natural laws). Those without any knowledge of science in today's world are in a dangerous situation because they can be easily controlled and manipulated by those who do understand science (and often by those who don't - just look at some political rhetoric and advertisements for pharmaceuticals). I hope no one would argue with the notion that all free persons should know how to read, write, and perform basic mathematical manipulations. I would place science right after these on the list of things a free person should know.

Now, I don't mean that every free person needs to major in science at the college level. Hardly. But a free person should possess a basic understanding of the methods of science, and some ability to distinguish science from pseudoscience and from that which is simply not science. Unfortunately, students who take science courses at the college level are often given an "introduction to the discipline" that focuses on content rather than methodology. These courses might make it seem like the purpose of studying science is solely to become a scientist. This makes the sciences seem more like applied disciplines rather than intellectual disciplines appropriate for all free persons. I think science should be taught as the liberal art that it is, rather than as vocational training. This is imperative for non-science majors who may take only one or two science courses. I am becoming increasingly convinced that we should also teach courses for science majors this way, at least at the introductory level. More advanced courses may appropriately take on a "vocational" or "professional" feel once students have an understanding of the basic methodology of science.

One last thought on why science is a liberal art: science is a liberal art because people pursue science for the same reason that people pursue other liberal arts. Most English majors do not study English because they sense it will land them a high-paying job one day. Most Fine Arts majors don't view their education as preparation for a lucrative career as a painter, etc. But neither do physics majors study physics because it will get them a good job. Most of us study physics for the same reason that people write poetry: because it brings us joy. Doing physics is fun (at least, it is for me). Physics, and the other sciences, are very intellectually stimulating. Now perhaps this can be said of anything. I've talked with some marketing professors who make the study of marketing sound enjoyable and intellectually stimulating. But most students who study marketing probably do so because they want to get a job in that field. Physics students don't tend to think that way. Many physics majors go on to grad school, but I think this is primarily because they enjoy studying physics and they want to keep doing so. Others get jobs straight away, more often than not outside the field of physics. And that's fine, because they didn't major in physics as preparation for a specific job. They majored in physics because it challenged their mind, deepened their reasoning skills, improved their understanding of nature, and honed their mathematical ability. I believe the same is true for other liberal arts like English or History. They don't really serve to prepare you for a specific career (unless you want to teach), but they provide you with a set of intellectual skills that can enrich your life and make you capable of meeting almost any challenge.

I'd like to see the sciences receive recognition as liberal arts. I think the Humanities folks need to acknowledge science's rightful place among the liberal arts. I likewise think that quite a few scientists need to stop scoffing at the liberal arts and start recognizing that their own subject is as much a liberal art as History or Philosophy. That doesn't mean we can't continue to recognize certain boundaries between disciplines. Joyce's Ulysses is not science, any more than Einstein's "On the Electrodynamics of Moving Bodies" is literature. But both should be recognized as the great intellectual achievements they are. How much poorer we would be if we had only science, or only literature, but not both!

Saturday, September 15, 2007

The Scope of a Scientific Theory

In this essay I want to follow up my discussion of Lakatos' conception of scientific research programmes by describing Thomas Brody's conception of the scope of scientific theories. Brody's views, as set forth in his The Philosophy Behind Physics (a book which he did not complete before his death, and which includes some essays written by him that were never intended for the book), seem to have been largely ignored by the philosophy of science community. It may be that his ideas are fundamentally flawed, and have been ignored for good reason. I'm not enough of an expert to judge that. However, I find his concept of scope quite compelling. In particular, it seems to me that Brody's approach can be viewed as another way of keeping Popper's basic approach to scientific methodology while simultaneously addressing some of the problems that beset Popper's views (see my previous essay for a brief discussion of some of these problems).

Brody sees the evaluation of scientific theories as divided into two stages. In the first stage, a nascent theory gains support by accumulating confirming evidence (corroboration, in Popper's terminology). A newly proposed theory that fails to be supported by empirical evidence will likely be discarded. However, once a theory has survived this early stage it moves on to a phase in which the focus is on trying to find situations in which the theory fails (just as in Popper's falsification approach). The purpose of finding these failures, though, is not to falsify the theory in anything like an absolute sense. The theory will not be discarded simply because a few failures occur. Rather, these failures of the theory are used to delimit the theory's scope.

The scope of a theory, as Brody presents it, is something like the set of circumstances in which the theory will produce successful predictions. This definition, though, is probably too vague. If the successes and failures of the theory follow no apparent pattern then it is probably impossible to define the scope of that theory. But a theory's scope can become well defined if we can translate the "circumstances" in which the theory is used into something like a parameter space. If we then find that the theory produces successful predictions for some region of this parameter space, but fails to produce successful predictions outside this region, then the region of success effectively defines the scope of the theory. Brody seems to assume in his writing that we should expect theories to behave in this way. He does not address pathological cases in which the points in parameter space at which the theory is successful are intimately intermixed with the points at which the theory fails. I think this is because his concept of scope is largely derived from his view that all theories are approximations (see my essay on approximations in physics for my take on this). Mathematical approximations (such as truncating a Taylor series, for example) are generally valid for some range of values for the relevant variables and invalid outside of that range. Brody seems to think the scope of a theory can be determined in much the same way.
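Brody's analogy with mathematical approximation can be made concrete with a toy example. Here is a minimal sketch in Python (the 1% success criterion is my own arbitrary choice, not Brody's) that maps out the "scope" of the small-angle approximation:

```python
import numpy as np

# Treat the small-angle approximation sin(x) ~ x as a miniature "theory"
# and map the region of parameter space (the angle x) where it succeeds.
x = np.linspace(0.01, 1.5, 500)                 # angles in radians
relative_error = np.abs(x - np.sin(x)) / np.sin(x)

in_scope = x[relative_error < 0.01]             # "successful predictions"
print(f"sin(x) ~ x holds to 1% for x < {in_scope.max():.2f} rad")
```

The failures beyond that boundary don't "falsify" the approximation in any absolute sense; they simply delimit its scope, which is just what Brody proposes for physical theories.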

Now I find this idea compelling because it avoids the assumption that I have claimed lies at the heart of naive falsificationism, namely that there is one "true" theory that is capable of predicting everything in the Universe. Brody sees scientific theories as human inventions, inventions that can at best approximate reality. Science, from this point of view, is not about pursuing absolute truth but about finding approximate truths and understanding when those approximate truths hold and when they do not. Brody finds it perfectly acceptable to have one theory that describes a phenomenon in a certain parameter range and another logically incompatible theory that describes the same phenomenon in a different parameter range. He discusses the various models used in nuclear physics in this context. It is possible that one could even view wave-particle duality (of photons, electrons, etc.) in this way, although it is not clear how one could define parameters such that wave behavior manifests for one range of parameter values and particle behavior for a different range.

Another reason I find Brody's idea compelling is that it seems to reflect some important parts of scientific history. Historically, when a well-established theory has been falsified, scientists have not tossed the theory aside and moved on to something else. Certainly, there is a desire to formulate a new theory that will work where the previous theory failed. But quite often the "falsified" theory is kept on. If one can clearly determine the scope of the theory then it is rational to continue using the old theory in those situations which fall within its scope. Classical Newtonian mechanics is an excellent example of this. Newtonian mechanics has been falsified over and over, yet we have not tossed it aside (I teach a two-semester sequence on intermediate classical mechanics and I don't think I am wasting my students' time!). We still use Newtonian mechanics in those situations where we are confident that it will work. There may be a sense in which physicists are convinced that quantum mechanics and relativity are "truer" than Newtonian mechanics, and that we are only still willing to use Newtonian mechanics because it accurately approximates those "truer" theories in certain situations. But in the case of quantum mechanics, showing that the "truer" theory reduces to Newtonian mechanics in the appropriate circumstances has proved to be challenging (particularly in the case of chaotic systems). The same may be true for general relativity, although I know much less about that case. I think Brody would claim that we need not worry so much about this issue. As long as we know when it is okay to use Newtonian mechanics, then it is fine for us to do so. We don't have to convince ourselves that we are really using quantum mechanics and general relativity in an approximate form.

Now I think that the concept of scope helps resolve some of the problems associated with naive falsificationism, but it certainly doesn't settle all of them. In particular, it seems to suffer at the hands of the Duhem-Quine thesis. If a theory fails to predict the results of an experiment, how can we be sure that the experiment is outside the theory's scope? It could be that the failure is due to an auxiliary hypothesis (thus indicating that the experiment is outside the scope of that auxiliary hypothesis). So just as we can never falsify a theory in isolation, we can never determine the scope of a theory in isolation. We can only determine the scope of an entire network of theories that are used to predict the results of an experiment (and to interpret the experimental data). Another way to state this is that when we test a theory we must inevitably assume the validity of several other theories in the process. This assumption may prove to be correct, or it may not. Whenever we get a negative result, it could be a failure of the theory we are testing or it could be a failure of the assumptions we have made. This makes determining the scope of a theory a complicated process. In practice we must evaluate many theories at once and any failure signifies that we are outside the scope of at least one of the theories. Delimiting the scope of a set of theories thus becomes an endless process of cross-checking. So Brody's view faces some serious challenges - but I think it deserves more attention than it has received.

I'd like to close this essay by trying to tease out some similarities between the approaches of Lakatos and Brody. Both seem to build on Popper's basic premise. Both avoid inductivist ideas. Both attempt to defend the rationality of science (contra Kuhn and Feyerabend, etc.). I think one could even reformulate Lakatos' ideas using Brody's language. When we perform an empirical test of a theory we are really testing a whole network of theories and assumptions. However, based on the details of the experiment we may have different levels of confidence in the various theories that compose the network. We may be very confident that we are well within the scope of many of these theories/assumptions, and therefore we would be very unlikely to blame any failure on these parts of the network. The theories or assumptions in this group would form the "hard core" in Lakatos' terminology. On the other hand, we may be less certain about where the experiment falls in relation to the scope of other theories and assumptions in the network. We would be much more likely to blame a failed prediction on one of these theories/assumptions. This group of theories and assumptions then forms the "protective belt". This represents a significant change in Lakatos' conception (at least, as far as I understand it) because now theories could move between the hard core and the protective belt depending on the context of the experiment. I think this is a step in the right direction because it provides some much-needed flexibility. In particular, it opens up the door for falsifying (or at least delimiting the scope of) those theories which are part of the hard core. If a theory that is in the hard core is always in the hard core then it would seem to be unfalsifiable, and thus it would become a metaphysical principle or a convention rather than a physical theory. Yet, this idea does allow for the possibility that some theories (or principles, or whatever) could have universal scope and could therefore be "permanent members" of the hard core.

I have actually used Brody's concept of scope in teaching students about the nature of science. I have them perform an experiment to determine the relation between the period of a pendulum's oscillations and its length. They consider two mathematical models: one in which the period is a linear function of length, and one in which the square of the period is a linear function of length. They generally find that both models work well to fit their data. They then use each model to predict the period of a 16-meter pendulum, and then they actually measure the period of such a pendulum (we have a 16-m Foucault pendulum in our lobby). They find that the second model's prediction is reasonably close, while the first model is way off. We could consider this a falsification of the first model, but I try to lead them toward a different conclusion: that we have really just shown that long pendulums lie outside the scope of the first model. In fact, if we made the pendulum VERY long (say, a significant fraction of Earth's radius) then we would find the second model would fail as well. The basic idea is that all models have a finite scope, so a failure of a model doesn't mean we should discard it or else we would discard everything. However, in comparing two models we may find that the scope of one completely contains the scope of the other and extends beyond it. In that case we would clearly prefer the model that has the wider scope. On the other hand, if the two models had scopes that overlapped only partially or else did not overlap at all then it would be quite reasonable to keep both models around so that we can use the model which is most appropriate for the prediction we are trying to make.
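For the curious, here is a minimal sketch of the exercise in Python, using idealized "data" generated from the standard small-oscillation formula rather than my students' actual measurements:

```python
import numpy as np

# Two candidate models for pendulum data:
#   Model 1:  T   is a linear function of L
#   Model 2:  T^2 is a linear function of L
g = 9.81
L = np.array([0.2, 0.4, 0.6, 0.8, 1.0])     # short pendulums, in meters
T = 2 * np.pi * np.sqrt(L / g)              # idealized "measured" periods

a, b = np.polyfit(L, T, 1)                  # fit Model 1
c, d = np.polyfit(L, T**2, 1)               # fit Model 2

L_big = 16.0                                # the lobby Foucault pendulum
print(f"Model 1 predicts T = {a * L_big + b:.1f} s")           # ~23 s
print(f"Model 2 predicts T = {np.sqrt(c * L_big + d):.1f} s")  # ~8 s
print(f"Ideal formula:   T = {2 * np.pi * np.sqrt(L_big / g):.1f} s")
```

Both models fit the short-pendulum data well, but only the second model's scope extends to 16 meters.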

Saturday, September 8, 2007

A First Response to Naive Falsificationism

I've just finished reading Karl Popper's The Logic of Scientific Discovery (including all of the "starred" appendices), so now is as good a time as any to write about two responses to Popper's ideas that have been on my mind lately. Let me start by giving a caricature of Popper's doctrine of falsificationism. The basic idea is that scientific theories are bold conjectures about the physical world, which must then be subjected to stringent empirical tests. Any theory which fails such a test must be discarded, and a new bold conjecture must be put forth. This notion of the methodology of science avoids many of the pitfalls of inductivist thinking (such as the verificationist approach of the logical positivists), but it suffers from its own set of problems. I'll list some of the more glaring ones here. First of all, it would be unwise to throw out a theory which has failed an empirical test unless there is a comparably good theory available that has yet to fail any empirical tests. In other words, just because a theory is "wrong" doesn't mean it is not better than nothing. Secondly, any empirical test will test not just a single theory but rather a whole set of theories (which may or may not be related to each other) along with various other assumptions and approximations that must be used to relate the raw empirical data to the prediction made by the theory. If the prediction fails to match the empirical data, it is not at all clear that this must be the fault of the theory that is supposedly being tested. Finally (for this essay - this is by no means a comprehensive list of problems with Popper's doctrine), naive falsificationism contradicts the history of science. Every theory that we hold today has been "falsified" at some point, and it seems likely that any future theory will suffer the same fate. Clearly we are not willing to throw out all of our current science because a few contradictions have been found.

One thing that was interesting for me in reading The Logic of Scientific Discovery is that Popper seems to have been quite aware of most, if not all, of these issues. In fact, he really says very little about falsification itself. His main concern is to define falsifiability, and to show how the falsifiability of a theory can be used as a criterion for demarcation between science and pseudoscience. It seems clear from reading the book that Popper understood falsification was not a simple matter. Regardless of what Popper thought, I'd like to analyze a basic assumption that I think would have to underlie the naive falsificationist view (if there was anyone who actually held this view). The assumption is this: that there is a universally correct theory, a theory which would deliver predictions that would be correct in all circumstances. If this were true, then any theory which made a prediction that failed would clearly not be that universally correct theory and could thus rationally be discarded in hopes of reaching that "final theory." (Of course, this assumes that our empirical test was valid and that all the information that was used, along with our theory, to make the predictions was accurate.) I have argued in a previous essay that I believe such a final theory is impossible. All theories are, I believe, approximations and all approximations are valid only in certain situations.

I would like to discuss two attempts to remedy some of the problems with naive falsificationism. (I'm sure there have been attempts other than these, and certainly many philosophers of science have been quite content to discard Popper's ideas altogether, but I'm going to focus on these two attempts to retain Popper's basic approach.) The first approach I want to discuss is well known: Imre Lakatos' "methodology of scientific research programmes." The second is, I think, less well known: Thomas Brody's concept of the "scope" of scientific theories. Actually, I am personally more familiar with Brody's ideas because I have read his work, while I have only read descriptions of Lakatos' ideas written by others (I'll fix that soon, I hope - I have at least a few of Lakatos' key papers in my library). I'll describe Lakatos' idea in the remainder of this essay. In my next essay I'll describe Brody's approach and try to analyze the similarities and differences between these two views.

Lakatos conceives of research programmes as conglomerations of theories and assumptions. As noted above, any empirical test cannot test a single theory but rather tests a whole group of theories (this is often referred to as the "Duhem-Quine thesis"). Lakatos realized that in the face of this problem scientists must decide which pieces of the collection of theories and assumptions relevant to the empirical test get falsified and which are still considered valid. Lakatos proposes that each research programme has a "hard core" that is considered almost sacred. The hard core consists of the fundamental theories that scientists will simply refuse to abandon, even in the face of an apparent falsification. Surrounding this hard core of theories is the "protective belt." The protective belt consists of lesser theories and a wide variety of assumptions to which scientists feel less compelled to cling. If a prediction fails to accord with the empirical test then this failure will be blamed on some element of the protective belt rather than on one of the theories that comprise the hard core. Thus the protective belt serves to shield the hard core from falsification. (Note that this is exactly contrary to what Popper says we ought to do with our theories, since he claims we should NEVER shield them from falsification but rather subject them to the most extreme tests.) This is somewhat similar to Poincaré's distinction between principles and laws. Principles, for Poincaré, are true by convention and cannot be subject to empirical tests, while laws are empirically testable.

Lakatos' approach does appear to solve the problems with naive falsification that were mentioned above. It explains why, for example, we did not abandon Newtonian mechanics and gravitation when the predicted location of the planet Uranus was found to disagree with observations. Newton's theories were part of the hard core (at the time), while our knowledge of the objects that populate the solar system was part of the protective belt. Scientists were more willing to postulate an unknown planet than they were to modify or discard Newtonian theory. This turned out to be a wise choice, and the discovery of Neptune soon followed. Nevertheless, there are some problems with Lakatos' ideas as well. It seems very problematic that we should protect our theories from falsification. Indeed, how can theories that reside in the hard core ever be found false (if indeed they are false)? I suppose one can claim that repeated falsifications, even when the protective belt has been modified to account for past problems, might begin to weaken the hard core and eventually some portions of the hard core might "slip" into the protective belt. Perhaps the rise of quantum mechanics can be viewed in this way, although this view does not seem to apply to the development of relativity (Michelson and Morley provided the repeated falsification, but it seems that Einstein did not even consider this in developing his special theory).

There is one issue I have wondered about, which Lakatos may have addressed although I am unaware of it. The issue is this: must the hard core and protective belt be fixed at any given time? Or can theories and assumptions move from the hard core to the protective belt, and vice versa, depending on what kind of prediction we are making? I discussed in my previous essay how the validity of approximations depends on the use to which they are put. This seems relevant to the idea of defining a hard core as well. In certain contexts we might feel that a theory is to be regarded as unquestionably true, while in another context we might consider the theory subject to falsification. One could look at Newtonian mechanics that way. I think any physicist would be unwilling to accept that a cannonball moved in a way that contradicted Newtonian mechanics. On the other hand, the same physicist would quite willingly admit that Newtonian mechanics is falsified by the motion of an electron. Now if one takes the view that science should strive for a final theory that works in all possible contexts, then this idea wouldn't make much sense. If falsification in any context implies total falsification (i.e. we declare the theory "wrong" and discard it) then it would not make sense to consider a theory falsifiable in certain situations but not in others. However, if we accept that scientific theories must always be approximate and provisional then one might be willing to move a theory from the hard core to the protective belt (or vice versa) based on the context of the prediction.

I'll leave my discussion of Lakatos' research programmes for now, because I think Brody's concept of scope can illuminate some of these issues. I'll tackle that in my next essay.

Sunday, September 2, 2007

Approximation in Physics

Science is all about approximations. The importance of approximation is perhaps most clear in the discipline of physics, which deals with simpler systems than does chemistry or biology. In any case, physics is my discipline so I will focus mainly on approximation in physics. Certainly it is in physics that it is easiest to quantify our approximations, and this may make the role of approximation stand out more clearly in physics than it does in other sciences. Nevertheless, approximation is an important, indeed a fundamental, part of any science.

Let's start by distinguishing two ways that approximation shows up in science (for now I'll focus on physics). One way to make use of approximation is within a model. When I use the word model I mean something a bit broader than what is meant when we talk about "the Bohr model of the atom." I would like to use the term "model" in something like the way that it is used in "the Standard Model of Particle Physics", although without any assumption that the model represents anything fundamental. (For that matter, I have doubts that the Standard Model represents anything fundamental, but that's another story - sort of.) I can make this clearer by describing a specific example. Let's take a model of the solar system. We might posit that each object (planet, asteroid, Sun, etc.) can be represented by a mathematical point in a three-dimensional (Euclidean) space. Each object is also assigned a mass. We then state that these point masses move in such a way that they obey Newton's three Laws of Motion combined with Newton's Universal Law of Gravitation. Now if we specify the initial conditions for each point mass (position and velocity) at some time t=0, then we can determine the state of our model at any future time. In essence, by "model" I mean a description of a system that includes a list of the basic entities of which the system is composed along with the relevant properties of those entities, as well as any rules or laws that dictate the behavior of those entities. I'll leave it open whether or not the initial conditions constitute part of the model.
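To make this notion of a model concrete, here is a minimal sketch in Python. The function name and structure are my own placeholders; the point is only that the model consists of a list of entities (point masses with positions) plus a law (Newtonian gravity):

```python
import numpy as np

G = 6.674e-11  # Newton's gravitational constant, SI units

def accelerations(masses, positions):
    """Acceleration of each point mass under Newtonian gravity alone."""
    n = len(masses)
    acc = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = positions[j] - positions[i]
            acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc
```

Given positions and velocities at t=0, repeatedly computing these accelerations and stepping forward in time determines the state of the model at any future time.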

Now when I talk about using approximations within a model I mean finding the approximate behavior of the entities in the model, rather than the exact behavior according to the rules inherent in the model. For example, in our solar system model we could decide to ignore the gravitational effects of small bodies like asteroids and instead focus only on the gravitational pull of the planets (and perhaps a few larger moons). Note that this approximation can actually take two forms. One way to approach this approximation is to cut the small bodies out of the model entirely. In other words, let's just pretend for the moment that there are no asteroids or small moons in the solar system. This gives us a new, simpler model in which the list of basic entities is different from that of our original model. An entirely different approach to this approximation would be to keep the asteroids and small moons, but ignore the gravitational pull of these objects on the planets, etc. (but not the pull of the planets on the asteroids and small moons). This gives us a new model that has the same list of basic entities, but which follows a different set of laws. In particular, violations of Newton's Third Law of Motion (that objects exert equal magnitude forces on each other in opposite directions) and Newton's Universal Law of Gravitation (that all massive bodies exert gravitational forces on all other massive bodies) abound in this model. There is a sense in which this version of our approximation actually leads to a much more complicated model (sometimes the Third Law is satisfied, sometimes not), but in practice it turns out to be easier to make quantitative predictions in this model than in the original one.
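The second version of the approximation can be sketched as a variation on the function above (the `is_major` flag is my own illustrative device): only the major bodies act as sources of gravity, so minor bodies feel forces that they do not exert in return.

```python
import numpy as np

G = 6.674e-11  # as in the sketch above

def accelerations_one_way(masses, positions, is_major):
    """Only major bodies (planets, large moons) act as sources of
    gravity; asteroids and small moons feel forces but exert none,
    a deliberate violation of Newton's Third Law."""
    n = len(masses)
    acc = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j or not is_major[j]:   # skip minor-body sources
                continue
            r = positions[j] - positions[i]
            acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc
```

The first version of the approximation needs no new code at all: simply delete the minor bodies from the lists before calling the original function.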

Note that which of these two approaches you take will depend heavily on what you wish to get from your model. For example, if you are trying to predict the motion of Mercury, either version of this approximation might do the job nicely, but the first version will probably be easier to use. On the other hand, if you are trying to predict possible asteroid impacts on Earth then clearly the first version of the approximation will not do the trick (I can tell you what THAT version will predict right now!). This raises a critical point about how approximations are used in science. The validity of an approximation cannot generally be evaluated on its own. Rather, it can only be evaluated in light of the goals of the scientist. For some purposes the first version of our approximation might be perfectly valid, but for other purposes it will be completely useless. This point may not seem all that striking when we are discussing approximation within a model, but it will also be important for our discussion of the second way that approximation is used in science.

The second way that approximation is used is that our model itself is an approximation of a real physical system (or at least a possible physical system - as a computational physicist I often employ models that likely don't correspond to anything that currently exists in the real world, but give the atom optics people enough time and they'll probably create one). Our solar system model is presumably meant to be a model of the actual planets and other bodies that orbit our Sun. But even the original version of our model was very much an approximation. To begin with, we ignore every property of the objects in the solar system other than their position (and associated time derivatives) and mass. We ignore their chemical composition, their temperature, their albedo, their rotational angular momentum, and on, and on. Furthermore, we ignore the fact that there is a lot of stuff outside the solar system that could, at least in principle, influence the motion of the objects within the solar system. These approximations amount to making cutoffs in our list of basic entities and cutoffs in our list of the relevant properties of those entities. But that's not where the approximations stop. We also make approximations in regard to the laws or rules of our model. Our solar system model ignores all forces other than gravitation. We also chose to use Newton's Laws of Motion, which we know are only approximately correct. Indeed, we completely ignore quantum effects (perhaps justifiably) as well as relativistic effects (that's harder to justify - after all, we know we won't get Mercury's orbit right if we don't use General Relativity).

Now after thinking hard about all the approximations that are intrinsic to our model of the solar system, we might begin to question whether the model is actually good for anything. It seemed pretty good at first (after all, we were prepared to keep track of a point mass to represent every object in the solar system). But now it seems like a joke. How can we convince ourselves that it is not? In other words, how can we convince ourselves that this model is valid? Well, we saw that in the case of approximations made within a model the question of validity hinged on our purpose in using the model. The same is true here. Suppose we want to predict what the Moon will look like from Earth on a given night in the future. Well, if all we want to know is the phase of the Moon then our model may do the trick since it can predict (approximately) the location of the Moon relative to Earth and the Sun. If, on the other hand, we want to be able to say something about how bright the Moon will appear then we are lost since that will depend on the nuclear reactions that produce the Sun's luminosity, the albedo of the lunar surface, as well as local atmospheric effects on Earth. None of that is in our model, so we have no hope. So in some cases we can show that the model is invalid for a particular purpose without even making a prediction. For example, if we want to predict properties of objects that are not included in the model (either the properties or the objects or both) then the model will not be valid for that purpose. But if we cannot invalidate the model that way, the best way to test it is to make predictions with the model and see how they stand up against the actual behavior of the physical system we are trying to model. If we wanted to predict Earth's orbit around the Sun for the next one hundred years we would probably find that our original solar system model would do a bang-up job.

There may be other ways to validate our approximations, particularly if they can be quantified. For example, we can justify ignoring quantum effects in our solar system model because they simply won't show any noticeable effect at the scale we are examining. We can justify ignoring the gravitational forces exerted on the planets by stars other than the Sun because those forces are negligibly small. Again, though, whether or not these approximations are acceptable will depend on our purpose for using the model (just how precisely do you want to predict Earth's orbit?).
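A back-of-the-envelope calculation (my own rough numbers, good to an order of magnitude at best) shows just how negligible the nearest stars are. Since gravitational force scales as M/r^2, we can compare the Sun's pull on Earth with that of Alpha Centauri:

```python
au = 1.496e11   # astronomical unit, in meters
ly = 9.461e15   # light-year, in meters

# Alpha Centauri: roughly two solar masses at roughly 4.4 light-years.
# Force scales as M / r^2, so the ratio of the two pulls on Earth is:
ratio = 2.0 * (au / (4.4 * ly)) ** 2
print(f"F_AlphaCen / F_Sun ~ {ratio:.0e}")   # roughly 3e-11
```

More than ten orders of magnitude below the Sun's pull, this is plainly irrelevant for predicting planetary orbits, though "negligible" always remains relative to the precision we demand.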

Now we're getting close to the main point I want to make. I hope it is clear that approximations pervade everything we do in physics. Approximations are such a fundamental part of doing physics that physicists tend to forget they are there. But all of physics is an approximation. It is simply not possible to do any physics without making some approximations. We must truncate the list of basic entities in our model (otherwise we would have to include everything in the Universe - and I don't just mean the visible Universe). We must truncate the list of properties that the basic entities possess (otherwise we must consider all possible properties - and who can say what unknown properties might be possessed by some object in the Universe?). Ideally we will make use of our best physical laws, but even these are almost certainly approximations.

I probably wouldn't get much argument from other physicists about what I've said so far, but now I'd like to claim something more controversial. I am convinced that understanding the role of approximation in physics makes it impossible to believe in a "final theory" (or at least impossible to believe that we will ever find the final theory). If we are willing to acknowledge that everything we now call physics is, at some level, an approximation then we must accept that any new theories must potentially be approximate as well. How would we be able to determine that a new theory was not an approximation, that it was in fact the "final" theory? Presumably we would use the theory (or model) to make predictions and then see if those predictions match our observations or experimental results (and I'll pretend for the moment that we can do this in some obvious way and actually know whether or not the predictions match the observations - reality is much trickier than this). But even if the match was perfect that would only validate the theory for that prediction (or at best for some class of predictions). We can only test the validity of an approximation in the context of some particular purpose. To show that a theory is universally valid (i.e. that it is a final theory) we would have to show that it is valid for all possible purposes. I simply don't think that it is possible to do this. I'm really not even sure what it would mean to make the claim that a theory is valid for all possible purposes.

So I don't believe in a final theory. I believe that all of physics as it currently exists is an approximation, although an astoundingly good approximation for most of the purposes for which we have used it. I believe that all future physics will still be an approximation, although it will likely be a better approximation for some of our old purposes and a valid approximation for some purposes of which we have yet to conceive. But physics will always have a limited scope (I'll save a detailed discussion of the idea of scope for a later essay - or just read Thomas Brody). The only argument I can see against this point of view is that if we find a theory that works well for all purposes that we can think of now, then it will probably work well for all purposes we devise in the future. But evaluating a theory based on its "logical probability" does not work well (I'll try to write about this later, but Karl Popper has already made the point pretty well). I am convinced that any "Dreams of a Final Theory" will remain just that (with or without the Superconducting Supercollider).

P.S. Yes, I've read the book by Steven Weinberg whose title I quote above. No, I am not trying to criticize that book or Weinberg himself. In fact, I had two semesters of quantum field theory from Weinberg when I was a grad student. The man clearly knows a lot more about physics than I do, and he has thought a lot more about philosophy than the vast majority of physicists. Even if, as a pond minnow, I wished to pick a fight with one of the great Leviathans of the sea, I would not choose to pick a fight with Weinberg!

Why Noninertial Frame?

I'll try to answer two versions of the above question. First let me answer this version: why have I created this blog (which happens to be titled "Noninertial Frame")? My reason is fairly simple. I am a professional physicist and a professor at a liberal arts college. Both my undergraduate and doctoral degrees are from research universities, but somewhere along the way I picked up the liberal-arts mindset. I have a deep love for the liberal arts, including physics. Yes, physics is one of the liberal arts. It is part of the Quadrivium (which consists of arithmetic, geometry, music, and astronomy or cosmology - I'm counting this last as including physics). Although physics is my first academic love, it is not my only one. I am particularly interested in philosophy (part of the Trivium which includes grammar, rhetoric, and logic), primarily in philosophy of science. As I have no professional training or expertise in any area other than physics (and a bit of math), I do not expect to engage in philosophy (or any other discipline except physics) in a professional capacity. But I love thinking about it and I love writing about it (and the writing helps me clarify my thinking). What better forum for unprofessional writing than a blog? With a blog I may actually find a reader or two who is interested in my thoughts, and I'll always know where to find that essay I wrote a few years back...

Now for the second version of the above question: why have I chosen to name my blog Noninertial Frame? Well, the term is used in physics for a reference frame in which Newton's Laws of Motion (specifically the Law of Inertia) do not hold. Generally this means the reference frame is accelerating with respect to frames in which Newton's Laws do hold. Why is this apropos for my blog? First of all, it's a physics term and I am a physicist. I expect everything I write to be connected to physics, if not entirely about physics. Second, I am writing from the perspective of a liberal arts physicist for whom the usual rules for physicists (focus on research, don't muck about with philosophy, etc.) don't seem to apply. Finally, I'm hoping that this blog will help me maintain my interest in the connections between physics and the other liberal arts, so that I don't settle into the inertia of a routine career in physics teaching and research.

Now, what can you expect to find here? Mostly I will post essays (occasionally lengthy ones, I would guess) about issues related to physics but not strictly within the domain of physics per se. These essays will mostly be philosophical, but I will likely work in some mathematics and maybe a few discussions of literature and music (to round out the seven liberal arts). I'll keep it all tied to physics, though, because if it has nothing to do with physics then there is probably no reason for anyone to read what I write.

I hope this blog will be useful for someone other than me, but if not then I can live with that. Cheers!