Science is all about approximations. The importance of approximation is perhaps most clear in the discipline of physics, which deals with simpler systems than does chemistry or biology. In any case, physics is my discipline so I will focus mainly on approximation in physics. Certainly it is in physics that it is easiest to quantify our approximations, and this may make the role of approximation stand out more clearly in physics than it does in other sciences. Nevertheless, approximation is an important, indeed a fundamental, part of any science.
Let's start by distinguishing two ways that approximation shows up in science (for now I'll focus on physics). One way to make use of approximation is within a model. When I use the word model I mean something a bit broader than what is meant when we talk about "the Bohr model of the atom." I would like to use the term "model" in something like the way that it is used in "the Standard Model of Particle Physics", although without any assumption that the model represents anything fundamental. (For that matter, I have doubts that the Standard Model represents anything fundamental, but that's another story - sort of.) I can make this clearer by describing a specific example. Let's take a model of the solar system. We might posit that each object (planet, asteroid, Sun, etc.) can be represented by a mathematical point in a three-dimensional (Euclidean) space. Each object is also assigned a mass. We then state that these point masses move in such a way that they obey Newton's three Laws of Motion combined with Newton's Universal Law of Gravitation. Now if we specify the initial conditions for each point mass (position and velocity) at some time t=0, then we can determine the state of our model at any future time. In essence, by "model" I mean a description of a system that includes a list of the basic entities of which the system is composed along with the relevant properties of those entities, as well as any rules or laws that dictate the behavior of those entities. I'll leave it open whether or not the initial conditions constitute part of the model.
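The model just described can be sketched in a few lines of code. This is only an illustration of the structure of the model - point masses, Newtonian gravity, Newton's Second Law - and all of the function names, the integration scheme (simple Euler stepping), and any numbers below are my own choices, not anything from real ephemeris data.

```python
# A minimal sketch of the point-mass solar-system model: each body is a
# position, a velocity, and a mass; the only law is Newtonian gravity.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accelerations(positions, masses):
    """Newton's Universal Law of Gravitation: every body pulls on every
    other body, with force G*m1*m2/r^2 along the line between them."""
    n = len(positions)
    accs = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r = sum(d * d for d in dx) ** 0.5
            for k in range(3):
                accs[i][k] += G * masses[j] * dx[k] / r ** 3
    return accs

def step(positions, velocities, masses, dt):
    """Advance the model state by dt using Newton's Second Law
    (a crude Euler step; a real simulation would use something better)."""
    accs = accelerations(positions, masses)
    new_v = [[v[k] + a[k] * dt for k in range(3)]
             for v, a in zip(velocities, accs)]
    new_p = [[p[k] + v[k] * dt for k in range(3)]
             for p, v in zip(positions, new_v)]
    return new_p, new_v
```

Given initial positions and velocities at t=0, repeatedly calling `step` determines the state of the model at any later time - which is exactly the sense in which the model is deterministic.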
Now when I talk about using approximations within a model I mean finding the approximate behavior of the entities in the model, rather than the exact behavior according to the rules inherent in the model. For example, in our solar system model we could decide to ignore the gravitational effects of small bodies like asteroids and instead focus only on the gravitational pull of the planets (and perhaps a few larger moons). Note that this approximation can actually take two forms. One way to approach this approximation is to cut the small bodies out of the model entirely. In other words, let's just pretend for the moment that there are no asteroids or small moons in the solar system. This gives us a new, simpler model in which the list of basic entities is different from that of our original model. An entirely different approach to this approximation would be to keep the asteroids and small moons, but ignore the gravitational pull of these objects on the planets, etc. (but not the pull of the planets on the asteroids and small moons). This gives us a new model that has the same list of basic entities, but which follows a different set of laws. In particular, violations of Newton's Third Law of Motion (that objects exert equal magnitude forces on each other in opposite directions) and Newton's Universal Law of Gravitation (that all massive bodies exert gravitational forces on all other massive bodies) abound in this model. There is a sense in which this version of our approximation actually leads to a much more complicated model (sometimes the Third Law is satisfied, sometimes not), but in practice it turns out to be easier to make quantitative predictions in this model than in the original one.
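The two versions of the "ignore the small bodies" approximation can be made concrete in code. Everything here is illustrative - the flag list `is_small` and the function names are hypothetical devices of mine, not part of the essay's model - but the sketch shows the structural difference between the two approaches: one changes the list of entities, the other changes the laws.

```python
# Two ways to ignore small bodies (asteroids, small moons) in the model.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def prune_small(positions, masses, is_small):
    """Version 1: cut the small bodies out of the model entirely,
    giving a new model with a shorter list of basic entities."""
    keep = [i for i, small in enumerate(is_small) if not small]
    return [positions[i] for i in keep], [masses[i] for i in keep]

def accelerations_one_way(positions, masses, is_small):
    """Version 2: keep the small bodies, but let them exert no pull on
    anything. Newton's Third Law is deliberately violated: an asteroid
    feels the planets, but the planets do not feel the asteroid."""
    n = len(positions)
    accs = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j or is_small[j]:  # skip pulls FROM small bodies
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r = sum(d * d for d in dx) ** 0.5
            for k in range(3):
                accs[i][k] += G * masses[j] * dx[k] / r ** 3
    return accs
```

Note that in version 2 the force bookkeeping is genuinely asymmetric, which is why I say the resulting model is in a sense more complicated - even though it is cheaper to compute with.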
Note that which of these two approaches you take will depend heavily on what you wish to get from your model. For example, if you are trying to predict the motion of Mercury, either version of this approximation might do the job nicely, but the first version will probably be easier to use. On the other hand, if you are trying to predict possible asteroid impacts on Earth then clearly the first version of the approximation will not do the trick (I can tell you what THAT version will predict right now!). This raises a critical point about how approximations are used in science. The validity of an approximation cannot generally be evaluated on its own. Rather, it can only be evaluated in light of the goals of the scientist. For some purposes the first version of our approximation might be perfectly valid, but for other purposes it will be completely useless. This point may not seem all that striking when we are discussing approximation within a model, but it will also be important for our discussion of the second way that approximation is used in science.
The second way that approximation is used is that our model itself is an approximation of a real physical system (or at least a possible physical system - as a computational physicist I often employ models that likely don't correspond to anything that currently exists in the real world, but give the atom optics people enough time and they'll probably create one). Our solar system model is presumably meant to be a model of the actual planets and other bodies that orbit our Sun. But even the original version of our model was very much an approximation. To begin with, we ignore every property of the objects in the solar system other than their position (and associated time derivatives) and mass. We ignore their chemical composition, their temperature, their albedo, their rotational angular momentum, and on, and on. Furthermore, we ignore the fact that there is a lot of stuff outside the solar system that could, at least in principle, influence the motion of the objects within the solar system. These approximations amount to making cutoffs in our list of basic entities and cutoffs in our list of the relevant properties of those entities. But that's not where the approximations stop. We also make approximations in regard to the laws or rules of our model. Our solar system model ignores all forces other than gravitation. We also chose to use Newton's Laws of Motion, which we know are only approximately correct. Indeed, we completely ignore quantum effects (perhaps justifiably) as well as relativistic effects (that's harder to justify - after all, we know we won't get Mercury's orbit right if we don't use General Relativity).
Now after thinking hard about all the approximations that are intrinsic to our model of the solar system, we might begin to question whether the model is actually good for anything. It seemed pretty good at first (after all, we were prepared to keep track of a point mass to represent every object in the solar system). But now it seems like a joke. How can we convince ourselves that it is not? In other words, how can we convince ourselves that this model is valid? Well, we saw that in the case of approximations made within a model the question of validity hinged on our purpose in using the model. The same is true here. Suppose we want to predict what the Moon will look like from Earth on a given night in the future. Well, if all we want to know is the phase of the Moon then our model may do the trick since it can predict (approximately) the location of the Moon relative to Earth and the Sun. If, on the other hand, we want to be able to say something about how bright the Moon will appear then we are lost since that will depend on the nuclear reactions that produce the Sun's luminosity, the albedo of the lunar surface, as well as local atmospheric effects on Earth. None of that is in our model, so we have no hope. So in some cases we can show that the model is invalid for a particular purpose without even making a prediction. For example, if we want to predict properties of objects that are not included in the model (either the properties or the objects or both) then the model will not be valid for that purpose. But if we cannot invalidate the model that way, the best way to test it is to make predictions with the model and see how they stand up against the actual behavior of the physical system we are trying to model. If we wanted to predict Earth's orbit around the Sun for the next one hundred years we would probably find that our original solar system model would do a bang-up job.
There may be other ways to validate our approximations, particularly if they can be quantified. For example, we can justify ignoring quantum effects in our solar system model because they simply won't show any noticeable effect at the scale we are examining. We can justify ignoring the gravitational forces exerted on the planets by stars other than the Sun because those forces are negligibly small. Again, though, whether or not these approximations are acceptable will depend on our purpose for using the model (just how precisely do you want to predict Earth's orbit?).
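The claim that other stars exert a negligible pull can itself be quantified with a back-of-the-envelope calculation. Here I compare the Sun's gravitational acceleration on Earth with that of Alpha Centauri A, using rough round-number values for the masses and distances (the specific star and figures are my choice of example, not from the original discussion).

```python
# How negligible is the pull of another star on Earth, compared to the Sun?
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
AU = 1.496e11      # m, mean Earth-Sun distance
LY = 9.461e15      # m, one light-year

def grav_acc(mass, distance):
    """Magnitude of the gravitational acceleration from a point mass."""
    return G * mass / distance ** 2

a_sun = grav_acc(M_SUN, AU)                  # Sun's pull on Earth
a_star = grav_acc(1.1 * M_SUN, 4.37 * LY)    # Alpha Centauri A, ~4.37 ly away
ratio = a_star / a_sun
```

The ratio comes out around eleven orders of magnitude below one, so for predicting Earth's orbit to any precision we are likely to care about, the approximation is safe - but the point stands that "safe" here means safe *for that purpose*.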
Now we're getting close to the main point I want to make. I hope it is clear that approximations pervade everything we do in physics. Approximations are such a fundamental part of doing physics that physicists tend to forget they are there. But all of physics is an approximation. It is simply not possible to do any physics without making some approximations. We must truncate the list of basic entities in our model (otherwise we would have to include everything in the Universe - and I don't just mean the visible Universe). We must truncate the list of properties that the basic entities possess (otherwise we must consider all possible properties - and who can say what unknown properties might be possessed by some object in the Universe?). Ideally we will make use of our best physical laws, but even these are almost certainly approximations.
I probably wouldn't get much argument from other physicists about what I've said so far, but now I'd like to claim something more controversial. I am convinced that understanding the role of approximation in physics makes it impossible to believe in a "final theory" (or at least impossible to believe that we will ever find the final theory). If we are willing to acknowledge that everything we now call physics is, at some level, an approximation then we must accept that any new theories must potentially be approximate as well. How would we be able to determine that a new theory was not an approximation, that it was in fact the "final" theory? Presumably we would use the theory (or model) to make predictions and then see if those predictions match our observations or experimental results (and I'll pretend for the moment that we can do this in some obvious way and actually know whether or not the predictions match the observations - reality is much trickier than this). But even if the match was perfect that would only validate the theory for that prediction (or at best for some class of predictions). We can only test the validity of an approximation in the context of some particular purpose. To show that a theory is universally valid (i.e., that it is a final theory) we would have to show that it is valid for all possible purposes. I simply don't think that it is possible to do this. I'm really not even sure what it would mean to make the claim that a theory is valid for all possible purposes.
So I don't believe in a final theory. I believe that all of physics as it currently exists is an approximation, although an astoundingly good approximation for most of the purposes for which we have used it. I believe that all future physics will still be an approximation, although it will likely be a better approximation for some of our old purposes and a valid approximation for some purposes of which we have yet to conceive. But physics will always have a limited scope (I'll save a detailed discussion of the idea of scope for a later essay - or just read Thomas Brody). The only argument I can see against this point of view is that if we find a theory that works well for all purposes that we can think of now, then it will probably work well for all purposes we devise in the future. But evaluating a theory based on its "logical probability" does not work well (I'll try to write about this later, but Karl Popper has already made the point pretty well). I am convinced that any "Dreams of a Final Theory" will remain just that (with or without the Superconducting Supercollider).
P.S. Yes, I've read the book by Stephen Weinberg whose title I quote above. No, I am not trying to criticize that book or Weinberg himself. In fact, I had two semesters of quantum field theory from Weinberg when I was a grad student. The man clearly knows a lot more about physics than I do, and he has thought a lot more about philosophy than the vast majority of physicists. Even if, as a pond minnow, I wished to pick a fight with one of the great Leviathans of the sea, I would not choose to pick a fight with Weinberg!