Monday, December 24, 2007
I've been thinking a lot recently about how to teach science as a liberal art. I've argued elsewhere in this blog that science is a liberal art and has a greater claim to that title than many disciplines (history, for example) that are typically thought of as liberal arts. I still believe this to be true. However, I recognize that the way science is frequently (perhaps usually) taught tends to suck the liberal artness right out of science. Science, the way it is practiced by most scientists, IS a liberal art. It is the activity of a free person, who engages in scientific research for the sheer joy of the intellectual endeavor. It is not something that one does for a paycheck. But many science courses are taught in such a way that only the utility of science is emphasized. Others even forgo highlighting the utility of science and instead present the material as a succession of facts to be memorized because they are "correct". Better science courses help their students learn to think like scientists, to engage at a basic level in the kinds of activities that scientists engage in. But few courses, I think, can really get students wrapped up in the experience of science. I don't think I've managed that feat myself, although I hold out hope for the future.
I think one reason science courses fail to fully immerse students in the scientific experience is that they cover too much material. Many science courses for non-science majors are what might be called "highlights" courses, which try to cover all of the important topics in the discipline. My physics course for non-science majors is a bit like this, although I have found myself cutting breadth to gain in depth. But the more I think about it the more I think there is a better way to teach science. Instead of giving students the "Cliff's Notes" version of the discipline, give them an excerpt of one of the really good parts. Pick a particular important discovery (or sequence of discoveries) and really delve into it. Present it historically, so that students learn about the errors and false starts as well as the great discoveries. A historical presentation also serves to highlight that science is a human activity, carried out by human beings not by computers or robots or mindless automatons.
I've reached this conclusion as a result of a confluence of several factors. The most important is the development of an astronomy course of this type (focusing on the Copernican Revolution) by a colleague of mine. The second factor is the departure of that same colleague to pursue another career, leaving me to teach his astronomy course. The third factor is that I recently read The Liberal Art of Science, a report from a committee of the AAAS. My departing colleague also taught some more standard astronomy "surveys" and I'm supposed to pick these up as well, but I just don't see myself teaching that type of astronomy course and I don't think such a course really teaches science as a liberal art (at least not as outlined in the AAAS report). In a way, I'm in an ideal situation for innovation. I'm an outsider to astronomy (although my undergraduate degree is in physics/astronomy and I did a bit of astronomy research as an undergrad), so I have no commitment to the status quo. I also have no commitment to particular pieces of the discipline. A well-trained professional astronomer probably feels like she is cheating her students if she doesn't teach them such-and-such. But I lack that training and thus those feelings. I'm free to develop a new astronomy course as I see fit. And so I intend to create a new course modeled on the style of my colleague's Copernican Revolution course, but with the discovery of galaxies as my topic (I'll also continue teaching The Copernican Revolution).
I may say more about this new course at a later date (when I've actually got some of it figured out), but for now I want to discuss the conclusions I have come to, in the process of thinking about this new astronomy course, about how science should be taught. As I said above, I think science courses for non-science majors should focus on a fairly narrow topic and take a historical approach. But it is essential that the course delve into not only WHAT the scientists discovered but HOW they discovered it and how others became convinced of their discovery. Students need to see that this process is far from straightforward. In fact, the best examples to present are discoveries that were controversial for years before finally becoming accepted (like the Copernican Revolution, only in that case it was centuries). Students should be given the chance to examine the evidence on both sides. In the process they should see that there are often legitimate objections to controversial new ideas (like Copernicus' idea) but that in some cases these ideas are able to overcome those objections and become part of accepted science. They should see what it takes for a controversial theory to succeed. They should be exposed to the problems, the mistakes, and the political maneuvering that plague a controversial hypothesis. And they should come away with a strong understanding of why the idea ultimately won acceptance.
To do all of these things students must "get their hands dirty". They must carry out experiments and make observations. Reading the results of someone else's experiment is simply not as compelling as conducting the experiment yourself. Of course, in some cases they will be unable to perform the experiment themselves. Simulations can work well in such situations, but if no simulation is available then students will have to read about the experiment. But whenever possible they should read primary sources. For the astronomy course I am developing I am convinced that my students can handle reading a few articles from the Astrophysical Journal, as well as some more historical material from the publications of the Royal Society of London. Original research articles on the history of science can also be of great use. I intend to have students re-analyze published data (after all, we won't have the Mount Wilson 100-inch telescope to play around with like Hubble did) and try to draw their own conclusions, then compare their findings to those of the original author.
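To give a concrete sense of what that re-analysis might look like, here is a minimal sketch in Python. The (distance, velocity) pairs below are illustrative stand-ins for the sort of data in Hubble's 1929 paper, not his actual table; the point is just that a straight-line fit through the origin recovers a Hubble-like constant.

```python
import numpy as np

# Hypothetical (distance, velocity) pairs standing in for Hubble's 1929 data.
# These are made-up round numbers chosen to be roughly Hubble-like in scale.
distance_mpc = np.array([0.26, 0.45, 0.9, 2.0])    # distances in megaparsecs
velocity_kms = np.array([150., 200., 500., 960.])  # radial velocities in km/s

# Least-squares fit of v = H0 * d through the origin:
H0 = np.sum(distance_mpc * velocity_kms) / np.sum(distance_mpc**2)
print(f"Estimated H0 = {H0:.0f} km/s/Mpc")  # Hubble's own 1929 value was ~500
```

Students who carry out even a toy fit like this can then ask the interesting questions: how sensitive is the result to the most distant point, and what happens when the distance estimates themselves are later revised?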
Of course, there needs to be some time for discussion and synthesis as well. Even a narrowly defined topic will have many strands of evidence that ultimately braid together to make the case for the new discovery or new theory. Students should delve as deeply as possible into several of these strands, but they also need time to do the braiding and see how the different strands tie together (or fail to tie together in some cases). Ideally there should be some strands of evidence that contradict each other (that is the case for the Copernican Revolution, where evidence from the physics of the time flatly contradicted the idea of a moving Earth). Such contradictory evidence creates a tension that must be resolved. Science strives for internal consistency and unity. This aspect of science is often left out of courses, because we never show the students the evidence that turned out to be "wrong".
All of these things take time. You can't conduct your own experiments, read primary sources, delve into the history of the discoveries, explore multiple strands of evidence for a theory, and synthesize all of this into a unified whole and still cover every important theory in the discipline in a single semester. But this is the essence of science. Science is not, ultimately, about what we know right now. What we know now will be supplanted in the future. Science is about how we come to know things at all. And students should be encouraged to revel in the fact that we ARE able to know things, things that it might seem would be impossible for us to know. How could we, stuck here on our little planet, ever learn that there are other galaxies composed of billions of stars that are billions of light years away from our own galaxy? How could we ever know that the entire Universe is expanding? Isn't it mind-boggling that we can possibly say we "know" these things? And yet, these great pieces of knowledge are built up out of a series of much smaller, and much more believable, pieces. Students need to see how those small pieces fit together to form the grand (but very incomplete) puzzle of modern science. Surely we would prefer to read a single scene from a Shakespeare play (Hamlet and Ophelia in Act III, scene i, perhaps, or the hysterical play within the play that is Act V of A Midsummer Night's Dream) rather than read a synopsis of the plot. I think the same is true for science. If we want students to really see what science is all about they must be offered a tasty delicacy, not fed fast food.
Well, those are my thoughts. Now I need to go ... I think I hear hoofs clattering on my rooftop.
Tuesday, December 18, 2007
Kuhn's "Copernican Revolution" and Incommensurability
It's been ages since my last post. I hit a point in the semester where I was sufficiently far behind so as to preclude any thoughts of essay-writing for this blog. But now the holidays have arrived and I have a backlog of topics to write about. Fortunately my reading did not halt when my blogging did...
In the time since my last post I finished reading Thomas Kuhn's "The Copernican Revolution." It's an incredibly good read for anyone interested in intellectual history, and particularly the history of astronomy. I was very motivated to read it because I will be teaching astronomy starting next Fall, and I intend to teach a course developed by a colleague that focuses on the Copernican Revolution. I was also interested in the book because I had heard that Kuhn's work on the Copernican Revolution had ultimately led him to the conclusions about the nature of science that he presents in his "The Structure of Scientific Revolutions." In particular I was interested to see the origins of his idea of incommensurability (the idea that there is no logical way to decide between two competing paradigms because each paradigm has different standards of evidence and makes different fundamental assumptions that cannot be questioned within the paradigm).
What struck me most about Kuhn's presentation of Aristotelian cosmology was how sensible ancient science was. Sure, I know that most of it has now been discredited. But Kuhn did a great job of showing how well the Ptolemaic/Aristotelian system explained much of what was "known" at the time (some of what was "known" turned out to be wrong as well, but they couldn't anticipate that then). There was also a great deal of internal consistency in ancient science, and in fact it was this internal consistency that produced much of the scientific resistance to Copernicus' proposals. Making Earth a planet did not just change astronomy, but it also had an impact that would be felt through all of physics as well as in other areas. If ancient science had been a collection of ad hoc ideas then there would have been little resistance to Copernicus since his ideas would have impacted only the highly specialized area of mathematical astronomy (in which Copernicus was a recognized leader). I was also impressed by how far medieval science advanced beyond the ideas of Aristotle. In particular, Oresme and Buridan were on the verge of the concept of momentum and something like Newton's Second Law. Kuhn also points out that Descartes was the first to clearly formulate a Law of Inertia. This makes the work of Galileo and Newton somewhat less revolutionary than I had thought (though still incredibly revolutionary).
Overall I just can't see where Kuhn got the idea of incommensurability from. It just doesn't seem to be there in this book. He goes to great lengths to point out that Copernicus himself was a die-hard Aristotelian in almost all of his thinking except the planetary nature of Earth. Tycho Brahe was of a similar frame of mind. Kepler was not Aristotelian, and his general approach was quite different from that of most of his contemporaries. But Kepler was just one of the first to ride the wave of Neoplatonism. Kuhn readily admits that Kepler's explanation of planetary motion would have won over professional astronomers without any additional evidence. His predictions were simply more accurate than those of anyone else, and this was what counted for professional astronomers. Note that this was a common piece of evidence that both geocentrists and heliocentrists could agree on. There is no incommensurability there. Granted, Kepler's work was unlikely to win over the general populace to the heliocentric model. But that is a process that lies beyond the realms of science itself.
I've always heard of one example of incommensurability being the refusal of anti-Copernicans to admit telescopic evidence as valid. This is a disagreement over what constitutes valid evidence, but it is a scientifically legitimate disagreement. Galileo was the first to use a telescope for astronomy, and the science of optics was new on the scene. It is no surprise that some scientists viewed telescopes with suspicion. It was an as yet unproven technology. If those same scientists had lived long enough to see telescopes and other optical devices in common use they doubtless would have conceded that Galileo's evidence was valid. This is not a matter of incommensurable paradigms, but rather an appropriate cautiousness with regard to a completely new technology. Frankly, there were a wide variety of scientific reasons for rejecting Copernicus' system. For one thing, it wasn't any better than Ptolemy's, as Kuhn points out. For another, it required the dismantling of virtually all the physics that was known at the time. It turns out this was a good thing because that physics was wrong, but it was surely reasonable for Copernicus' contemporaries to hesitate to throw away what they knew of physics for something that would bring them little or no gain. Copernicus himself knew his theory had major problems and expected it to be criticized (which is why he resisted publishing it until his death). There was a lot that needed to be worked out before the benefits of the Copernican idea could be reaped.
Perhaps a genuine incommensurability lies in how various astronomers judged Copernicus' theory. To those with an empirical, Aristotelian viewpoint it could only be deemed a failure or at best a "nice try." To those with a more Platonic perspective (like Kepler) the theory had much to credit it. It was conceptually more economical than Ptolemy's system, even though this conceptual economy had to be covered over with ad hoc additions to make the predictions match the level of accuracy of the Ptolemaic system. But this difference in perspective does not represent an incommensurability between two scientific paradigms. Rather, it seems to be a possible incommensurability between individual scientists who may place different value on different types of evidence. Differences between individual scientists have been around as long as science has. Kuhn is claiming something much larger in "Structure" than that sometimes scientists disagree with each other.
I wonder if a similar examination of a smaller-scale scientific revolution would have led Kuhn away from the idea of incommensurability. The Copernican revolution involved many philosophical and theological issues in addition to the scientific issues. Copernicus' idea ultimately overthrew a worldview that had dominated Western thought for millennia. The revolution itself spanned a long period of time (from Copernicus to Newton) and it came at a time when great technical advances were made (though this may be typical of any important scientific revolution). As Kuhn points out, the backlash against Copernicus' ideas was driven in part by the fundamentalism of the new Protestant faith and the need for the Catholic Church to find a target to attack in order to show that it was not lax about biblical authority. The examination of a similar revolution that did not have all of these complicating factors might not lead to the idea of incommensurability. An example that comes to mind (because I've been studying it recently) is the revolution that saw our Sun moved from the center of the Universe to out near the edge of one spiral galaxy among billions. There were issues of evidence here as well, particularly in regard to the Cepheid variable period-luminosity relation and van Maanen's measures of the rotation of spiral nebulae, and as a result astronomers disagreed on some major points (such as whether spiral nebulae were inside or outside our galaxy). Ultimately, though, a consensus was reached and the main players on both sides of the debate came to the same conclusions in the end. No incommensurability there, it seems.
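As an aside, the period-luminosity relation at the heart of that debate is simple enough for students to use directly. Here is a minimal sketch, assuming an illustrative Leavitt-law calibration of the form M = a*log10(P) + b; the coefficients are round numbers of my choosing, not any specific published calibration.

```python
import math

def cepheid_distance_pc(period_days, apparent_mag, a=-2.8, b=-1.4):
    # Absolute magnitude from an illustrative period-luminosity relation.
    M = a * math.log10(period_days) + b
    # Distance modulus: m - M = 5*log10(d) - 5, with d in parsecs.
    return 10 ** ((apparent_mag - M + 5) / 5)

# A 10-day Cepheid observed at apparent magnitude 18 comes out at roughly
# 280,000 parsecs, well outside the Milky Way. Inferences of exactly this
# kind were what was at stake in the debate over the spiral nebulae.
print(f"{cepheid_distance_pc(10.0, 18.0):.3g} pc")
```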
Monday, October 8, 2007
Time and Lengths Scales for Scientific Theories
As noted in an earlier post, I am in the process of reading (and thoroughly enjoying) N. David Mermin's Boojums All the Way Through. In a couple of his essays Mermin emphasizes the incredible successes of quantum mechanics. He mentions the fact that quantum mechanics was born in 1900 (on December 14, exactly 73 years before I was born!) and that even now (he published one of these essays in 1988, but we'll update it to 2007) there are no signs that quantum mechanics is incorrect. So we've had over 100 years to overthrow the theory, or at least build some solid evidence against it, and nothing has happened. Mermin also points out that the quantum theory was developed to explain atomic processes that have a characteristic length scale of 10^-9 meters or so, but that the theory has been extended to the much smaller length scales of subatomic particles (for this we can use the length scale for weak interactions, something like 10^-17 meters). So quantum mechanics has proven successful over length scales spanning 8 orders of magnitude.
This got me thinking: Is this really all that impressive? How does it stack up to the run that classical physics had? Not very well, it turns out. Classical physics was born, to give a conservative estimate, with the publication of Newton's Principia in 1687. Classical physics was essentially unchallenged until Planck's lecture on December 14, 1900. It wasn't SERIOUSLY challenged until Einstein's "On the Electrodynamics of Moving Bodies" in 1905. That's a span of well over 200 years. As for length scales, classical mechanics was primarily devised to account for the motions of the planets in the solar system. The diameter of the solar system is on the order of 10^16 meters. Of course, Newtonian mechanics also turned out to work pretty well for small objects (let's say on the scale of 1 mm - again a conservative estimate since classical physics surely works quite well at the micrometer scale and even somewhat smaller). This conservative estimate gives us length scales spanning 19 orders of magnitude. So it looks like quantum mechanics has a long way to go before it could be considered as "successful" as classical mechanics according to these measures. But eventually we did concede that classical mechanics was not the final correct theory. Why should we then accept quantum mechanics as the final correct theory?
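The comparison is easy enough to make explicit. Here is a quick sketch of the arithmetic, using the round-number scale estimates from above:

```python
import math

def decades(large, small):
    # Orders of magnitude spanned between two scales (same units).
    return math.log10(large / small)

# Quantum mechanics: atomic scales (~1e-9 m) down to the weak-interaction
# scale (~1e-17 m).
print(decades(1e-9, 1e-17))   # 8.0 orders of magnitude

# Classical mechanics: the solar system (~1e16 m) down to ~1 mm (1e-3 m).
print(decades(1e16, 1e-3))    # 19.0 orders of magnitude

# Time each theory went essentially unchallenged:
print(1900 - 1687)  # classical physics: 213 years
print(2007 - 1900)  # quantum mechanics so far: 107 years
```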
Now, this is not to say that I think quantum mechanics is wrong in some specific way. It really has looked pretty good so far. But the inductive argument doesn't work - just because it hasn't failed yet doesn't mean it never will. As scientists we should not consider ANY theory to be final, regardless of its level of success (as measured by time, length, or any other means). There is no doubt that quantum mechanics is impressive. The fact that it has challenged some of our basic notions of the nature of physical reality indicates, I think, that it gets at something very deep. But we also should not refuse to question quantum mechanics when the answers it provides are less than satisfying.
There is no doubt that quantum mechanics represents the best theory available right now. We should continue to use it, and to extend it in interesting ways (e.g. quantum field theory, or quantum gravity if we can manage it). But the same was true of classical mechanics long ago when it was being extended in interesting ways by the likes of Lagrange, Maupertuis, and Hamilton. Let us not be guilty of the hubris of Lord Kelvin, who declared at the end of the Nineteenth Century that physics was all but finished. Nature may still have some surprises in store for us, and since we cannot know if we will be surprised we must always remain open to the possibility.
Saturday, October 6, 2007
Creativity in Physics and Literature
I've been reading N. David Mermin's Boojums All the Way Through, a collection of his essays, articles, and book reviews. One of the book reviews is of a biography of Lev Landau, and one of the nuggets that Mermin extracts for the reader is Landau's logarithmic scale for rating physicists. Einstein apparently received a special rating of 0.5, while the (other) founders of quantum mechanics (Bohr, Heisenberg, Schroedinger) rate a 1. Landau apparently gave himself a 2.5 but later upgraded this to 2. Mermin, later in his book, describes himself as a 4.5. Landau apparently referred to physicists who rate a 5 (the worst score on his scale) as "pathologists".
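As I understand it, the scale being logarithmic means something like this: a physicist in class n accomplished roughly ten times what a physicist in class n+1 did. The base-10 reading is my own assumption (Mermin reports only that the scale is logarithmic), but it makes for a striking illustration:

```python
# Relative "achievement" implied by a logarithmic rating scale, assuming
# (hypothetically) that each whole step corresponds to a factor of 10.
def relative_achievement(rating, base=10.0):
    return base ** (-rating)

for name, rating in [("Einstein", 0.5),
                     ("Bohr, Heisenberg, Schroedinger", 1.0),
                     ("Landau (self-assigned, revised)", 2.0),
                     ("Mermin (self-assigned)", 4.5),
                     ("a 'pathologist'", 5.0)]:
    print(f"{name}: {relative_achievement(rating):.1e}")

# On this reading Einstein outranks a "pathologist" by a factor of
# 10**4.5, i.e. roughly 30,000.
```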
Reading about Landau's rating system has caused me to reflect on creativity in science. After all, what is it that distinguishes a 1 (or even a 0.5!) from a 5 on Landau's scale? I would argue that it is most certainly creativity. It surely is not hard work, for though the great physicists were passionate about their subject and thus undoubtedly worked quite hard, I am certain that many who have worked as hard or harder still rate but a 5. It cannot be anything like mathematical ability. Einstein's self-reported troubles with math are well-known, and Bohr was apparently wretched at doing serious calculation. I suppose one might cite physical intuition as the determining factor, but what exactly does that mean? Physics professors usually mean by "physical intuition" a certain level of internalizing of the known laws of physics. But the great physicists were great specifically because their thinking was NOT limited by an internalization of the known laws. They were able to see beyond what was known. I don't know what else to call this but creativity.
Creativity is an issue that has been much on my mind as I study the philosophy of science. Philosophers of science tend to ignore the creative aspect of science. I think this is for two reasons. First of all, philosophy of science is most often an attempt to rationally reconstruct the actual activities of science. How can one rationally reconstruct an act of creativity? Second, philosophers of science tend to focus on how scientific theories are tested and how the decision to modify a theory comes about. They focus much less on how new theories are constructed or how old theories are modified. The point of the philosophy of science is not so much to explain how scientists construct theories as it is to explain how we can make sure those theories are legitimate (or useful, or not blatantly wrong, or not entirely metaphysical). The creative act of science is thus outside the purview of philosophy of science. The main exception I can think of would be the early inductivists, who saw theories as generalizations from the data. But I don't think anyone would seriously argue that, even when such generalizations do occur (and I think they occur infrequently), this process is straightforward or simple.
But even if philosophers of science can't explain scientific creativity, is it possible to classify it in some way? It seems to me that scientific creativity is much like creativity in other areas. I'd like to draw some comparisons between physics and literature to illustrate this point. Let me make clear at the outset that I am no expert on English literature, and I am largely ignorant of non-English literature. But hopefully if I stick to things that I have read and that I know are widely acclaimed, I'll muddle through this without offending anyone.
Most of science is performed by people who are creative only on a small scale, in a somewhat workaday fashion. These would be Landau's 5's (I would rank myself among them, if I deemed myself worthy of any rank at all). Creating new scientific knowledge of any kind requires a certain level of creativity, in that you are doing something that has not been done before and therefore you cannot follow any sort of template. Probably most of the creativity actually comes in formulating a question to study, or at least choosing some investigation to perform (I rarely have a well-formulated question when I begin my research, but I usually do have some idea of something into which I want to poke my nose). Starting a research project involves a suite of choices that cannot usually be guided by established principles. In any field of science there are an infinite number of factors that can be analyzed, or relations that can be investigated. For most scientists the creative act comes in choosing which of these infinite possibilities will be productive or interesting. From that point on the work may involve only well-trodden pathways. I think this is something like most popular fiction. The author must come up with an idea for a novel or story, and if they are to avoid charges of plagiarism it must be an idea that is new on some level. But the typical work of popular fiction is pretty similar to something that has gone before, and both the prose and the literary conceits are likely to be standard fare. If Landau had rated novelists he would probably consider most authors on the NY Times Bestseller list to be "pathologists".
Somewhere much farther up the chain come those who extend the boundaries of the field in a significant way. This can be done by breaking new ground, or by finding hitherto unknown connections between disparate areas. In physics this might include those like Dirac or Feynman, who worked within the framework of quantum mechanics but extended that framework into new and unexpected territories (Poincare would be a similar example in the realm of classical physics). For an old-school example, Johannes Kepler might fall into this category since he worked within a framework established by Copernicus but made a crucial extension (to elliptical orbits) that turned out to make all the difference. The great unifiers would also fit in this category (here I am thinking mainly of Weinberg, Glashow, and Salam for their unification of electromagnetic and weak interactions, but classical physicists like Lagrange and Hamilton might also fit this category, as might James Clerk Maxwell). In my limited knowledge of literature I would put Vladimir Nabokov in this category. Lolita was not, really, an entirely new type of novel. But it was about things that no novel had been about before. Similarly, Pale Fire turns a poem (and its exegesis) into a novel and thereby creates a connection that had not been exploited before. Note that this categorization deals only with creativity. Dirac and Feynman both possessed incredible technical prowess in addition to their creativity. Similarly, Nabokov's prose is nothing short of breathtaking. Perhaps it is impossible to separate these attributes from creativity, but I am at least not explicitly taking them into account here.
At or near the top of the creativity scale are those who change the face of their fields forever. In physics this would include Landau's 1's: Bohr, Heisenberg, Schroedinger. I'd add Boltzmann and Faraday. To go way back we could add Galileo and Copernicus in this category. These people gave us a new way of understanding the natural world. They had the vision to see far beyond the existing theories, and the courage and creativity to construct something radically new. I suppose Joyce would be in this category for literature (though I must confess that I have only read Portrait of the Artist as a Young Man, and parts of Dubliners - I'll get to Ulysses someday soon, but I may not be strong enough for Finnegans Wake). I'd like to put Borges in this category as well (you see, I have read some non-English authors) because I think his creativity merits it, even if his prose does not (but then, I didn't read his work in Spanish so I can't really say). These authors wrote works that departed radically from the conventions of the day, and literature has not been the same since.
Now, what about that special case of Landau's: Albert Einstein, who merits a 0.5. I would argue that Isaac Newton merits the same special score. What author could merit such a special distinction? William Shakespeare? Fyodor Dostoyevsky? (OK, I admit that I'm trying to atone for my English-language leanings here.) I will let others more literate than I make that call. What is it that sets these people apart from the 1's? Again, I believe it is creativity, but it is a level of creativity that inspires awe. The work of a 5 may lead one to think "I would have thought of that if I had worked on that problem." The work of, say, a 2 might lead one to think "I wish I could have thought of that, but I doubt I would have." In the case of the 1's, we might think "I can't believe they thought of that; they are geniuses." For those in this special category no such thoughts come at all, and we are left to gaze in awe at a mind that operates on a level entirely different from our own.
I'll close this essay by pointing out some interesting features of what I have just written. It strikes me as curious that all of the physicists I mention are theorists, not experimentalists (except Galileo and Faraday, who were both). The authors I mention are all prose authors (well, except Shakespeare). The second fact follows directly from my own personal prejudices (I prefer prose to poetry), which in turn have influenced what I have read. But the exclusion of experimentalists seems odd to me in retrospect, and it would be false to claim that I am simply unaware of highly creative experimental work. Millikan was incredibly creative, as was Michelson. James Joule certainly deserves some high marks for his creativity in studying the relation between heat and mechanical energy. In fact, I think one could argue that in recent years experimentalists have demonstrated a higher level of creativity than have theorists. But somehow this seems like a different type of creativity. For one thing, it is highly constrained creativity in that experiments must make use of apparatus that either exists or can be built with a reasonable investment of time and money. Theoretical creativity is largely free from such practical constraints. Furthermore, experimental greatness requires a set of skills that are not specifically intellectual. Perhaps one day I will write an essay comparing the great experimental physicists to the great painters. Could I, then, rate da Vinci in the same category as himself? Probably not - he was much better as a painter than as an experimental physicist, although this was probably not due to a lack of creativity. Anyway, until I write that essay I will simply apologize to the experimentalists and try to redirect the blame toward Lev Landau who got me started thinking about all of this anyway.
Monday, October 1, 2007
Interpreting Quantum Mechanics
I just got my copy of the October American Journal of Physics (the best, though not the most prestigious, physics journal in the world). The Letters to the Editor section contains a letter by Art Hobson, written in response to a book review by N. David Mermin. The book that Mermin reviewed was Quantum Enigma by Rosenblum and Kuttner. I've not read the book myself, but I did read Mermin's review. One of his chief complaints (though not his only complaint, nor was his review wholly critical) was that in discussing various interpretations of quantum mechanics Rosenblum and Kuttner ignore the view that quantum states represent not physical states of a particle but rather states of our knowledge. Hobson rejects this view (as well as the view, evidently emphasized by Rosenblum and Kuttner, that perception of a measurement result by a conscious entity brings about a collapse of the wavefunction).
Now I have a great deal of admiration for both of the participants here. I am in the process of reading Mermin's Boojums All the Way Through. Mermin is without question the best prose stylist in physics (and apparently a major contributor in condensed matter physics, though that's not my field so I can hardly judge). Hobson, on the other hand, has been a champion for the social relevance of physics and for the teaching of physics to non-science students. I use his Physics: Concepts and Connections textbook for my liberal-arts physics course. While I would have read any letters on interpreting quantum mechanics with interest, the name recognition definitely made these letters stand out to me.
Hobson claims the view that quantum states are states of knowledge rather than states of some objective physical reality is an unnecessary extravagance. He argues that the analysis should really be done from the perspective of quantum field theory, and that most physicists certainly believe that quantum fields are objectively real (offering a quote from Weinberg that I have seen him use before). He then goes on to explain how decoherence, through interaction between the quantum system and its environment, transforms a quantum superposition into an incoherent state that can be described with a diagonal density operator. Hobson then declares that these incoherent states are no more mysterious than the proposition that there is a 0.5 probability that a coin flip will come up heads.
I find this last comment by Hobson particularly interesting in light of the position he is attacking. He wants to avoid the claim that quantum states are states of knowledge, yet he is reduced to saying that quantum probabilities are just like the probabilities involved in flipping a coin. But classical probabilities, like those for a coin toss, are invoked exactly because we lack knowledge. The equal probability of getting heads or tails when a coin is flipped does not represent anything objectively real about the state of the coin on a given flip. What it represents is the state of our knowledge about the coin. If we knew a great deal more about the coin's initial state, and about all the forces that act upon the coin, we could determine with near certainty which side of the coin would land up. It is only because we are ignorant of all this information that we must resort to probabilities. So Hobson's invocation of decoherence seems to support Mermin's view, rather than refute it. Indeed, decoherence can only be deemed to have fully solved the measurement problem if quantum states are only states of knowledge (because it reduces the quantum superposition to a classical mixture). If we believe that quantum states represent an objective reality then we are left wondering why decoherence fails to produce a single outcome (rather than a classical mixture of various outcomes). Certainly when we do measurements in the lab we get a single outcome each time (though not the same outcome every time we repeat the measurement).
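To make the point concrete, here is a toy single-qubit illustration. The exponential suppression of the off-diagonal terms is a standard caricature of decoherence, not a model of any particular physical system:

```python
import numpy as np

# Start in the superposition (|0> + |1>)/sqrt(2); its density matrix has
# off-diagonal "coherence" terms equal to 0.5.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)

def decohere(rho, gamma_t):
    # Suppress the off-diagonal elements by a factor exp(-gamma*t).
    out = rho.copy()
    out[0, 1] *= np.exp(-gamma_t)
    out[1, 0] *= np.exp(-gamma_t)
    return out

print(decohere(rho, 0.0))   # [[0.5, 0.5], [0.5, 0.5]]: coherent superposition
print(decohere(rho, 10.0))  # ~[[0.5, 0], [0, 0.5]]: diagonal, coin-flip-like

# Note that the final state is still a 50/50 mixture of the two outcomes.
# Decoherence alone never selects heads or tails, which is exactly why it
# only "solves" the measurement problem if quantum states are states of
# knowledge.
```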
I also find Hobson's reliance on quantum field theory to be a little problematic. Not so much for technical reasons as for pedagogical reasons. In fact, I have avoided moving to the new edition of his text in part because of this. It is not clear to me that all of the mysteries of quantum mechanics can be swept under the rug of quantum field theory. Quantum field theory has been very successful in describing a rather limited range of phenomena. But it's not clear to me that quantum field theory, as a model of physical interactions, completely contains and therefore exceeds non-relativistic quantum mechanics. In the same way, I have yet to be fully convinced that quantum mechanics completely contains classical mechanics. The idea that our "most fundamental" theory might not contain all the other "less fundamental" theories is anathema to most physicists, but it doesn't bother me since I don't believe in the idea of a final theory anyway. In any case, from the perspective of a non-science student I think blaming the whole mess on quantum fields is a bit like saying the Wazzleblatchet did it (which might be great for a Dr. Seuss tale, but not in my physics class).
Now this isn't to say that I side fully with Mermin on this debate. I have some issues with the idea that quantum states are "nothing more" than states of knowledge. As I said above, we use classical probabilities to represent states of knowledge. But clearly there is something different going on in quantum mechanics. So if quantum states are states of knowledge then our knowledge about quantum particles is constrained in some rather odd ways. In this sense, saying that quantum states are states of knowledge does little to dispel the mysteries of quantum mechanics. I'm not sure that is really Mermin's goal. His goal in the review was to combat the idea that consciousness brings about physical changes in some objectively real quantum state. If quantum states are states of knowledge then it is no surprise that the quantum state changes when a conscious entity becomes aware of a measurement result. But even this point of view does not wholly discount the idea that there IS an objective physical reality. I personally view any science (not just quantum mechanics) as the result of an interplay between our minds and an objective physical reality. Neither piece is wholly absent from classical physics, nor from quantum physics.
Saturday, September 29, 2007
Physics is a Liberal Art!
In my introductory essay for this blog, I argued that physics is a liberal art. I'd like to spend a little time making a stronger case for that argument. It seems to have become commonplace for people to equate the liberal arts to the humanities (and perhaps even to only certain disciplines within the humanities). The seven traditional liberal arts were rhetoric, grammar, logic, geometry, arithmetic, music, and astronomy. Of these it is easy to associate rhetoric and grammar with, say, a major in English (though English professors would cringe at the idea that they primarily teach rhetoric and grammar - of course, their focus is on literary criticism). Similarly, one can associate logic with philosophy and music with the fine arts in general. So there is no doubt that there is a big overlap between the traditional liberal arts and the humanities. But what about geometry, arithmetic, and astronomy? I see little choice but to equate these with the modern study of mathematics and science. Granted, a modern mathematics major will spend little time studying geometry (and hopefully none studying arithmetic, including "college algebra", which they should already know), just as English majors don't spend much time on grammar. Still, there can be no doubt that mathematics and science were very much a part of the traditional liberal arts.
Of course, one can argue that the term simply means something different now. But what did the term mean in classical and medieval times? It referred to areas of knowledge that were appropriate for free men, as opposed to more applied areas of knowledge that might be appropriate for slaves or serfs. So if we take the term to mean the same thing today (knowledge appropriate for free persons), then what should the liberal arts be in today's context? I have no easy answer for that, but I am absolutely certain that science must be a part of it. A free person in modern society must have a basic understanding of the methods of science, and at least some rudimentary scientific content knowledge. Why? Because science and its by-products pervade every aspect of modern society. Science drives our economies and has helped produce a worldview that is conducive to the modern democratic state (i.e. with the concept of universal natural laws). Those without any knowledge of science in today's world are in a dangerous situation because they can be easily controlled and manipulated by those who do understand science (and often by those who don't - just look at some political rhetoric and advertisements for pharmaceuticals). I hope no one would argue with the notion that all free persons should know how to read, write, and perform basic mathematical manipulations. I would place science right after these on the list of things a free person should know.
Now, I don't mean that every free person needs to major in science at the college level. Hardly. But a free person should possess a basic understanding of the methods of science, and some ability to distinguish science from pseudoscience and from that which is simply not science. Unfortunately, students who take science courses at the college level are often given an "introduction to the discipline" that focuses on content rather than methodology. These courses might make it seem like the purpose of studying science is solely to become a scientist. This makes the sciences seem more like applied disciplines rather than intellectual disciplines appropriate for all free persons. I think science should be taught as the liberal art that it is, rather than as vocational training. This is imperative for non-science majors who may take only one or two science courses. I am becoming increasingly convinced that we should also teach courses for science majors this way, at least at the introductory level. It may be appropriate for more advanced courses to take on a more "vocational" or "professional" feel once students have an understanding of the basic methodology of science.
One last thought on why science is a liberal art: science is a liberal art because people pursue science for the same reason that people pursue other liberal arts. Most English majors do not study English because they sense it will land them a high-paying job one day. Most Fine Arts majors don't view their education as preparation for a lucrative career as a painter, etc. But neither do physics majors study physics because it will get them a good job. Most of us study physics for the same reason that people write poetry: because it brings us joy. Doing physics is fun (at least, it is for me). Physics, and the other sciences, are very intellectually stimulating. Now perhaps this can be said of anything. I've talked with some marketing professors who make the study of marketing sound enjoyable and intellectually stimulating. But most students who study marketing probably do so because they want to get a job in that field. Physics students don't tend to think that way. Many physics majors go on to grad school, but I think this is primarily because they enjoy studying physics and they want to keep doing so. Others get jobs straight away, more often than not outside the field of physics. And that's fine, because they didn't major in physics as preparation for a specific job. They majored in physics because it challenged their minds, deepened their reasoning skills, improved their understanding of nature, and honed their mathematical ability. I believe the same is true for other liberal arts like English or History. They don't really serve to prepare you for a specific career (unless you want to teach), but they provide you with a set of intellectual skills that can enrich your life and make you capable of meeting almost any challenge.
I'd like to see the sciences receive recognition as liberal arts. I think the Humanities folks need to acknowledge science's rightful place among the liberal arts. I likewise think that quite a few scientists need to stop scoffing at the liberal arts and start recognizing that their own subject is as much a liberal art as History or Philosophy. That doesn't mean we can't continue to recognize certain boundaries between disciplines. Joyce's Ulysses is not science, any more than Einstein's "On the Electrodynamics of Moving Bodies" is literature. But both should be recognized as the great intellectual achievements they are. How much poorer we would be if we had only science, or only literature, but not both!
Saturday, September 15, 2007
The Scope of a Scientific Theory
In this essay I want to follow up my discussion of Lakatos' conception of scientific research programmes by describing Thomas Brody's conception of the scope of scientific theories. Brody's views, as set forth in his The Philosophy Behind Physics (a book which he did not complete before his death, and which includes some essays written by him that were never intended for the book), seem to have been largely ignored by the philosophy of science community. It may be that his ideas are fundamentally flawed, and have been ignored for good reason. I'm not enough of an expert to judge that. However, I find his concept of scope quite compelling. In particular, it seems to me that Brody's approach can be viewed as another way of keeping Popper's basic approach to scientific methodology while simultaneously addressing some of the problems that beset Popper's views (see my previous essay for a brief discussion of some of these problems).
Brody's understanding of scientific progress sees the evaluation of scientific theories as divided into two stages. In the first stage, a nascent theory gains support by accumulating confirming evidence (corroboration, in Popper's terminology). A newly proposed theory that fails to be supported by empirical evidence will likely be discarded. However, once a theory has survived this early stage it moves on to a phase in which the focus is on trying to find situations in which the theory fails (just as in Popper's falsification approach). The purpose of finding these failures, though, is not to falsify the theory in anything like an absolute sense. The theory will not be discarded simply because a few failures occur. Rather, these failures of the theory are used to delimit the theory's scope.
The scope of a theory, as Brody presents it, is something like the set of circumstances in which the theory will produce successful predictions. This definition, though, is probably too vague. If the successes and failures of the theory follow no apparent pattern then it is probably impossible to define the scope of that theory. But a theory's scope can become well defined if we can translate the "circumstances" in which the theory is used into something like a parameter space. If we then find that the theory produces successful predictions for some region of this parameter space, but fails to produce successful predictions outside this region, then the region of success effectively defines the scope of the theory. Brody seems to assume in his writing that we should expect theories to behave in this way. He does not address pathological cases in which the points in parameter space at which the theory is successful are intimately intermixed with the points at which the theory fails. I think this is because his concept of scope is largely derived from his view that all theories are approximations (see my essay on approximations in physics for my take on this). Mathematical approximations (such as truncating a Taylor series, for example) are generally valid for some range of values for the relevant variables and invalid outside of that range. Brody seems to think the scope of a theory can be determined in much the same way.
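As a concrete illustration of delimiting scope (my example, not Brody's), one can treat a truncated Taylor series as a miniature "theory" and map out the region of its one-dimensional parameter space where its predictions meet a chosen standard of accuracy:

```python
import numpy as np

# Treat the truncated Taylor series sin(x) ~ x as a "theory" and
# delimit its scope: the region of parameter space (values of x)
# where its predictions stay within a chosen tolerance.
x = np.linspace(0.001, 1.5, 1500)
rel_error = np.abs(x - np.sin(x)) / np.abs(np.sin(x))

tolerance = 0.01  # demand predictions good to 1%
in_scope = x[rel_error < tolerance]
print(f"scope of sin(x) ~ x at 1% accuracy: x < {in_scope.max():.2f} rad")
# roughly x < 0.24 rad (about 14 degrees)
```

Brody's suggestion, as I read it, is that the scope of a full physical theory can be mapped out in essentially this fashion, with experiments playing the role of the error calculation.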
Now I find this idea compelling because it avoids the assumption that I have claimed lies at the heart of naive falsificationism, namely that there is one "true" theory that is capable of predicting everything in the Universe. Brody sees scientific theories as human inventions, inventions that can at best approximate reality. Science, from this point of view, is not about pursuing absolute truth but about finding approximate truths and understanding when those approximate truths hold and when they do not. Brody finds it perfectly acceptable to have one theory that describes a phenomenon in a certain parameter range and another logically incompatible theory that describes the same phenomenon in a different parameter range. He discusses the various models used in nuclear physics in this context. It is possible that one could even view wave-particle duality (of photons, electrons, etc.) in this way, although it is not clear how one could define parameters such that wave behavior manifests for one range of parameter values and particle behavior for a different range.
Another reason I find Brody's idea compelling is that it seems to reflect some important parts of scientific history. When a well-established theory is falsified, historically scientists have not tossed the theory aside and moved on to something else. Certainly, there is a desire to formulate a new theory that will work where the previous theory failed. But quite often the "falsified" theory is kept on. If one can clearly determine the scope of the theory then it is rational to continue using the old theory in those situations which fall within its scope. Classical Newtonian mechanics is an excellent example of this. Newtonian mechanics has been falsified over and over, yet we have not tossed it aside (I teach a two-semester sequence on intermediate classical mechanics and I don't think I am wasting my students' time!). We still use Newtonian mechanics in those situations where we are confident that it will work. There may be a sense in which physicists are convinced that quantum mechanics and relativity are "truer" than Newtonian mechanics, and that we are only still willing to use Newtonian mechanics because it accurately approximates those "truer" theories in certain situations. But in the case of quantum mechanics, showing that the "truer" theory reduces to Newtonian mechanics in the appropriate circumstances has proved to be challenging (particularly in the case of chaotic systems). The same may be true for general relativity, although I know much less about that case. I think Brody would claim that we need not worry so much about this issue. As long as we know when it is okay to use Newtonian mechanics, then it is fine for us to do so. We don't have to convince ourselves that we are really using quantum mechanics and general relativity in an approximate form.
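To put a number on the Newtonian case (a rough sketch of my own, with illustrative speeds), one can compare the Newtonian kinetic energy with the relativistic value and watch the discrepancy grow with speed:

```python
import numpy as np

# Compare Newtonian kinetic energy (m v^2 / 2) with the
# relativistic value m c^2 (gamma - 1), both in units of m c^2.
beta = np.array([0.001, 0.01, 0.1, 0.5, 0.9])  # v/c
gamma = 1.0 / np.sqrt(1.0 - beta**2)

ke_newton = 0.5 * beta**2
ke_rel = gamma - 1.0
for b, kn, kr in zip(beta, ke_newton, ke_rel):
    print(f"v/c = {b:5.3f}: relative error = {abs(kn - kr) / kr:.2%}")
# The error is utterly negligible at cannonball speeds and grows to
# tens of percent as v approaches c: the cannonball lies inside the
# scope of Newtonian mechanics, the fast electron outside it.
```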
Now I think that the concept of scope helps resolve some of the problems associated with naive falsificationism, but it certainly doesn't settle all of them. In particular, it seems to suffer at the hands of the Duhem-Quine thesis. If a theory fails to predict the results of an experiment, how can we be sure that the experiment is outside the theory's scope? It could be that the failure is due to an auxiliary hypothesis (thus indicating that the experiment is outside the scope of that auxiliary hypothesis). So just as we can never falsify a theory in isolation, we can never determine the scope of a theory in isolation. We can only determine the scope of an entire network of theories that are used to predict the results of an experiment (and to interpret the experimental data). Another way to state this is that when we test a theory we must inevitably assume the validity of several other theories in the process. This assumption may prove to be correct, or it may not. Whenever we get a negative result, it could be a failure of the theory we are testing or it could be a failure of the assumptions we have made. This makes determining the scope of a theory a complicated process. In practice we must evaluate many theories at once and any failure signifies that we are outside the scope of at least one of the theories. Delimiting the scope of a set of theories thus becomes an endless process of cross-checking. So Brody's view faces some serious challenges - but I think it deserves more attention than it has received.
I'd like to close this essay by trying to tease out some similarities between the approaches of Lakatos and Brody. Both seem to build on Popper's basic premise. Both avoid inductivist ideas. Both attempt to defend the rationality of science (contra Kuhn and Feyerabend, etc.). I think one could even reformulate Lakatos' ideas using Brody's language. When we perform an empirical test of a theory we are really testing a whole network of theories and assumptions. However, based on the details of the experiment we may have different levels of confidence in the various theories that compose the network. We may be very confident that we are well within the scope of many of these theories/assumptions, and therefore we would be very unlikely to blame any failure on these parts of the network. The theories or assumptions in this group would form the "hard core" in Lakatos' terminology. On the other hand, we may be less certain about where the experiment falls in relation to the scope of other theories and assumptions in the network. We would be much more likely to blame a failed prediction on one of these theories/assumptions. This group of theories and assumptions then forms the "protective belt". This represents a significant change in Lakatos' conception (at least, as far as I understand it) because now theories could move between the hard core and the protective belt depending on the context of the experiment. I think this is a step in the right direction because it provides some much-needed flexibility. In particular, it opens up the door for falsifying (or at least delimiting the scope of) those theories which are part of the hard core. If a theory that is in the hard core is always in the hard core then it would seem to be unfalsifiable, and thus it would become a metaphysical principle or a convention rather than a physical theory. Yet, this idea does allow for the possibility that some theories (or principles, or whatever) could have universal scope and could therefore be "permanent members" of the hard core.
I have actually used Brody's concept of scope in teaching students about the nature of science. I have them perform an experiment to determine the relation between the period of a pendulum's oscillations and its length. They consider two mathematical models: one in which the period is a linear function of length, and one in which the square of the period is a linear function of length. They generally find that both models work well to fit their data. They then use each model to predict the period of a 16-meter pendulum, and then they actually measure the period of such a pendulum (we have a 16-m Foucault pendulum in our lobby). They find that the second model's prediction is reasonably close, while the first model is way off. We could consider this a falsification of the first model, but I try to lead them toward a different conclusion: that we have really just shown that long pendulums lie outside the scope of the first model. In fact, if we made the pendulum VERY long (say, a significant fraction of Earth's radius) then we would find the second model would fail as well. The basic idea is that all models have a finite scope, so a failure of a model doesn't mean we should discard it or else we would discard everything. However, in choosing between two models we may find that the scope of one completely encloses but extends beyond the scope of the other. In that case we would clearly prefer the model that has the wider scope. On the other hand, if the two models had scopes that overlapped only partially or else did not overlap at all then it would be quite reasonable to keep both models around so that we can use the model which is most appropriate for the prediction we are trying to make.
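Here is a minimal sketch of that exercise (the "data" below are simulated from the standard small-angle formula T = 2π√(L/g) rather than taken from my students' actual measurements):

```python
import numpy as np

g = 9.81  # m/s^2
L_short = np.linspace(0.2, 1.0, 9)            # short pendulums (m)
T_short = 2 * np.pi * np.sqrt(L_short / g)    # simulated periods (s)

# Model 1: T is a linear function of L.
a1, b1 = np.polyfit(L_short, T_short, 1)
# Model 2: T^2 is a linear function of L.
a2, b2 = np.polyfit(L_short, T_short**2, 1)

# Extrapolate both fits to the 16 m Foucault pendulum.
L_big = 16.0
T_true = 2 * np.pi * np.sqrt(L_big / g)
print(f"measured (simulated): T = {T_true:.1f} s")
print(f"model 1 (T ~ L):      T = {a1 * L_big + b1:.1f} s")
print(f"model 2 (T^2 ~ L):    T = {np.sqrt(a2 * L_big + b2):.1f} s")
# Both models fit the short pendulums well, but only model 2
# survives the extrapolation: the 16 m pendulum lies outside the
# scope of model 1.
```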
Saturday, September 8, 2007
A First Response to Naive Falsificationism
I've just finished reading Karl Popper's Logic of Scientific Discovery (including all of the "starred" appendices), so now is as good a time as any to write about two responses to Popper's ideas that have been on my mind lately. Let me start by giving a caricature of Popper's doctrine of falsificationism. The basic idea is that scientific theories are bold conjectures about the physical world, which must then be subject to stringent empirical tests. Any theory which fails such a test must be discarded, and a new bold conjecture must be put forth. This notion of the methodology of science avoids many of the pitfalls of inductivist thinking (such as the verificationist approach of the logical positivists), but it suffers from its own set of problems. I'll list some of the more glaring ones here. First of all, it would be unwise to throw out a theory which has failed an empirical test unless there is a comparably good theory available that has yet to fail any empirical tests. In other words, just because a theory is "wrong" doesn't mean it is not better than nothing. Secondly, any empirical test will test not just a single theory but rather a whole set of theories (which may or may not be related to each other) along with various other assumptions and approximations that must be used to relate the raw empirical data to the prediction made by the theory. If the prediction fails to match the empirical data, it is not at all clear that this must be the fault of the theory that is supposedly being tested. Finally (for this essay - this is by no means a comprehensive list of problems with Popper's doctrine), naive falsificationism contradicts the history of science. Every theory that we hold today has been "falsified" at some point, and it seems likely that any future theory will suffer the same fate. Clearly we are not willing to throw out all of our current science because a few contradictions have been found.
One thing that was interesting for me in reading Logic of Scientific Discovery is that Popper seems to have been quite aware of most, if not all, of these issues. In fact, he really says very little about falsification itself. His main concern is to define falsifiability, and to show how the falsifiability of a theory can be used as a criterion for demarcation between science and pseudoscience. It seems clear from reading the book that Popper understood falsification was not a simple matter. Regardless of what Popper thought, I'd like to analyze a basic assumption that I think would have to underlie the naive falsificationist view (if there was anyone who actually held this view). The assumption is this: that there is a universally correct theory, a theory which would deliver predictions that would be correct in all circumstances. If this were true, then any theory which made a prediction that failed would clearly not be that universally correct theory and could thus rationally be discarded in hopes of reaching that "final theory." (Of course, this assumes that our empirical test was valid and that all the information that was used, along with our theory, to make the predictions was accurate.) I have argued in a previous essay that I believe such a final theory is impossible. All theories are, I believe, approximations and all approximations are valid only in certain situations.
I would like to discuss two attempts to remedy some of the problems with naive falsificationism. (I'm sure there have been attempts other than these, and certainly many philosophers of science have been quite content to discard Popper's ideas altogether, but I'm going to focus on these two attempts to retain Popper's basic approach.) The first approach I want to discuss is well known: Imre Lakatos' "methodology of scientific research programmes." The second is, I think, less well known: Thomas Brody's concept of the "scope" of scientific theories. Actually, I am personally more familiar with Brody's ideas because I have read his work, while I have only read descriptions of Lakatos' ideas written by others (I'll fix that soon, I hope - I have at least a few of Lakatos' key papers in my library). I'll describe Lakatos' idea in the remainder of this essay. In my next essay I'll describe Brody's approach and try to analyze the similarities and differences between these two views.
Lakatos conceives of research programmes as conglomerations of theories and assumptions. As noted above, any empirical test cannot test a single theory but rather tests a whole group of theories (this is often referred to as the "Duhem-Quine thesis"). Lakatos realized that in the face of this problem scientists must make a decision about which piece of the collection of theories and assumptions that are relevant to the empirical test gets falsified and which pieces are still considered valid. Lakatos proposes that each research programme has a "hard core" that is considered almost sacred. The hard core consists of the fundamental theories that scientists will simply refuse to abandon, even in the face of an apparent falsification. Surrounding this hard core of theories is the "protective belt." The protective belt consists of lesser theories and a wide variety of assumptions to which scientists feel less compelled to cling. If a prediction fails to accord with the empirical test then this failure will be blamed on some element of the protective belt rather than on one of the theories that comprise the hard core. Thus the protective belt serves to shield the hard core from falsification. (Note that this is exactly contrary to what Popper says we ought to do with our theories, since he claims we should NEVER shield them from falsification but rather subject them to the most extreme tests.) This is somewhat similar to Poincaré's distinction between principles and laws. Principles, for Poincaré, are true by convention and cannot be subject to empirical tests, while laws are empirically testable.
Lakatos' approach does appear to solve the problems with naive falsification that were mentioned above. It explains why, for example, we did not abandon Newtonian mechanics and gravitation when the predicted location of the planet Uranus was found to disagree with observations. Newton's theories were part of the hard core (at the time), while our knowledge of the objects that populate the solar system was part of the protective belt. Scientists were more willing to postulate an unknown planet than they were to modify or discard Newtonian theory. This turned out to be a wise choice, and the discovery of Neptune soon followed. Nevertheless, there are some problems with Lakatos' ideas as well. It seems very problematic that we should protect our theories from falsification. Indeed, how can theories that reside in the hard core ever be found false (if indeed they are so)? I suppose one can claim that repeated falsifications, even when the protective belt has been modified to account for past problems, might begin to weaken the hard core and eventually some portions of the hard core might "slip" into the protective belt. Perhaps the rise of quantum mechanics can be viewed in this way, although this view does not seem to apply to the development of relativity (Michelson and Morley provided the repeated falsification, but it seems that Einstein did not even consider this in developing his special theory).
There is one issue I have wondered about, and which Lakatos may have addressed although I am unaware of it. The issue is this: must the hard core and protective belt be fixed at any given time, or can theories and assumptions move from the hard core to the protective belt, and vice versa, depending on what kind of prediction we are making? I discussed in my previous essay how the validity of approximations depends on the use to which they are put. This seems relevant to the idea of defining a hard core as well. In certain contexts we might feel that a theory is to be regarded as unquestionably true, while in another context we might consider the theory subject to falsification. One could look at Newtonian mechanics that way. I think any physicist would be unwilling to accept that a cannonball moved in a way that contradicted Newtonian mechanics. On the other hand, the same physicist would quite willingly admit that Newtonian mechanics is falsified by the motion of an electron. Now if one takes the view that science should strive for a final theory that works in all possible contexts, then this idea wouldn't make much sense. If falsification in any context implies total falsification (i.e. we declare the theory "wrong" and discard it) then it would not make sense to consider a theory falsifiable in certain situations but not in others. However, if we accept that scientific theories must always be approximate and provisional then one might be willing to move a theory from the hard core to the protective belt (or vice versa) based on the context of the prediction.
I'll leave my discussion of Lakatos' research programmes for now, because I think Brody's concept of scope can illuminate some of these issues. I'll tackle that in my next essay.
Sunday, September 2, 2007
Approximation in Physics
Science is all about approximations. The importance of approximation is perhaps most clear in the discipline of physics, which deals with simpler systems than does chemistry or biology. In any case, physics is my discipline so I will focus mainly on approximation in physics. Certainly it is in physics that it is easiest to quantify our approximations, and this may make the role of approximation stand out more clearly in physics than it does in other sciences. Nevertheless, approximation is an important, indeed a fundamental, part of any science.
Let's start by distinguishing two ways that approximation shows up in science (for now I'll focus on physics). One way to make use of approximation is within a model. When I use the word model I mean something a bit broader than what is meant when we talk about "the Bohr model of the atom." I would like to use the term "model" in something like the way that it is used in "the Standard Model of Particle Physics", although without any assumption that the model represents anything fundamental. (For that matter, I have doubts that the Standard Model represents anything fundamental, but that's another story - sort of.) I can make this clearer by describing a specific example. Let's take a model of the solar system. We might posit that each object (planet, asteroid, Sun, etc.) can be represented by a mathematical point in a three-dimensional (Euclidean) space. Each object is also assigned a mass. We then state that these point masses move in such a way that they obey Newton's three Laws of Motion combined with Newton's Universal Law of Gravitation. Now if we specify the initial conditions for each point mass (position and velocity) at some time t=0, then we can determine the state of our model at any future time. In essence, by "model" I mean a description of a system that includes a list of the basic entities of which the system is composed along with the relevant properties of those entities, as well as any rules or laws that dictate the behavior of those entities. I'll leave it open whether or not the initial conditions constitute part of the model.
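To make that bookkeeping concrete, here is a minimal sketch of what such a model might look like as a data structure (Python is my choice here, and the names Body and SolarSystemModel are purely illustrative, not anything standard):

```python
from dataclasses import dataclass, field

@dataclass
class Body:
    """A basic entity of the model, carrying only the properties we deem relevant."""
    name: str
    mass: float            # kg - the single intrinsic property we keep
    position: list         # [x, y, z] in meters (initial condition)
    velocity: list         # [vx, vy, vz] in m/s (initial condition)

@dataclass
class SolarSystemModel:
    """The entities and their properties. The 'laws' (Newton's laws of
    motion plus universal gravitation) live in whatever code evolves
    this state forward in time."""
    bodies: list = field(default_factory=list)

# Initial conditions at t = 0 for a two-body toy version:
model = SolarSystemModel(bodies=[
    Body("Sun",   1.989e30, [0.0, 0.0, 0.0],      [0.0, 0.0,     0.0]),
    Body("Earth", 5.972e24, [1.496e11, 0.0, 0.0], [0.0, 2.978e4, 0.0]),
])
```

Whether the two Body(...) lines at the bottom count as part of the model or as separate input is exactly the question I left open above.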
Now when I talk about using approximations within a model I mean finding the approximate behavior of the entities in the model, rather than the exact behavior according to the rules inherent in the model. For example, in our solar system model we could decide to ignore the gravitational effects of small bodies like asteroids and instead focus only on the gravitational pull of the planets (and perhaps a few larger moons). Note that this approximation can actually take two forms. One way to approach this approximation is to cut the small bodies out of the model entirely. In other words, let's just pretend for the moment that there are no asteroids or small moons in the solar system. This gives us a new, simpler model in which the list of basic entities is different from that of our original model. An entirely different approach to this approximation would be to keep the asteroids and small moons, but ignore the gravitational pull of these objects on the planets, etc. (but not the pull of the planets on the asteroids and small moons). This gives us a new model that has the same list of basic entities, but which follows a different set of laws. In particular, violations of Newton's Third Law of Motion (that objects exert equal magnitude forces on each other in opposite directions) and Newton's Universal Law of Gravitation (that all massive bodies exert gravitational forces on all other massive bodies) abound in this model. There is a sense in which this version of our approximation actually leads to a much more complicated model (sometimes the Third Law is satisfied, sometimes not), but in practice it turns out to be easier to make quantitative predictions in this model than in the original one.
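The difference between the two forms is easy to see in code. Here is a sketch of the force calculation only (the exerts_gravity flag is my own invention for illustration, and I've omitted the time-stepping). The first form of the approximation would simply delete the asteroids from the arrays; the second keeps them but marks them as non-sources, which is precisely where the Third Law violations creep in:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accelerations(pos, mass, exerts_gravity):
    """Acceleration on every body due only to bodies flagged as sources.

    Second form of the approximation: asteroids get exerts_gravity=False,
    so they feel the planets' pull but exert none in return - Newton's
    Third Law is deliberately broken for those pairs.
    """
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j or not exerts_gravity[j]:
                continue  # skip self-force and pulls from test particles
            r = pos[j] - pos[i]
            acc[i] += G * mass[j] * r / np.linalg.norm(r)**3
    return acc
```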
Note that which of these two approaches you take will depend heavily on what you wish to get from your model. For example, if you are trying to predict the motion of Mercury either version of this approximation might do the job nicely but the first version will probably be easier to use. On the other hand, if you are trying to predict possible asteroid impacts on Earth then clearly the first version of the approximation will not do the trick (I can tell you what THAT version will predict right now!). This raises a critical point about how approximations are used in science. The validity of an approximation cannot generally be evaluated on its own. Rather, it can only be evaluated in light of the goals of the scientist. For some purposes the first version of our approximation might be perfectly valid, but for other purposes it will be completely useless. This point may not seem all that striking when we are discussing approximation within a model, but it will also be important for our discussion of the second way that approximation is used in science.
The second way that approximation is used is that our model itself is an approximation of a real physical system (or at least a possible physical system - as a computational physicist I often employ models that likely don't correspond to anything that currently exists in the real world, but give the atom optics people enough time and they'll probably create one). Our solar system model is presumably meant to be a model of the actual planets and other bodies that orbit our Sun. But even the original version of our model was very much an approximation. To begin with, we ignore every property of the objects in the solar system other than their position (and associated time derivatives) and mass. We ignore their chemical composition, their temperature, their albedo, their rotational angular momentum, and on, and on. Furthermore, we ignore the fact that there is a lot of stuff outside the solar system that could, at least in principle, influence the motion of the objects within the solar system. These approximations amount to making cutoffs in our list of basic entities and cutoffs in our list of the relevant properties of those entities. But that's not where the approximations stop. We also make approximations in regard to the laws or rules of our model. Our solar system model ignores all forces other than gravitation. We also chose to use Newton's Laws of Motion, which we know are only approximately correct. Indeed, we completely ignore quantum effects (perhaps justifiably) as well as relativistic effects (that's harder to justify - after all, we know we won't get Mercury's orbit right if we don't use General Relativity).
Now after thinking hard about all the approximations that are intrinsic to our model of the solar system, we might begin to question whether the model is actually good for anything. It seemed pretty good at first (after all, we were prepared to keep track of a point mass to represent every object in the solar system). But now it seems like a joke. How can we convince ourselves that it is not? In other words, how can we convince ourselves that this model is valid? Well, we saw that in the case of approximations made within a model the question of validity hinged on our purpose in using the model. The same is true here. Suppose we want to predict what the Moon will look like from Earth on a given night in the future. Well, if all we want to know is the phase of the Moon then our model may do the trick since it can predict (approximately) the location of the Moon relative to Earth and the Sun. If, on the other hand, we want to be able to say something about how bright the Moon will appear then we are lost since that will depend on the nuclear reactions that produce the Sun's luminosity, the albedo of the lunar surface, as well as local atmospheric effects on Earth. None of that is in our model, so we have no hope. So in some cases we can show that the model is invalid for a particular purpose without even making a prediction. For example, if we want to predict properties of objects that are not included in the model (either the properties or the objects or both) then the model will not be valid for that purpose. But if we cannot invalidate the model that way, the best way to test it is to make predictions with the model and see how they stand up against the actual behavior of the physical system we are trying to model. If we wanted to predict Earth's orbit around the Sun for the next one hundred years we would probably find that our original solar system model would do a bang-up job.
There may be other ways to validate our approximations, particularly if they can be quantified. For example, we can justify ignoring quantum effects in our solar system model because they simply won't show any noticeable effect at the scale we are examining. We can justify ignoring the gravitational forces exerted on the planets by stars other than the Sun because those forces are negligibly small. Again, though, whether or not these approximations are acceptable will depend on our purpose for using the model (just how precisely do you want to predict Earth's orbit?).
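To put a rough number on that second claim, here is a back-of-the-envelope comparison (all values rounded; the combined mass and distance I use for the Alpha Centauri system are my own approximations):

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
M_sun  = 1.989e30    # kg
r_sun  = 1.496e11    # m, Earth-Sun distance
M_acen = 2.2e30      # kg, rough combined mass of Alpha Centauri A and B
r_acen = 4.1e16      # m, roughly 4.4 light-years

a_sun  = G * M_sun  / r_sun**2    # ~6e-3 m/s^2, Sun's pull on Earth
a_acen = G * M_acen / r_acen**2   # ~9e-14 m/s^2, nearest star's pull on Earth
print(a_sun / a_acen)             # ~7e10
```

Ten orders of magnitude or so is a comfortable margin for most purposes, though even that judgment is purpose-relative.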
Now we're getting close to the main point I want to make. I hope it is clear that approximations pervade everything we do in physics. Approximations are such a fundamental part of doing physics that physicists tend to forget they are there. But all of physics is an approximation. It is simply not possible to do any physics without making some approximations. We must truncate the list of basic entities in our model (otherwise we would have to include everything in the Universe - and I don't just mean the visible Universe). We must truncate the list of properties that the basic entities possess (otherwise we must consider all possible properties - and who can say what unknown properties might be possessed by some object in the Universe?). Ideally we will make use of our best physical laws, but even these are almost certainly approximations.
I probably wouldn't get much argument from other physicists about what I've said so far, but now I'd like to claim something more controversial. I am convinced that understanding the role of approximation in physics makes it impossible to believe in a "final theory" (or at least impossible to believe that we will ever find the final theory). If we are willing to acknowledge that everything we now call physics is, at some level, an approximation then we must accept that any new theories must potentially be approximate as well. How would we be able to determine that a new theory was not an approximation, that it was in fact the "final" theory? Presumably we would use the theory (or model) to make predictions and then see if those predictions match our observations or experimental results (and I'll pretend for the moment that we can do this in some obvious way and actually know whether or not the predictions match the observations - reality is much trickier than this). But even if the match was perfect that would only validate the theory for that prediction (or at best for some class of predictions). We can only test the validity of an approximation in the context of some particular purpose. To show that a theory is universally valid (i.e. that it is a final theory) we would have to show that it is valid for all possible purposes. I simply don't think that it is possible to do this. I'm really not even sure what it would mean to make the claim that a theory is valid for all possible purposes.
So I don't believe in a final theory. I believe that all of physics as it currently exists is an approximation, although an astoundingly good approximation for most of the purposes for which we have used it. I believe that all future physics will still be an approximation, although it will likely be a better approximation for some of our old purposes and a valid approximation for some purposes of which we have yet to conceive. But physics will always have a limited scope (I'll save a detailed discussion of the idea of scope for a later essay - or just read Thomas Brody). The only argument I can see against this point of view is that if we find a theory that works well for all purposes that we can think of now, then it will probably work well for all purposes we devise in the future. But evaluating a theory based on its "logical probability" does not work well (I'll try to write about this later, but Karl Popper has already made the point pretty well). I am convinced that any "Dreams of a Final Theory" will remain just that (with or without the Superconducting Supercollider).
P.S. Yes, I've read the book by Steven Weinberg whose title I quote above. No, I am not trying to criticize that book or Weinberg himself. In fact, I had two semesters of quantum field theory from Weinberg when I was a grad student. The man clearly knows a lot more about physics than I do, and he has thought a lot more about philosophy than the vast majority of physicists. Even if, as a pond minnow, I wished to pick a fight with one of the great Leviathans of the sea, I would not choose to pick a fight with Weinberg!
Why Noninertial Frame?
I'll try to answer two versions of the above question. First let me answer this version: why have I created this blog (which happens to be titled "Noninertial Frame")? My reason is fairly simple. I am a professional physicist and a professor at a liberal arts college. Both my undergraduate and doctoral degrees are from research universities, but somewhere along the way I picked up the liberal-arts mindset. I have a deep love for the liberal arts, including physics. Yes, physics is one of the liberal arts. It is part of the Quadrivium (which consists of arithmetic, geometry, music, and astronomy or cosmology - I'm counting this last as including physics). Although physics is my first academic love, it is not my only one. I am particularly interested in philosophy (part of the Trivium which includes grammar, rhetoric, and logic), primarily in philosophy of science. As I have no professional training or expertise in any area other than physics (and a bit of math), I do not expect to engage in philosophy (or any other discipline except physics) in a professional capacity. But I love thinking about it and I love writing about it (and the writing helps me clarify my thinking). What better forum for unprofessional writing than a blog? With a blog I may actually find a reader or two who is interested in my thoughts, and I'll always know where to find that essay I wrote a few years back...
Now for the second version of the above question: why have I chosen to name my blog Noninertial Frame? Well, the term is used in physics for a reference frame that does not obey Newton's Laws of Motion (specifically the Law of Inertia). Generally this means the reference frame is accelerating with respect to frames which do obey Newton's Laws. Why is this apropos for my blog? First of all, it's a physics term and I am a physicist. I expect everything I write to be connected to physics, if not entirely about physics. Second, I am writing from the perspective of a liberal arts physicist for whom the usual rules for physicists (focus on research, don't muck about with philosophy, etc.) don't seem to apply. Finally, I'm hoping that this blog will help me maintain my interest in the connections between physics and the other liberal arts, so that I don't settle into the inertia of a routine career in physics teaching and research.
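For readers who haven't met the term in a physics course, the standard statement (not original to me) is that in a frame accelerating at $\vec{A}$ relative to an inertial frame, Newton's second law picks up a fictitious force term:

$$ m\vec{a}\,' = \vec{F} - m\vec{A} $$

This is why you feel thrown forward when a bus brakes: nothing is actually pushing you, but in the bus's (noninertial) frame it looks as if something is.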
Now, what can you expect to find here? Mostly I will post essays (occasionally lengthy ones, I would guess) about issues related to physics but not strictly within the domain of physics per se. These essays will mostly be philosophical, but I will likely work in some mathematics and maybe a few discussions of literature and music (to round out the seven liberal arts). I'll keep it all tied to physics, though, because if it has nothing to do with physics then there is probably no reason for anyone to read what I write.
I hope this blog will be useful for someone other than me, but if not then I can live with that. Cheers!