Nature just published an opinion piece by George Ellis and Joe Silk warning that the “integrity of physics” is under threat from people who claim that physical theories should no longer require experimental verification. The article seems quite strange to me. It contains some well-argued points but mixes them with some muddled thinking. (This is a subject that seems to bring out this sort of thing.)
Although this article seems to me to be flawed, I have enormous respect for the authors, particularly for Joe Silk. Very, very far down the list of his accomplishments is supervising my Ph.D. research. I know from working with him that he is a phenomenal scientist.
I’ll comment on a few specific subjects treated in the article.
1. String theory.
The section on string theory is the best part of the article (by which I mean it’s the part that I agree with most).
As is well known, string theory has become very popular among some theoretical physicists, despite the fact that it has no prospect of experimental test in the foreseeable future. Personally, I have a fair amount of sympathy with Silk and Ellis’s view that this is an unhealthy situation.
Silk and Ellis use an article by the philosopher Richard Dawid to illustrate the point of view they disagree with. I’d never heard of this article, but it does seem to me to reflect the sorts of arguments that string theory partisans often make. Dawid seems to me to accept a bit uncritically some elements of string-theory boosterism, such as that it’s the only possible theory that can unify gravity with quantum physics and that it is “structurally unique” — meaning that it reproduces known physics with no adjustable parameters.
Silk and Ellis:
Dawid argues that the veracity of string theory can be established through philosophical and probabilistic arguments about the research process. Citing Bayesian analysis, a statistical method for inferring the likelihood that an explanation fits a set of facts, Dawid equates confirmation with the increase of the probability that a theory is true or viable. But that increase of probability can be purely theoretical. Because “no-one has found a good alternative” and “theories without alternatives tended to be viable in the past”, he reasons that string theory should be taken to be valid.
I don’t think Dawid actually mentions Bayesian reasoning explicitly, but this does seem to be a fair characterization of his argument.
I have no problem in principle with the idea that a theory can be shown to be highly probable based only on theoretical arguments and consistency with past data. Suppose that someone did manage to show that string theory really did have “structural uniqueness” — that is, that the mathematics of the theory could be used to derive all of the parameters of the standard model of particle physics (masses of all the elementary particles, strengths of their interactions, etc.) with no adjustable parameters. That would be overwhelming evidence that string theory was correct, even if the theory never made a novel prediction. That hasn’t happened, and I don’t see evidence that it’s likely to happen, but it’s a logical possibility.
The last sentence of the above quote points to a good reason for skepticism. In your Bayesian reasoning, you should give significant prior weight to the possibility that we simply haven’t thought of the right approach yet. So although “This is the only viable approach we’ve thought of” does constitute some evidence in favor of the theory, it provides less evidence than some string theorists seem to think.
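To make that concrete, here’s a toy Bayesian update, with entirely made-up numbers, showing why “no one has found a good alternative” moves the needle only a little once you assign real prior weight to approaches nobody has thought of yet:

```python
# Toy Bayesian update with invented numbers (not a real calculation about string theory).
# Hypotheses:
#   H_string  : string theory is the right way to unify gravity with quantum physics
#   H_known   : some other already-proposed approach is right
#   H_unknown : the right approach is one nobody has thought of yet
priors = {"H_string": 0.2, "H_known": 0.3, "H_unknown": 0.5}

# Evidence E: "despite much searching, no good alternative has been found."
# E is unsurprising both if string theory is right and if the right approach
# simply hasn't been conceived yet.
likelihoods = {"H_string": 0.9, "H_known": 0.2, "H_unknown": 0.9}

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}
print(posteriors)
# -> H_string rises only from 0.20 to about 0.26; most of the update goes to H_unknown.
```

With these arbitrary numbers, the no-alternatives evidence does nudge string theory up a bit, but most of the probability ends up on the hypothesis that the right approach hasn’t been thought of yet, which is exactly the worry.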
So in the end I agree with Silk and Ellis that string theory should be regarded with great skepticism, although I’m not entirely in accord with their reasons.
2. Multiverse cosmology.
Silk and Ellis are quite unhappy with theories in which our observable Universe is just part of a much larger multiverse, and particularly with the sort of anthropic reasoning that’s often combined with multiverse theories. They claim that this sort of reasoning is anti-scientific because it doesn’t satisfy Karl Popper’s falsifiability criterion. This is a weak line of argument, as there are lots of reasons to regard Popperian falsifiability as too blunt an instrument to characterize scientific reasoning.
Silk and Ellis use an essay by Sean Carroll as their exemplar of the dangers of this sort of reasoning:
Earlier this year, championing the multiverse and the many-worlds hypothesis, Carroll dismissed Popper’s falsifiability criterion as a “blunt instrument”. He offered two other requirements: a scientific theory should be “definite” and “empirical”. By definite, Carroll means that the theory says “something clear and unambiguous about how reality functions”. By empirical, he agrees with the customary definition that a theory should be judged a success or failure by its ability to explain the data.
He argues that inaccessible domains can have a “dramatic effect” in our cosmic back-yard, explaining why the cosmological constant is so small in the part we see. But in multiverse theory, that explanation could be given no matter what astronomers observe. All possible combinations of cosmological parameters would exist somewhere, and the theory has many variables that can be tweaked.
The last couple of sentences are quite unfair. The idea behind these anthropic multiverse theories is that they predict that some outcomes are far more common than others. When we observe a particular feature of our Universe, we can ask whether that feature is common or uncommon in the multiverse. If it’s common, then the theory had a high probability of producing this outcome; if it’s uncommon, the probability is low. In terms of Bayesian reasoning (or as I like to call it, “reasoning”), the observation would then provide evidence for or against the theory.
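To spell out that last step, here’s a minimal sketch, again with invented numbers, of how an observation that is common or rare in the multiverse would shift the odds on the theory:

```python
# Minimal sketch with invented numbers: a multiverse theory is tested by asking how
# probable our observation is for a typical observer, compared with a rival theory.
def posterior_odds(prior_odds, p_obs_given_theory, p_obs_given_rival):
    """Bayes: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * (p_obs_given_theory / p_obs_given_rival)

# If the feature we observe is common in the multiverse (say 70% of observers see it)
# but expected only 5% of the time under a rival theory, the observation favors the theory.
print(posterior_odds(1.0, 0.70, 0.05))  # 14.0

# If the feature is rare in the multiverse (say 1% of observers), the same observation
# counts against the theory instead.
print(posterior_odds(1.0, 0.01, 0.05))  # 0.2
```

So the theory is not compatible with anything astronomers might observe: it is rewarded when we see outcomes it makes common and penalized when we see outcomes it makes rare.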
Now you can argue that this is a bad approach in various ways. To take the most obvious line of attack, you can argue that in any particular theory the people doing the calculations have done them wrong. (Something called the “measure problem,” the difficulty of defining probabilities over an infinite ensemble of universes, may mean that the ways people are calculating the probabilities are incorrect, and there are other possible objections.) But those objections have nothing to do with the supposed looming menace of anti-empiricism. If you make such an argument, you’re saying that the theory under discussion is wrong, not that it’s unempirical. In other words, if that’s the problem you’re worried about, then you’re having a “normal” scientific argument, not defending the very nature of science itself.
3. Many-worlds interpretation of quantum mechanics.
This is just a little thing that irked me.
Silk and Ellis:
The many-worlds theory of quantum reality posed by physicist Hugh Everett is the ultimate quantum multiverse, where quantum probabilities affect the macroscopic. According to Everett, each of Schrödinger’s famous cats, the dead and the live, poisoned or not in its closed box by random radioactive decays, is real in its own universe. Each time you make a choice, even one as mundane as whether to go left or right, an alternative universe pops out of the quantum vacuum to accommodate the other action.
Personally, I’m a fan of this interpretation of quantum mechanics. But as far as I know, everyone acknowledges that the Everett interpretation is “just” an interpretation, not a distinct physical theory. In other words, it’s generally acknowledged that the question of whether to accept the Everett interpretation is outside the domain of science, precisely because the interpretation makes no distinct physical predictions. So it’s disingenuous to cite this as an example of people dragging science away from empiricism.
Great minds think alike. See these posts from Sabine Hossenfelder’s blog:
http://backreaction.blogspot.de/2014/07/post-empirical-science-is-oxymoron.html
http://backreaction.blogspot.de/2014/12/10-things-you-didnt-know-about.html