Americans above a certain age will remember that the SAT used to include a category of “analogy questions” of the form “puppy : dog :: ______ : cow.” (This is pronounced “puppy is to dog as blank is to cow,” and the answer is “calf.”) Upon reading Peter Coles’s snarky and informative blog post about the recent quasi-news about the Higgs particle, I thought of one of my own: sigmas : probabilities :: ______ : fluxes. (By the way, Peter’s post is worth reading for several reasons, not least of which is his definition of the word “compact” as it is used in particle physics.)
Answer after the jump.
The answer is “magnitudes.”
Results like the recent LHC ones are always couched in terms of “sigmas”: ATLAS sees an excess signal at 2.3 sigma, for instance. What does 2.3 sigma mean in this context? It means that there is a certain smallish probability that the signal they saw could have been produced by random fluctuations, even if there’s nothing really there. For instance, if I haven’t made a mistake, “2.3 sigma” means a probability of about 2% that the result could have arisen by chance.
The probability is the interesting number: the lower that probability, the more exciting the result. But the quoted value (the number of sigmas) is related to the probability in an obscure way:
p = erfc(n / sqrt(2)),
where n is the number of sigmas, p is the probability, and erfc is the “complementary error function.” If (like most people) you don’t know what the complementary error function is, then you don’t know how to convert uninteresting sigmas into interesting probabilities.
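If you'd rather not look the function up yourself, here is a minimal Python sketch of the conversion, using `erfc` from the standard library; the 2.3-sigma check matches the roughly 2% figure above:

```python
from math import erfc, sqrt

def sigma_to_probability(n_sigma):
    """Probability that a result at least this many sigmas from zero
    arises by chance, assuming Gaussian errors: p = erfc(n / sqrt(2))."""
    return erfc(n_sigma / sqrt(2))

print(sigma_to_probability(2.3))  # about 0.021, i.e. roughly 2%
print(sigma_to_probability(5.0))  # about 5.7e-7
```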
In some situations, the number of sigmas is easier to calculate than the probability: if your errors are normally distributed, then 2.3 sigma means that your signal differs from zero (or from whatever you expected in the absence of a signal) by 2.3 times the standard deviation. But for things like the LHC results, people calculated the probabilities first (via Monte Carlo simulations) and only then translated them into sigmas.
That’s right: they started with the interesting number (the probability), applied a fairly obscure mathematical transformation to it, and reported the transformed value, so that everyone has to apply the inverse transformation to get back to the interesting number.
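To make that workflow concrete, here is a toy sketch in Python (every number below is made up for illustration; this is not the actual LHC analysis): estimate a probability by Monte Carlo, then dress it up in sigmas by inverting p = erfc(n / sqrt(2)).

```python
import numpy as np
from scipy.special import erfcinv

rng = np.random.default_rng(0)

# Hypothetical numbers, purely for illustration.
observed = 128         # observed event count
background_mean = 100  # expected count from background alone

# Step 1: the interesting number. Simulate background-only
# pseudo-experiments and count how often chance alone produces
# at least as many events as were observed.
fakes = rng.poisson(background_mean, size=1_000_000)
p_one_sided = np.mean(fakes >= observed)

# Step 2: the obscure number. Invert p = erfc(n / sqrt(2)).
# The factor of 2 converts the one-sided tail probability into
# the two-sided probability that the formula expects.
n_sigma = np.sqrt(2) * erfcinv(2 * p_one_sided)

print(f"p is about {p_one_sided:.1e}, reported as {n_sigma:.1f} sigma")
```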
The situation is quite similar to the magnitude system in astronomy. Magnitudes are a way of characterizing the brightnesses of stars. The physically sensible way to characterize a star’s brightness is to give a number called the flux, which is just the amount of power from the star striking your detector, per unit area of the detector. But traditionally astronomers don’t tell you flux; they tell you magnitude.
Magnitudes are related to fluxes by a certain mathematical transformation,
magnitude = -2.5 log(flux) + constant,
so if you know magnitudes you can get fluxes, but that’s no excuse for converting from a quantity with an obvious physical meaning to something arbitrary and obscure.
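For completeness, here is the corresponding two-way conversion as a Python sketch; the zero-point constant is a placeholder, since its real value depends on the photometric system:

```python
import math

def flux_to_magnitude(flux, zero_point=0.0):
    """magnitude = -2.5 * log10(flux) + constant."""
    return -2.5 * math.log10(flux) + zero_point

def magnitude_to_flux(magnitude, zero_point=0.0):
    """Invert the magnitude relation to recover the flux."""
    return 10 ** ((zero_point - magnitude) / 2.5)

# Sanity check: a difference of 5 magnitudes is a factor of 100 in flux.
print(magnitude_to_flux(0.0) / magnitude_to_flux(5.0))  # 100.0
```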
(By the way, Peter’s objection to sigmas is at least in part about something else entirely: he’s exercised about the whole Bayesian-frequentist business. He’s a Bayesian, as all right-thinking people are.)
Is there some international body that has decreed that a 5 sigma result counts as a definite detection of a particle (as opposed to a 4.9 sigma result, which would be strongly suggestive but not definitive)? And why 5 sigma? Why not 4 or 6?