Is the LHC doomed by signals from the future?

I guess this piece in the NY Times has been getting some attention lately.  It’s about a crazy theory by Nelson and Ninomiya (NN for short) in which the laws of physics don’t “want” the Higgs boson to be created.  According to this theory, states of the Universe in which lots of Higgses are created are automatically disfavored: if there are multiple different ways something can turn out, and one involves creating Higgses, then it’ll turn out some other way.  Since the Large Hadron Collider is going to attempt to find the Higgs, this theory predicts that things will happen to it so that it fails to do so.

Sean Carroll has a nice exegesis of this.  I urge you to go read it if you’re curious about this business.  There’s a section in the middle that explains the theory in more detail than you might want (unless of course you like that sort of thing).  If you find yourself getting bogged down when he talks about “imaginary action” and the like, just skip ahead a few paragraphs to about here:

So this model makes a strong prediction: we're not going to be producing any Higgs bosons. Not because the ordinary dynamical equations of physics prevent it (e.g., because the Higgs is just too massive), but because the specific trajectory on which the universe finds itself is one in which no Higgses are made.

That, of course, runs into the problem that we have every intention of making Higgs bosons, for example at the LHC. Aha, say NN, but notice that we haven't yet! The Superconducting Supercollider, which could have found the Higgs long ago, was canceled by Congress. And in their December 2007 paper (before the LHC tried to turn on) they very explicitly say that a "natural" accident will come along and break the LHC if we try to turn it on. Well, we know how that turned out.

I think Sean’s overall point of view is pretty much right:

At the end of the day: this theory is crazy. There's no real reason to believe in an imaginary component to the action with dramatic apparently-nonlocal effects, and even if there were, the specific choice of action contemplated by NN seems rather contrived. But I'm happy to argue that it's the good kind of crazy. The authors start with a speculative but well-defined idea, and carry it through to its logical conclusions. That's what scientists are supposed to do. I think that the Bayesian prior probability on their model being right is less than one in a million, so I'm not going to take its predictions very seriously. But the process by which they work those predictions out has been perfectly scientific.

Because I’m obsessed with Bayesian probabilities, I want to pick up a bit on that aspect of things.  NN propose an experiment to test their theory.  We take a deck of a million cards, one of which says “Don’t turn on the LHC.”  We pick a card at random from the deck, and if we get that one card, we junk the LHC.  Otherwise, we go ahead and search for the Higgs as planned.  According to NN, if their theory is right, that card will come up, because the Universe will want to “protect itself” from Higgses.
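
Just to make the protocol concrete, here’s a minimal simulation sketch (mine, not NN’s), assuming the strongest version of the theory, in which the bad card is certain to come up if NN are right:

    import random

    DECK_SIZE = 1_000_000  # NN's proposal: one bad card in a million

    def bad_card_drawn(nn_is_true):
        """Return True if the "Don't turn on the LHC" card comes up.

        Illustrative assumption: if NN's theory is true, the Universe
        rigs the draw so the bad card always appears; if it's false,
        the draw is an ordinary uniform pick from the deck.
        """
        if nn_is_true:
            return True
        return random.randrange(DECK_SIZE) == 0

    # With NN false, the card should come up about once per million draws:
    trials = 10_000_000
    hits = sum(bad_card_drawn(nn_is_true=False) for _ in range(trials))
    print(hits / trials)  # hovers around 1e-6

The interesting question is what we’d be entitled to conclude if that one-in-a-million card actually came up.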

I don’t buy this, though.  I don’t think there’s any circumstance in which this proposed experiment would provide a good test of NN’s theory.  To see why, we have to dig into the probabilities a bit.

Suppose that the Bayesian prior probability of NN’s theory being true (that is, our estimate of the probability before doing any tests) is p(NN).  As Sean notes, p(NN) is a small number.  Also, let p(SE) be the probability that Something Else (a huge fire, an earthquake, whatever) destroys the LHC before it finds the Higgs.  Finally, let p(C) be the probability that we draw the bad card when we try the experiment.  We get to choose p(C), of course, simply by choosing the number of cards in the deck.  So how small should we make it?  There are two constraints:

  1. We have to choose p(C) to be larger than p(SE).  Otherwise, presumably, even if NN’s theory is true, the Universe is likely to save itself from the Higgs simply by causing the fire, so the card experiment is unlikely to tell us anything.
  2. We have to choose p(C) to be smaller than p(NN).  The idea here is that if p(C) is too large, then our level of surprise when we pick that one card isn’t great enough to overcome our initial skepticism.  That is, we still wouldn’t believe NN’s theory even after picking the card.  Intuitively, I hope it makes sense that there must be such a constraint — if we did the experiment with 10 cards, it wouldn’t convince anyone!  The fact that the constraint is p(C) < p(NN) comes from a little calculation using Bayes’s theorem, which I sketch just below this list.
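
Here’s the sketch, under the simplest reading of the theory (the simplification is mine): if NN are right, the bad card is essentially certain to come up, so p(card | NN) ≈ 1, while if they’re wrong it comes up with probability p(C).  Bayes’s theorem then gives our degree of belief in the theory after the card is drawn:

    p(NN | card) = p(card | NN) p(NN) / [p(card | NN) p(NN) + p(card | not NN) (1 − p(NN))]
                 ≈ p(NN) / [p(NN) + p(C)],

using p(card | NN) ≈ 1 and 1 − p(NN) ≈ 1.  This posterior is close to 1 only when p(C) is small compared to p(NN).  If instead p(C) is much bigger than p(NN), drawing the card multiplies our credence by only a factor of about 1/p(C): with 10 cards, a one-in-a-million prior becomes about one in a hundred thousand, which is to say the theory remains crazy.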

In order for it to be possible to design an experiment that meets both of these constraints, we need p(SE) < p(NN).  That is, we need to believe, right now, that NN’s crazy theory is more likely than the union of all of the possible things that could go catastrophically wrong at the LHC.  Personally, I think that’s extremely far from being the case, which means that NN’s proposed test of their theory is impossible even in principle.
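
To put made-up numbers on it: take Sean’s prior p(NN) = 10^-6, and suppose p(SE) = 10^-3, i.e., a one-in-a-thousand chance that a fire, an earthquake, or a magnet failure wrecks the LHC anyway (that number is pure guesswork on my part).  Constraint 1 then forces p(C) > 10^-3, so even if we drew the card, the posterior from the calculation above would be at most

    p(NN | card) ≈ p(NN) / [p(NN) + p(C)] < 10^-6 / 10^-3 = 10^-3.

We’d have junked the LHC while still giving NN’s theory less than a tenth of a percent.  The window p(SE) < p(C) < p(NN) that the experiment needs simply doesn’t exist unless p(SE) < p(NN).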

(Constraint 1 already makes the experiment impossible in practice: it says that we have to take a risk with the LHC that is greater than all the other risks.  Good luck getting the hundreds of people whose careers are riding on the LHC to assume such a risk.)
