Thomas Bayes says that I shouldn’t believe in hydrinos

My recent post about Blacklight Power’s claim that there’s a lower-energy state of hydrogen (which they call a hydrino) seems to have gotten a bit of interest — at least enough to show that this blog isn’t completely a form of write-only memory. I want to follow up on it by explaining in a bit more detail why I’m so skeptical of this claim.  In the process, I’ll say a bit about Bayesian inference.  The idea of Bayesian inference is, in my opinion, at the heart of what science is all about, and it’s a shame that it’s not talked about more.

But before we get there, let me summarize my view on hydrinos.  If Randell Mills (the head of Blacklight Power, and the main man behind hydrinos) is right, then the best-tested theory in the history of science is not merely wrong, but really really wrong.  This is not a logical impossibility, but it’s wildly unlikely.

Let me be more precise about two phrases in the previous paragraph.

1. “The best-tested theory in the history of science.”  The theory I’m talking about is quantum physics, more specifically the quantum mechanics of charged particles interacting electromagnetically, and even more specifically the theory of quantum electrodynamics (QED).  Part of the reason for calling it the best-tested theory in science is some amazingly high-precision measurements, where predictions of the theory are matched to something like 12 decimal places.  But more important than that are the tens of thousands of experiments done in labs around the world all the time, which aren’t designed as tests of QED, but which depend fundamentally on QED.

Pretty much all of atomic physics and most of solid state physics (which between them are a substantial majority of physics research) depends on QED.  Also, every time an experiment is done at a particle accelerator, even if its purpose is to test other physics, it ends up testing QED. This has been true for decades.  If there were a fundamental flaw in QED, it is inconceivable that none of these experiments would have turned it up by now.

2. “Really really wrong.”  There’s a hierarchy of levels of “wrongness” of scientific theories.  For instance, ever since Einstein came up with general relativity, we’ve known that Newton’s theory of gravity was wrong: in some regimes, the two theories disagree, and general relativity is the one that agrees with experiment in those cases.  But Newtonian gravity is merely wrong, not really really wrong.

It’s intuitively clear that there’s something deeply “right” about Newtonian gravity — if there weren’t, how could NASA use it so successfully (most of the time) to guide space probes to other planets?  The reason for this is that Newtonian gravity was subsumed, not replaced, by general relativity.  To be specific, general relativity contains Newtonian gravity as a special limiting case.  Within certain bounds, Newton’s laws are a provably good approximation to the more exact theory.  More importantly, the bedrock principles of Newtonian gravity, such as conservation of energy, momentum, and mass, have a natural place in the broader theory.  Some of these principles (e.g., energy and momentum conservation) carry over into exact laws in the new theory.  Others (mass conservation) are only approximately valid within certain broad limits.

So Newtonian gravity is an example of a theory that’s wrong but not really really wrong.  I would describe a theory as really really wrong if its central ideas were found to be not even approximately valid, in a regime in which the theory is well-tested.  The existence of hydrinos would mean that QED was really really wrong in this sense.  To be specific, it would mean that bedrock principles of quantum physics, such as the Heisenberg uncertainty principle, were not even approximately valid in the arena of interactions of electrons and protons at length scales and energy scales of the order of atoms.  That’s an arena in which the theory has been unbelievably well tested.

There has never been an instance in the history of science when a theory that was tested even 1% as well as QED was later proved to be really really wrong.  In fact, my only hesitation in making that statement is that it’s probably a ridiculous understatement.

So what?  In the cartoon version of science, even one experiment that contradicts a theory proves the theory to be wrong.  If Randell Mills has done his experiment right, then quantum physics is wrong, no matter how many previous experiments have confirmed it.

The problem with this cartoon version is that results of experiments are never 100% clear-cut.  The probability that someone has made a mistake, or misinterpreted what they saw, might be very low (especially if an experiment has been replicated many times and many people have seen the data), but it’s never zero.  That means that beliefs about scientific hypotheses are always probabilistic beliefs.

When new evidence comes in, that evidence causes each person to update his or her beliefs about the probabilities of various hypotheses about the world.  Evidence that supports a hypothesis causes us to increase our mental probability that that hypothesis is true, and vice versa. All of this sounds obvious, but it’s quite different from the black-and-white cartoon version that people are often taught.  (I’m looking at you, Karl Popper.  Falsifiability?  Feh.)

All of this brings us at last to Bayesian inference.  This is the systematic way of understanding the way that probabilities get updated in the light of new evidence.

Bayes’s Theorem relates a few different probabilities to each other:

  • The posterior probability P(H | E), which is the probability that a hypothesis is true, given an experimental result.
  • The prior probability P(H), which is the probability that the hypothesis is true, without considering that experimental result.
  • The likelihood P(E | H), which is the probability that the experimental result would have occurred, assuming that the hypothesis is true.
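
Putting these together (and writing P(E) for the overall probability of getting the experimental result, whether or not the hypothesis is true), the theorem itself reads:

    P(H | E) = P(E | H) P(H) / P(E),  where  P(E) = P(E | H) P(H) + P(E | not-H) P(not-H).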

The prior and posterior probabilities are your subjective assessments of how probable the hypothesis is, before and after you saw the experimental result.  Bayes’s Theorem tells you quantitatively how different the posterior probability is from the prior probability — that is, how much you’ve changed your mind as a result of the new experiment.  Specifically, the theorem says that an experiment that supports a hypothesis (one in which P(E | H) is large compared with the probability of getting the same result if the hypothesis is false) causes you to raise your estimate of the probability that the hypothesis is correct, while an experiment that contradicts the hypothesis (one in which P(E | H) is comparatively small) causes you to reduce it.

Here’s the thing:  in situations where the prior probability is already very large or very small, the posterior probability tends not to change very much unless the evidence is similarly extreme.  For example, say the hypothesis is that quantum physics is not really really wrong.  Before I’d ever heard of Randell Mills, the prior probability of that hypothesis was extremely high (because of the accumulation of nearly a century of experimental evidence).  Let’s put it at 0.999999 (although that’s actually way too low).  Now let’s suppose that Mills has done his experiment really well, so that there’s only a one in ten thousand chance that he’s made a mistake or misinterpreted his results.  That means that the likelihood P(E | H) (the probability that Mills would have gotten his results if quantum physics is OK) is only 0.0001.  If you put those numbers into Bayes’s theorem and turn the crank, you find that the posterior probability is still about 99%.  That is, even if I have very high confidence in the quality of the experiment, the overwhelming prior evidence means that I’ll still think the claim is probably wrong.
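
To check the arithmetic, here’s a minimal sketch of the calculation in Python.  The one number the paragraph above leaves implicit is P(E | not-H), the probability of Mills’s results if hydrinos really exist; I’ve assumed it’s essentially 1.

    # Bayes's theorem with the numbers from the paragraph above.
    # H = "quantum physics is not really really wrong"; E = Mills's result.
    p_h = 0.999999          # prior probability of H
    p_e_given_h = 0.0001    # chance Mills got this result by mistake, given H
    p_e_given_not_h = 1.0   # assumption: if hydrinos are real, Mills sees E

    # Total probability of the result, then the posterior.
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    p_h_given_e = p_e_given_h * p_h / p_e
    print(f"P(H | E) = {p_h_given_e:.4f}")   # 0.9901: still about 99%

Dropping p_e_given_h to 1e-8 (a one-in-a-hundred-million chance of a mistake) finally pushes the posterior down to about 1%, which gives a feel for just how extraordinary the evidence would have to be.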

A good analogy here is the probability of getting a false positive result in a test for a rare disease: even if the test has a very low rate of false positives, a positive result for a sufficiently rare disease has a high probability of being false.
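
To put illustrative numbers on the analogy (these figures are made up for the example, not taken from any real test): suppose 1 person in 10,000 has the disease, the test catches every real case, and 1% of healthy people nevertheless test positive.

    # H = "patient has the disease"; E = a positive test result.
    prevalence = 0.0001      # 1 person in 10,000 (made-up figure)
    sensitivity = 1.0        # assume the test catches every real case
    false_positive = 0.01    # 1% of healthy people test positive anyway

    p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
    p_disease = sensitivity * prevalence / p_positive
    print(f"P(disease | positive) = {p_disease:.3f}")   # 0.010: about 1%

Even with a false-positive rate of only 1%, about ninety-nine of every hundred positive results are false, simply because the disease is so rare.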

All of this is just a long-winded way of talking about the old scientific maxim that extraordinary claims require extraordinary evidence. The nice thing about it is that it’s also quantitative: it tells you how extraordinary the evidence must be.


4 thoughts on “Thomas Bayes says that I shouldn’t believe in hydrinos”


  1. Just posted this same comment on the last BLP post — didn't realize a follow-up post had been written more recently; re-posting it here.

    BLP has been publishing papers on Balmer line broadening, which has been observed in various hydrogen plasmas for a while now. Various other labs have also noted the effect and attempted to explain the broadening in conventional terms.

    This is an interesting paper I just came across on arXiv that summarizes the various models proposed to date to explain the effect, and how the experimental observations stack up against each model:

    http://arxiv.org/abs/0810.5280

  2. Now, I'm not saying I think Mills is right. I don't have nearly enough knowledge of physics (quantum or otherwise) to make any inferences based on the substance of the science he has done.

    But I do know a little bit about Bayes' Theorem, and what you choose to count as evidence is non-trivial when you're applying it to an entire scientific paradigm.

    So, it is worth adding a few more pieces to the probability calculation for P(Mills_is_roughly_correct & QP_is_wrong):

    1) There is no evidence that he is a fraud
    2) He's at least reasonably clever
    3) He has staked his career on results to emerge over the next 12 months

    We don't usually get 1, 2, _and_ 3 from the same individual. There are clever frauds and stupid scientists, but I've never before seen instances of clever, non-fraudulent scientists making extremely bold claims on which they've staked their careers and which will be verified some time in the very near future.

    I'd guess it's still more likely that QP is right and Mills is wrong … but metaphysical priors, such as the one you've articulated above, have more to do with traditions of scientific enquiry than they have to do with direct, observable evidence in the Bayesian sense.

    After all, what do you believe people 100 years ago would have estimated as the prior probability of P({X is a wave} AND {X is a particle})?

  3. All of these facts are consistent with the hypothesis that Mills is sincere but mistaken, which seems to me by far the most likely possibility.

    And by the way, I strongly disagree with your statement that “metaphysical priors, such as the one you've articulated above, have more to do with traditions of scientific enquiry than they have to do with direct, observable evidence in the Bayesian sense.” Bayesian inference is not just for “direct, observable evidence.” It applies to thinking about “traditions of scientific enquiry” as well. It’s always there whenever you assess your degree of belief in anything you’re not certain about.
