My recent post about Blacklight Power’s claim that there’s a lower-energy state of hydrogen (which they call a hydrino) seems to have gotten a bit of interest — at least enough to show that this blog isn’t completely a form of write-only memory. I want to follow up on it by explaining in a bit more detail why I’m so skeptical of this claim. In the process, I’ll say a bit about Bayesian inference. The idea of Bayesian inference is, in my opinion, at the heart of what science is all about, and it’s a shame that it’s not talked about more.
But before we get there, let me summarize my view on hydrinos. If Randell Mills (the head of Blacklight Power, and the main man behind hydrinos) is right, then *the best-tested theory in the history of science* is not merely wrong, but *really really wrong*. This is not a logical impossibility, but it’s wildly unlikely.
Let me be more precise about the two italicized phrases in the previous paragraph.
1. “The best-tested theory in the history of science.” The theory I’m talking about is quantum physics, more specifically the quantum mechanics of charged particles interacting electromagnetically, and even more specifically the theory of quantum electrodynamics (QED). Part of the reason for calling it the best-tested theory in science is a set of amazingly high-precision measurements, in which predictions of the theory agree with experiment to something like 12 decimal places. But more important than that are the tens of thousands of experiments done all the time in labs around the world, which aren’t designed as tests of QED but which depend fundamentally on it.
Pretty much all of atomic physics and most of solid state physics (which between them account for a substantial majority of physics research) depend on QED. Also, every time an experiment is done at a particle accelerator, even if its purpose is to test other physics, it ends up testing QED. This has been true for decades. If there were a fundamental flaw in QED, it is inconceivable that none of these experiments would have turned it up by now.
2. “Really really wrong.” There’s a hierarchy of levels of “wrongness” of scientific theories. For instance, ever since Einstein came up with general relativity, we’ve known that Newton’s theory of gravity was wrong: in some regimes, the two theories disagree, and general relativity is the one that agrees with experiment in those cases. But Newtonian gravity is merely wrong, not really really wrong.
It’s intuitively clear that there’s something deeply “right” about Newtonian gravity — if there weren’t, how could NASA use it so successfully (most of the time) to guide space probes to other planets? The reason for this is that Newtonian gravity was subsumed, not replaced, by general relativity. To be specific, general relativity contains Newtonian gravity as a special limiting case. Within certain bounds, Newton’s laws are a provably good approximation to the more exact theory. More importantly, the bedrock principles of Newtonian gravity, such as conservation of energy, momentum, and mass, have a natural place in the broader theory. Some of these principles (e.g., energy and momentum conservation) carry over into exact laws in the new theory. Others (mass conservation) are only approximately valid within certain broad limits.
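For the mathematically inclined, here is the standard textbook version of that limiting case. In the weak-field, slow-motion regime (gravitational potential $\Phi$ with $|\Phi|/c^2 \ll 1$ and speeds $v \ll c$), the metric of general relativity has $g_{00} \approx -(1 + 2\Phi/c^2)$, and the geodesic equation together with Einstein’s field equations reduce to

$$\frac{d^2\mathbf{x}}{dt^2} = -\nabla\Phi, \qquad \nabla^2\Phi = 4\pi G\rho,$$

which is exactly Newtonian gravity.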
So Newtonian gravity is an example of a theory that’s wrong but not really really wrong. I would describe a theory as really really wrong if its central ideas were found to be not even approximately valid, in a regime in which the theory is well-tested. The existence of hydrinos would mean that QED was really really wrong in this sense. To be specific, it would mean that bedrock principles of quantum physics, such as the Heisenberg uncertainty principle, were not even approximately valid in the arena of interactions of electrons and protons at length scales and energy scales of the order of atoms. That’s an arena in which the theory has been unbelievably well tested.
There has never been an instance in the history of science when a theory that was tested even 1% as well as QED was later proved to be really really wrong. In fact, my only hesitation in making that statement is that it’s probably a ridiculous understatement.
So what? In the cartoon version of science, even one experiment that contradicts a theory proves the theory to be wrong. If Randell Mills has done his experiment right, then quantum physics is wrong, no matter how many previous experiments have confirmed it.
The problem with this cartoon version is that results of experiments are never 100% clear-cut. The probability that someone has made a mistake, or misinterpreted what they saw, might be very low (especially if an experiment has been replicated many times and many people have seen the data), but it’s never zero. That means that beliefs about scientific hypotheses are always probabilistic beliefs.
When new evidence comes in, that evidence causes each person to update his or her beliefs about the probabilities of various hypotheses about the world. Evidence that supports a hypothesis causes us to increase our mental probability that that hypothesis is true, and vice versa. All of this sounds obvious, but it’s quite different from the black-and-white cartoon version that people are often taught. (I’m looking at you, Karl Popper. Falsifiability? Feh.)
All of this brings us at last to Bayesian inference, which is the systematic way of understanding how probabilities get updated in the light of new evidence.
Bayes’s Theorem relates a few different probabilities to each other:
- The posterior probability P(H | E), which is the probability that a hypothesis is true, given an experimental result.
- The prior probability P(H), which is the probability that the hypothesis is true, without considering that experimental result.
- The evidence P(E | H) (often called the likelihood), which is the probability that the experimental result would have occurred, assuming that the hypothesis is true.
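For concreteness, here is the theorem itself. The only ingredient not on the list above is P(E | ¬H), the probability that the result would have occurred if the hypothesis were false:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)}.$$

The denominator is just P(E), the overall probability of getting the result one way or the other.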
The prior and posterior probabilities are your subjective assessments of how plausible the hypothesis is, before and after seeing the experimental result. Bayes’s Theorem tells you quantitatively how different the posterior probability is from the prior probability — that is, how much you’ve changed your mind as a result of the new experiment. Specifically, the theorem says that an experiment that supports a hypothesis (one whose result is more likely if the hypothesis is true than if it is false) causes you to raise your estimate of the probability that the hypothesis is correct, while an experiment that contradicts the hypothesis (one whose result is less likely if the hypothesis is true than if it is false) causes you to lower it.
Here’s the thing: in situations where the prior probability was already very large or very small, the posterior probability tends not to change very much unless the evidence is similarly extreme. For example, say the hypothesis is that quantum physics is not really really wrong. Before I’d ever heard of Randell Mills, the prior probability of that hypothesis was extremely high (because of the accumulation of nearly a century of experimental evidence). Let’s put it at 0.999999 (although that’s actually way too low). Now let’s suppose that Mills has done his experiment really well, so that there’s only a one-in-ten-thousand chance that he’s made a mistake or misinterpreted his results. That means that the evidence P(E | H) (the probability that he would have gotten his results if quantum physics were OK) is only 0.0001. If you put those numbers into Bayes’s theorem and turn the crank, you find that the posterior probability is still 99%. That is, even if I have very high confidence in the quality of the experiment, the overwhelming prior evidence means that I’ll still think the hydrino claim is probably wrong.
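If you’d like to turn the crank yourself, here is a minimal sketch in Python. The one assumption not spelled out above is P(E | ¬H) = 1: if quantum physics really were really really wrong in the way Mills claims, his experiment would be essentially guaranteed to show what it shows.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h=1.0):
    """Bayes's theorem: P(H | E) from the prior and the two conditionals."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)  # P(E)
    return p_e_given_h * prior / p_e

# H: quantum physics is not really really wrong.
# Prior: 0.999999.  P(E | H): a 1-in-10,000 chance Mills's result is a mistake.
# Assumed: P(E | not-H) = 1 (if hydrinos are real, his result is guaranteed).
print(posterior(0.999999, 0.0001))  # ~0.990, i.e. still about 99%
```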
A good analogy here is the probability of getting a false positive result in a test for a rare disease: even if the test has a very low rate of false positives, if the disease is sufficiently rare, your positive result has a high probability of being false.
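To see that with made-up but representative numbers (a disease with a prevalence of 1 in 10,000, and a test with a 99% detection rate and a 1% false-positive rate), the same arithmetic gives:

```python
# Illustrative numbers, chosen for the example (not from any real test):
prior = 0.0001               # P(disease): 1 in 10,000 people
p_pos_given_sick = 0.99      # detection rate (sensitivity)
p_pos_given_healthy = 0.01   # false-positive rate
p_pos = p_pos_given_sick * prior + p_pos_given_healthy * (1 - prior)
print(p_pos_given_sick * prior / p_pos)  # ~0.0098: ~99% of positives are false
```

Even with a test that is wrong only 1% of the time, roughly 99% of positive results are false, simply because healthy people outnumber sick ones by 10,000 to 1.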
All of this is just a long-winded way of talking about the old scientific maxim that extraordinary claims require extraordinary evidence. The nice thing about the Bayesian version is that it’s also quantitative: it tells you how extraordinary the evidence must be.
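One way to see the quantitative version is to write Bayes’s theorem in odds form:

$$\frac{P(H \mid E)}{P(\lnot H \mid E)} = \frac{P(E \mid H)}{P(E \mid \lnot H)} \times \frac{P(H)}{P(\lnot H)},$$

that is, posterior odds equal the likelihood ratio times the prior odds. With the numbers above, the prior odds in favor of quantum physics are about a million to one (and as noted above, even that is an underestimate), so an experiment would need a likelihood ratio of about one in a million (the chance of a mistake or misinterpretation would have to be below one in a million) just to pull the posterior down to a coin flip.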