Confusing graphics

 There’s a very confusing graphic in today’s New York Times:

[Image: 29blowlarge.jpg]

Take the last column for instance.  It seems to say that the approval rating among blacks is 57%, and the difference between democrats and blacks is 77%.  By using the sophisticated mathematical identity

(democrats – blacks) + blacks = democrats,

I find that 134% of democrats approve of stem cell research.

Is it just an editing error?  Should the label “democrats minus blacks” just read “democrats”?  That’s the only way I can make sense of it.

As long as I’m here, I’ll whine a bit more about the Times’s choice of graphics.  The Times magazine usually runs a small chart to illustrate a point related to the first short article in the magazine.  It’s clear that the editor has decided that making the graph interesting-looking is more important than making it convey information.  Here’s a particularly annoying example:

[Image: timesmagazine1.jpg]

If you wanted to design a way to hide the information in a graphic, you couldn’t do much better than this.  The whole point of the pie slices is to allow a comparison of the areas, and they’re drawn in a perspective that almost perfectly hides the areas from view.  Where’s Edward Tufte when you need him?

Thomas Bayes says that I shouldn’t believe in hydrinos

My recent post about Blacklight Power’s claim that there’s a lower-energy state of hydrogen (which they call a hydrino) seems to have gotten a bit of interest — at least enough to show that this blog isn’t completely a form of write-only memory. I want to follow up on it by explaining in a bit more detail why I’m so skeptical of this claim.  In the process, I’ll say a bit about Bayesian inference.  The idea of Bayesian inference is, in my opinion, at the heart of what science is all about, and it’s a shame that it’s not talked about more.

But before we get there, let me summarize my view on hydrinos.  If Randell Mills (the head of Blacklight Power, and the main man behind hydrinos) is right, then the best-tested theory in the history of science is not merely wrong, but really really wrong.  This is not a logical impossibility, but it’s wildly unlikely.

Let me be more precise about the two italicized phrases in the previous paragraph.

1. “The best-tested theory in the history of science.”  The theory I’m talking about is quantum physics, more specifically the quantum mechanics of charged  particles interacting electromagnetically, and even more specifically the theory of quantum electrodynamics (QED).  Part of the reason for calling it the best-tested theory in science is some amazingly high-precision measurements, where predictions of the theory are matched to something like 12 decimal places.  But more important than that are the tens of thousands of experiments done in labs around the world all the time, which aren’t designed as tests of QED, but which depend fundamentally on QED.

Pretty much all of atomic physics and most of solid-state physics (which between them account for a substantial majority of physics research) depend on QED.  Also, every time an experiment is done at a particle accelerator, even if its purpose is to test other physics, it ends up testing QED.  This has been true for decades.  If there were a fundamental flaw in QED, it is inconceivable that none of these experiments would have turned it up by now.

2. “Really really wrong.”  There’s a hierarchy of levels of “wrongness” of scientific theories.  For instance, ever since Einstein came up with general relativity, we’ve known that Newton’s theory of gravity was wrong: in some regimes, the two theories disagree, and general relativity is the one that agrees with experiment in those cases.  But Newtonian gravity is merely wrong, not really really wrong.

It’s intuitively clear that there’s something deeply “right” about Newtonian gravity — if there weren’t, how could NASA use it so successfully (most of the time) to guide space probes to other planets?  The reason for this is that Newtonian gravity was subsumed, not replaced, by general relativity.  To be specific, general relativity contains Newtonian gravity as a special limiting case.  Within certain bounds, Newton’s laws are a provably good approximation to the more exact theory.  More importantly, the bedrock principles of Newtonian gravity, such as conservation of energy, momentum,  and mass, have a natural place in the broader theory.  Some of these principles (e.g., energy and momentum conservation) carry over into exact laws in the new theory.  Others (mass conservation) are only approximately valid within certain broad limits.
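
(To make “limiting case” concrete: in the limit of weak gravitational fields and speeds small compared to the speed of light, Einstein’s field equations reduce to the Newtonian Poisson equation, ∇²Φ = 4πGρ, and orbits computed from the two theories agree to extremely high accuracy for anything like a planet or a space probe.)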

So Newtonian gravity is an example of a theory that’s wrong but not really really wrong.  I would describe a theory as really really wrong if its central ideas were found to be not even approximately valid, in a regime in which the theory is well-tested.  The existence of hydrinos would mean that QED was really really wrong in this sense.  To be specific, it would mean that bedrock principles of quantum physics, such as the Heisenberg uncertainty principle, were not even approximately valid in the arena of interactions of electrons and protons at length scales and energy scales of the order of atoms.  That’s an arena in which the theory has been unbelievably well tested.

There has never been an instance in the history of science when a theory that was tested even 1% as well as QED was later proved to be really really wrong.  In fact, my only hesitation in making that statement is that it’s probably a ridiculous understatement.

So what?  In the cartoon version of science, even one experiment that contradicts a theory proves the theory to be wrong.  If Randell Mills has done his experiment right, then quantum physics is wrong, no matter how many previous experiments have confirmed it.

The problem with this cartoon version is that results of experiments are never 100% clear-cut.  The probability that someone has made a mistake, or misinterpreted what they saw, might be very low (especially if an experiment has been replicated many times and many people have seen the data), but it’s never zero.  That means that beliefs about scientific hypotheses are always probabilistic beliefs.

When new evidence comes in, that evidence causes each person to update his or her beliefs about the probabilities of various hypotheses about the world.  Evidence that supports a hypothesis causes us to increase our mental probability that that hypothesis is true, and vice versa. All of this sounds obvious, but it’s quite different from the black-and-white cartoon version that people are often taught.  (I’m looking at you, Karl Popper.  Falsifiability?  Feh.)

All of this brings us at last to Bayesian inference.  This is the systematic way of understanding the way that probabilities get updated in the light of new evidence.

Bayes’s Theorem relates  a few different probabilities to each other:

  • The posterior probability P(H | E),  which is the probability that a hypothesis is true, given an experimental result.
  • The prior probability P(H), which is the probability that the hypothesis is true, without considering that experimental result.
  • The likelihood P(E | H), which is the probability that the experimental result would have occurred, assuming that the hypothesis is true.

The prior and posterior probabilities are your subjective assessments of the likelihood of the hypothesis, before and after you saw the experimental result.  Bayes’s Theorem tells you quantitatively how different the posterior probability is from the prior probability — that is, how much you’ve changed your mind  as a result of the new experiment.  Specifically, the theorem says that an experiment that supports a hypothesis (that is, one in which P(E|H) is large) causes you to raise your estimate of the probability of the hypothesis being correct, while an experiment that contradicts the hypothesis (one with a low value of P(E|H)) causes you to reduce it.
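
In symbols, and with the normalizing factor P(E) (which the list above leaves implicit) written out, the theorem reads:

P(H | E) = P(E | H) P(H) / P(E),   where   P(E) = P(E | H) P(H) + P(E | not-H) [1 − P(H)].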

Here’s the thing:  in situations where the prior probability was already very large or very small, the posterior probability tends not to change very much unless the evidence is similarly extreme.  For example, say the hypothesis is that quantum physics is not really really wrong.  Before I’d ever heard of Randell Mills, the prior probability of that hypothesis was extremely high (because of the accumulation of nearly a century of experimental evidence).  Let’s put it at 0.999999 (although that’s actually way too low).  Now let’s suppose that Mills has done his experiment really well, so that there’s only a one in ten thousand chance that he’s made a mistake or misinterpreted his results.  That means the likelihood P(E | H), the probability that he would have gotten those results if quantum physics is OK, is only 0.0001 (while the probability of his getting them if quantum physics really is broken is essentially 1).  If you put those numbers into Bayes’s theorem and turn the crank, you find that the posterior probability is still about 99%.  That is, even if I have very high confidence in the quality of the experiment, the overwhelming prior evidence means that I’ll still think the claim is probably wrong.
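
If you want to check that arithmetic yourself, here’s a minimal sketch of the calculation in Python (the function and the numbers are just my illustration of the example above, including the assumption that Mills would essentially certainly have gotten his results if quantum physics really were broken):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h=1.0):
    """Bayes's theorem: P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) (1 - P(H))]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

# H = "quantum physics is not really really wrong"
prior = 0.999999        # prior probability of H (if anything, way too low)
p_e_given_h = 0.0001    # chance of Mills's results if H is true (i.e., he somehow erred)
p_e_given_not_h = 1.0   # assume his results are a sure thing if quantum physics is broken

print(posterior(prior, p_e_given_h, p_e_given_not_h))  # ~0.99
```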

A good analogy here is the probability of getting a false positive result in a test for a rare disease: even if the test has a very low rate of false positives, if the disease is sufficiently rare your positive result has a high probability of being false.
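
To put hypothetical numbers on it: suppose a disease afflicts 1 person in 10,000, and the test catches every real case but has a 1% false-positive rate.  Out of a million people tested, you’d expect about 100 true positives and about 10,000 false positives, so any particular positive result is roughly 99% likely to be a false alarm.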

All of this is just a long-winded way of talking about the old scientific maxim that extraordinary claims require extraordinary evidence. The nice thing about it is that it’s also quantitative: it tells you how extraordinary the evidence must be.

They laughed at Galileo

Some of the comments on my last post reminded me of a common bit of science mythology, namely the idea that scientific advances come from lone geniuses who overthrow the existing orthodoxy, after years of ridicule and dismissal by the scientific establishment.  There are two things to remember about this idea:

1. To a truly excellent approximation, this never happens.  It’s just not how science generally works. In the modern era, Wegener and continental drift is about the only example I can think of that might count,  but I know essentially nothing about the history of that subject, so I can’t say. It certainly is not true, for instance, that they laughed at Einstein: his ideas were recognized as important and worthy of serious consideration pretty much right away.

2. Even if it does occasionally happen that lone geniuses who are ridiculed by the establishment have great ideas, the converse doesn’t follow: people who are ridiculed by the establishment aren’t necessarily lone geniuses with great ideas.  As Carl Sagan put it a long time ago, they laughed at Galileo, they laughed at the Wright brothers, but they also laughed at Bozo the Clown.

The history of science has very few crazy ideas that turned out to be right, in comparison with the number of crazy ideas that turned out to be crazy.

Hydrinos on the radio

Update (Nov. 21): Mack in the comments has pointed out that Blacklight Power has an mp3 of the radio story I’m talking about on their web site.  Thanks for letting me know.  I just took a second listen.  It’s just as bad as I remembered.

Our local public radio station ran a piece today on the claims by Blacklight Power to have found a way to extract huge amounts of energy from ordinary hydrogen.  I’m sorry to say this, because I’m a big fan of public radio (yes, I’m a member, and you should be too), but this was a really lousy piece of journalism.

It’s one of the few things I’ve heard on the station that’s even worse than the Virginia Stock Report: that’s just ludicrously pointless, whereas this report was actually harmful.

The people at Blacklight Power claim that there is another energy level of hydrogen, far below the usual ones, and they have a way of causing hydrogen atoms to drop into this lower state, releasing large amounts of energy.  There is an essentially complete consensus among physicists that this is impossible.  For one thing, the existence of this energy state would violate the Heisenberg uncertainty principle, one of the best-tested laws in modern physics.
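
For what it’s worth, the standard back-of-the-envelope version of that argument (textbook stuff, nothing specific to Blacklight’s setup) goes like this: the uncertainty principle says that an electron confined to a region of size r has a momentum of at least about ħ/r, so its energy is roughly E(r) ≈ ħ²/(2mr²) − e²/r (in Gaussian units).  That expression has its minimum at r = ħ²/(me²), the Bohr radius, where E ≈ -13.6 eV, which is precisely the ordinary ground state of hydrogen.  A state far below that would require squeezing the electron into a much smaller region without paying the corresponding uncertainty-principle price in kinetic energy.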

The radio report adopts the usual pseudo-balanced tone, noting that “some scientists” are skeptical, but it completely fails to make clear that this is a fringe theory and that the overwhelming scientific consensus is that the Blacklight people are wrong.  As anyone who pays attention to science and the media knows all too well, this sort of pseudo-balance is exactly the sort of thing that gives aid and comfort to creationists.  If you’re a science journalist, and you actually believe in science, don’t do this.

A couple of notes:

1. The Blacklight people say that an independent lab at Rowan University has replicated their results.  But the lab uses samples prepared by Blacklight with unknown properties, and all they see is bursts of energy that they don’t know the cause of.  I’m quite prepared to believe there are bursts of energy, but it’s just some chemical reaction having to do with the way the sample was prepared.

2. I’m definitely not accusing Blacklight of deliberate fraud.  On the contrary, as a New York Times article points out,  the company’s adopting a very poor strategy if that’s their goal.  I think they genuinely believe they’re onto something, and they’re just wrong.

Of course, if they’re right and I’m wrong, then (a) they’ll get rich,  (b) I won’t, (c) they’ll solve the world’s energy problems, and (d) we’ll have a major revolution in physics.  I’ll be thrilled by (c) and (d) and perfectly content with (a), which will be well-deserved.  And I can live with (b), which will happen regardless of whether they’re right or wrong.

But anyway, it’s all irrelevant, because they’re not right.  A century of incredibly rigorously tested science says so.

Planets

You’ve probably seen these pictures of planets orbiting other stars.  They’re obviously amazingly cool, but they raise a number of questions:

1. Why does one of them look like the Eye of Sauron?

2. Are these just cool pictures, or do they represent a significant advance in science?

(Full disclosure: I stole #1 from a student.)

Part of the answer to #1 is that the people who made this picture had to choose a more-or-less arbitrary color scheme to show the image, and they happened to choose a red one.  This image was apparently taken at visible wavelengths of light, but the colors shown are “false colors.”  (Extra-geeky aside: I’m not sure, but I think they might have gone with IDL’s “red temperature” color scheme, which happens to be one I’m partial to.)
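
If you’ve never played with false-color images, here’s a minimal sketch of the idea in Python/matplotlib (the “data” and the colormap are just my stand-ins, nothing to do with the actual observations):

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for a measured brightness map: one number per pixel.
y, x = np.mgrid[-1:1:200j, -1:1:200j]
brightness = np.exp(-(x**2 + y**2) / 0.1)

# "False color" just means mapping those numbers onto an arbitrary palette;
# gist_heat is a black-to-red-to-white map roughly in the spirit of IDL's red temperature.
plt.imshow(brightness, cmap="gist_heat")
plt.colorbar(label="brightness (arbitrary units)")
plt.show()
```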

The black region in the center is where they blocked out the light from the star by putting something in front of the telescope.  If they hadn’t done this, the light from the star would have overwhelmed the faint signal from the planet.  The reddish stuff is mostly radiation from the dust surrounding the star.

What about question #2: how important is this?  This is far from my area of expertise, but I’ll give my impressions anyway.  I think it’s mostly important as a milestone on the way to future discoveries.  Even with these images, the amount that we now know about these particular systems is not all that much greater than what we already knew about the few hundred other star systems where we’ve discovered planets.  In those other systems, even though we don’t see the planets, we can often figure out quite a bit about their locations, masses, orbits, etc., by observing the planets’ effects on their host stars.
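
(The workhorse technique for those systems: the star and planet orbit their common center of mass, so the star wobbles back and forth at a speed of roughly (planet mass / star mass) times the planet’s orbital speed.  That wobble shows up as a tiny periodic Doppler shift in the star’s spectrum, and its period and amplitude give the planet’s orbital period and a lower limit on its mass.)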

But this is still a really big deal, because it’s a step on the way to eventually learning a lot more about these planets.  If you can learn to isolate the light from the planet, as distinct from the starlight, then you can study properties of the planet that can’t be gotten by the earlier kinds of observations.  In particular, if you can take that light and pass it through a spectrometer, you can do chemistry.  You can figure out what the planet is made of, what’s in its atmosphere, etc.

Ultimately, of course, doing chemistry on planetary atmospheres might give us evidence of life out there, which would be just about the most important bit of science I can imagine.  But even if that doesn’t happen, just being able to study the composition of planets at all will be pretty amazing.

I don’t know how big a step it is to go from the sorts of images we’ve seen this week to spectroscopy, but this is definitely a lot closer than we’ve been before.

Accepted

The paper that I coauthored with Brent Follin (UR undergraduate) and Peter Hyland (Wisconsin grad student turned McGill postdoc) has officially been accepted for publication in Monthly Notices of the Royal Astronomical Society.  I thought it would be, but it’s still nice to make it official.  Congratulations to Brent especially, for becoming a published scientist.

Unlike the last one I posted about, this is a “real” refereed paper.  We decided to submit it to Monthly Notices, not Astronomy and Astrophysics as I wrote in my earlier post, for reasons that aren’t at all interesting.  Monthly Notices is a very good journal, and it has a way cooler name than A&A.

Direct to video

I wrote up a little piece for the proceedings of a conference I went to over the summer. To go into self-deprecating mode, this is the sort of thing that a colleague of mine used to call a direct-to-video paper (this was in the pre-DVD era), because it doesn’t go through the same level of scrutiny as a refereed journal article.

The article has to do with how to separate a map of the polarization of the microwave background into two pieces called the E and B components.  Over the coming years, maps of microwave background polarization are likely to become more and more important in putting constraints on our theories of the early Universe.  A polarization map can be thought of as two maps lying on top of each other.  The B map is considerably weaker than the E map but contains much more useful information, so cleanly splitting the map into the two pieces is going to be very important in extracting science from the data.  This article is an overview of some of the issues involved in this separation.  It contains an extension of some work I did a while ago on finding ways to do this separation more accurately and efficiently.
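
For the curious, here’s a minimal sketch in Python of the idealized, full-sky version of this split, using the healpy library on simulated maps.  The maps here are just random noise, purely for illustration; the whole point of the article is that real data, with partial sky coverage and noise, make the separation much harder than this.

```python
import numpy as np
import healpy as hp

nside = 128
npix = hp.nside2npix(nside)
rng = np.random.default_rng(0)

# Toy (T, Q, U) maps: Gaussian noise standing in for a real polarization map.
t, q, u = rng.normal(size=(3, npix))

# Full-sky decomposition of (T, Q, U) into harmonic coefficients for T, E, and B.
alm_t, alm_e, alm_b = hp.map2alm([t, q, u], pol=True)

# Rebuild "E-only" and "B-only" polarization maps by zeroing out the other component.
zero = np.zeros_like(alm_e)
_, q_e, u_e = hp.alm2map([alm_t, alm_e, zero], nside, pol=True)
_, q_b, u_b = hp.alm2map([alm_t, zero, alm_b], nside, pol=True)

# q_e + q_b (and u_e + u_b) reconstruct the band-limited original map; on a cut sky,
# this clean split is exactly what breaks down.
```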