This New York Times column contains what is no doubt the biggest (in magnitude, if not in importance) numerical error ever to appear in print, and it’s in a quote by a physicist:
For instance, if all the molecules of air in the room where you're sitting would suddenly cross to one side, you would not have any air to breathe. This probability is not zero. It is in the 10 to the minus-25 range.
10^-25? It’s more like 10^-1000000000000000000000000000 (unless you’re in a room that contains only about 80 air molecules, in which case you’re in trouble anyway). I wonder if a number in a reputable publication has ever been wrong by this large a factor before.
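If you want to check the arithmetic yourself, treat each molecule as an independent coin flip (left half or right half of the room). Here’s a quick sketch; the figure of 10^27 molecules in a room is a rough order-of-magnitude assumption on my part:

```python
import math

def log10_probability(n_molecules):
    """log10 of the chance that all n molecules end up in one half
    of the room, treating each molecule as an independent coin flip:
    P = 2^(-n), so log10(P) = -n * log10(2)."""
    return -n_molecules * math.log10(2)

# A typical room holds very roughly 10^27 air molecules (assumption):
print(log10_probability(1e27))   # about -3e26: the exponent itself is ~10^26

# With only ~80 molecules, you do land near the quoted 10^-25:
print(log10_probability(80))     # about -24.1
```

So the quoted probability is right only in the 80-molecule room.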
Peter Hyland, Brent Follin, and I just submitted a paper for publication in the journal Astronomy and Astrophysics. You can see it here.
Peter is a postdoc at McGill now, but he was a graduate student at the University of Wisconsin when we did the work. Brent is a rising senior here at U.R.
In this paper, we’ve solved a problem that’s an important part of the construction of a kind of telescope known as an adding interferometer. In an adding interferometer, a bunch of different signals from different antennas are mixed together, resulting in an output signal that is the sum of all of the inputs. We want to be able to extract information about the individual signals (specifically, pairwise correlations between the inputs, if you must know), not just the overall sum. To get this information out, we need to modulate each of the inputs in a different way. Finding the optimal way to do this, that is, the way that yields the smallest errors in the recovered correlations, turns out to be a tricky problem. We’ve found a general method for finding the solution.
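To give the flavor of what modulation buys you, here’s a toy illustration (my own sketch, not the scheme from the paper): phase-switch each input with an orthogonal ±1 pattern, detect only the power of the summed signal, and then demodulate to pull one pairwise correlation back out of the sum.

```python
import numpy as np

# Toy illustration (not the paper's actual scheme): phase-switch each
# input with an orthogonal +/-1 (Walsh/Hadamard) pattern, detect the
# power of the summed signal, then demodulate one pairwise cross term.

# Build an 8x8 Sylvester Hadamard matrix; its rows are orthogonal.
H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])

n_inputs = 4
switches = H[1:1 + n_inputs]          # one switching pattern per input
                                      # (skip the all-ones row)

rng = np.random.default_rng(0)
signals = rng.normal(size=n_inputs)   # constant toy input voltages

# The detector only sees the power of the phase-switched sum:
power = (switches.T @ signals) ** 2   # one number per switch state

# Demodulating with the product pattern w_i * w_j isolates the
# cross term 2 * s_i * s_j; everything else averages to zero.
i, j = 0, 2
recovered = np.mean(switches[i] * switches[j] * power) / 2
print(recovered, signals[i] * signals[j])   # these agree
```

The hard part the paper addresses is choosing the modulation so that, with realistic noise and hardware constraints, the recovered correlations have the smallest possible errors; this toy version only shows why orthogonal switching makes the extraction possible at all.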
The reason we wanted to solve this problem is that we’re part of a group that’s trying to build an adding interferometer for observing the polarization of the cosmic microwave background radiation. We tested a prototype out at Wisconsin recently. Eventually, a much larger version could map the polarization in great detail, giving us new windows onto the very early Universe.
People like to visualize the expanding Universe as a sort of a stretching rubber sheet. Textbooks and popular cosmology books play up this analogy in a big way. Like most analogies, it’s useful in some ways, but taken too far it can lead to misconceptions. David Hogg and I have written an article in which we try to fight back against some of these mistakes.
The article is about how we should interpret the redshifts of distant objects. Most of the time, redshifts are Doppler shifts, indicating that something is moving away from you. In the cosmological context, though, a lot of people think that you’re not allowed to interpret the redshift in this way. The idea is that galaxies are “really” at rest with respect to the stretching rubber sheet. Since they’re not “really” moving, what we see is something different from a Doppler shift. The point of our article is to rehabilitate the Doppler shift interpretation.
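For concreteness, the Doppler shift we have in mind is the standard special-relativistic one (this is textbook material, not anything new in our article):

```python
import math

# Special-relativistic Doppler shift for purely radial motion:
#   1 + z = sqrt((1 + beta) / (1 - beta)),   where beta = v/c.

def redshift(beta):
    """Redshift z of a source receding at speed beta = v/c."""
    return math.sqrt((1 + beta) / (1 - beta)) - 1

def velocity(z):
    """Invert the formula: the beta implied by an observed redshift z."""
    r = (1 + z) ** 2
    return (r - 1) / (r + 1)

print(redshift(0.5))   # about 0.732
print(velocity(1.0))   # beta = 0.6: a z = 1 galaxy "recedes" at 0.6c
```

The question the article takes up is whether it’s legitimate to read cosmological redshifts through this formula, and we argue that it is.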
The real reason I care about this is not that I think it matters much what we call the redshift, but because I think that this is a good example of the muddled thinking that the rubber sheet analogy causes. In particular, the analogy provides precisely the wrong intuitions about the nature of space and time in the theory of relativity. If you want to know more specifically what we mean by this, you’ll have to read the article!
To understand the guts of the article, you really need to have studied relativity a fair bit. (Students who took my Physics 479 course should be able to handle it.) Even if you don’t know enough relativity to understand all the technical details, the beginning and end might be interesting and accessible. (Certainly, this paper should be more accessible to non-specialists than the last one I wrote about, which is pretty technical.)
The maps of the microwave background radiation made by the WMAP satellite have been incredibly important in our understanding of the Universe. In most ways, the maps are amazingly consistent with the “standard model” of cosmology. In this model, the Universe consists mostly of dark energy and dark matter, and the structure we see around us grew out of tiny density variations imprinted during a period of inflation.
But there are a few puzzles in the WMAP observations, mainly having to do with large-scale patterns in the maps. One of the puzzles is that large-angle correlations in the map are significantly weaker than expected. U.R. rising junior Austin Bourdon and I have written a paper analyzing some possible explanations for this puzzle. Our paper shows that a broad class of possible explanations can actually be ruled out, because they make the problem worse rather than better. The class of explanations we rule out includes some “exotic” models that have been proposed in the literature recently, but it also includes some much more mundane possibilities, such as various non-cosmological contaminants in the data.
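The large-angle deficit is usually phrased in terms of the angular two-point correlation function, which follows from the angular power spectrum C_l by a standard Legendre sum: C(theta) = sum over l of (2l+1)/(4 pi) C_l P_l(cos theta). A minimal sketch of that relation (illustrative only, not code from our paper):

```python
import numpy as np
from numpy.polynomial import legendre

def correlation(theta, cls):
    """Angular two-point correlation function C(theta) from an angular
    power spectrum, via C(theta) = sum_l (2l+1)/(4 pi) C_l P_l(cos theta).
    cls[l] = C_l, starting at l = 0."""
    coeffs = (2 * np.arange(len(cls)) + 1) / (4 * np.pi) * cls
    return legendre.legval(np.cos(theta), coeffs)

# Example: a spectrum with only quadrupole power (C_2 = 1) gives
# C(theta) = 5/(4 pi) * P_2(cos theta):
print(correlation(0.0, np.array([0.0, 0.0, 1.0])))   # 5/(4 pi), about 0.398
```

It’s C(theta) at angles above about 60 degrees that comes out surprisingly close to zero in the WMAP maps; the paper asks what kinds of modifications to the model could (or, it turns out, couldn’t) produce that.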
In addition to posting it on the web, we’ve submitted the paper for publication in the journal Physical Review D. For any non-scientists who’ve read this far, the next step is that the paper will be sent out for review by experts, who will recommend for or against publication. In the meantime, most people who care about this subject will see it on the web archive.