Teach the controversy

Matt Trawick’s comments on my earlier post on Gregg Easterbrook and “teaching the controversy” are exactly right.  I’ll just add that the seeming reasonableness of the “teach the controversy” position is precisely what makes it so pernicious.

Some people address this with humor.  I sometimes find this approach very funny, but of course it’s of no value at all for actually convincing people who aren’t already convinced.

If you want actual evidence, there’s this survey of biology department heads, which found that essentially none thought that there was a controversy.  I found this at the National Center for Science Education, which has a lot of resources promoting the teaching of evolution.

Black holes in Slate

The online magazine Slate published a piece today explaining what happens to you if you fall into a black hole, and they had the good sense to consult me on it. Slate’s “Explainer” articles are also podcasted, so if you’re not into reading, you can listen to it instead.

The article pretty much gets things right. One minor quibble: the sentence

In fact, for all but the largest black holes, dissolution would happen before a person even crossed the event horizon, and it would take place in a matter of billionths of a second.

isn’t quite right: the “billionths of a second” number (which I think the author got from me) applies only to quite small black holes, not to “all but the largest” ones, and I think that even for stellar-mass black holes (which are much smaller than “the largest” ones) you’d make it across the horizon before being ripped apart by tidal forces. But those are pretty minor points; the main ideas are all right.
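
Just to put the quibble on a slightly more quantitative footing, here’s a back-of-the-envelope Newtonian estimate (mine, not Slate’s): the tidal acceleration across a body of size \(\ell\) at distance \(r\) from a mass \(M\) is about \(2GM\ell/r^3\), and the horizon of a nonrotating hole sits at \(r_s = 2GM/c^2\), so at the horizon

\[
\Delta a \sim \frac{2GM\ell}{r_s^3} = \frac{\ell\,c^6}{4G^2M^2}.
\]

The tides at the horizon scale as \(1/M^2\): the bigger the hole, the gentler the horizon crossing.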

The motivation for this article is the possibility that the Large Hadron Collider will produce black holes. Short answer: It probably won’t, and even if it does, they’ll evaporate quickly rather than gobbling up the Earth. You really don’t need to worry about this.
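
If you want numbers, the standard semiclassical formula for the evaporation time is \(t = 5120\pi G^2 M^3/(\hbar c^4)\). (It can’t really be trusted for holes this small, where quantum gravity presumably takes over, but it makes the point.) Here’s a quick sketch, using round numbers for the constants and a hypothetical ~10 TeV black hole:

```python
import math

# Semiclassical Hawking evaporation time: t = 5120*pi*G^2*M^3 / (hbar*c^4).
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
hbar = 1.055e-34   # J s

def evaporation_time(mass_kg):
    """Evaporation time in seconds for a black hole of the given mass."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

# A hypothetical LHC-made black hole of ~10 TeV (about 1.8e-23 kg):
print(evaporation_time(1.8e-23))   # ~5e-85 seconds
# Versus a solar-mass black hole (2e30 kg):
print(evaporation_time(2e30))      # ~7e74 seconds (about 2e67 years)
```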

Slate doesn’t do a lot of science reporting, but when they do it’s often pretty good. A recent article discussed one of the main things the LHC is actually expected to find, namely evidence for the Higgs mechanism. Unlike a lot of writing on the subject, this article actually tried to explain the fact that the Higgs mechanism won’t necessarily manifest itself as just a single new type of particle: it’s likely that something more complicated will be found. If so, that’ll be much more interesting than just finding a single Higgs particle.

Since I’ve been saying nice things about Slate, I want to end with one criticism of their science reporting: they still let Gregg Easterbrook write about science from time to time. Easterbrook’s done some good stuff over the years — in particular he was sharply and rightly critical of the space shuttle and space station long before that became fashionable, and he deserves credit for publicly and forthrightly changing his mind about global warming. (Also, I’ve heard his writing on the NFL is good, but I know nothing about that.) But as far as I’m concerned, anyone who defends the teaching of intelligent design in science classes forfeits all credibility as a science journalist. Yes, I’m intolerant and closed-minded about this. But I’m also right, so it’s OK.

Black holes

The section of my Black Hole FAQ on the observational evidence for black holes is sadly out of date, although the rest of it is still reasonably current. I don’t have any plans to update it, because that sounds altogether too much like work. I’d have to read a lot about things I don’t know much about in order to get up to speed on the subject, and if I’m going to do that, I think it’d be more fun to do it on some new subject rather than revisiting this one.

If I were going to write about this subject, though, I’d certainly want to talk about some recent results published in Nature concerning observations of the black hole at the center of our Galaxy. I think you need to be a subscriber to see the article or Nature’s newsy description of it, but there’s a Science News article that I think is publicly available. (Thanks to my brother Andy for pointing this out to me.)

There’s no way to see past the horizon of a black hole, so the name of the game in this business is to try to see as close as you can to the horizon. If you can resolve details near the horizon, you can look for distorting effects due to gravity, which provide pretty definite evidence that what you’re looking at really is a black hole. If all you can see is stuff that’s 1000 times bigger than the horizon, then it’s hard to tell the difference between a black hole and any other object of the same mass. The authors of this paper have managed to resolve structures that are just about the same size as the horizon.
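
To get a sense of how impressive that is, here’s a rough estimate (round numbers, mine): the Galactic-center black hole is about four million solar masses at a distance of about 8 kpc, which puts the angular size of the horizon at a few tens of microarcseconds.

```python
import math

G, c = 6.674e-11, 2.998e8
M_sun = 1.989e30   # kg
pc = 3.086e16      # meters

# Sgr A*: roughly 4 million solar masses at roughly 8 kpc.
M = 4e6 * M_sun
D = 8e3 * pc

r_s = 2 * G * M / c**2                     # Schwarzschild radius, ~1.2e10 m
theta = 2 * r_s / D                        # angular diameter in radians
print(theta * 180 / math.pi * 3600 * 1e6)  # ~20 microarcseconds
```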

By the way, one of the authors was a friend of mine in graduate school. Among his lesser-known accomplishments is writing a brochure describing the Berkeley astronomy department to incoming graduate students, in the form of Allen Ginsberg’s “Howl”. I don’t know if copies of it survive, unfortunately.

Biggest error ever

This New York Times column contains what is no doubt the biggest (in magnitude, if not in importance) numerical error ever to appear in print, and it’s in a quote by a physicist:

For instance, if all the molecules of air in the room where you're sitting would suddenly cross to one side, you would not have any air to breathe. This probability is not zero. It is in the 10 to the minus-25 range.

10^-25? It’s more like 10^-1,000,000,000,000,000,000,000,000,000 (unless you’re in a room that contains only about 80 air molecules, in which case you’re in trouble anyway). I wonder if a number in a reputable publication has ever been wrong by this large a factor before.
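
The arithmetic, for anyone who wants it: each of the N air molecules is independently on one side of the room or the other with probability 1/2, so

\[
P = 2^{-N} = 10^{-N\log_{10}2} \approx 10^{-0.3\,N},
\]

and a typical room holds N ~ 10^27 molecules, so the exponent is of order minus 10^26 or so. To get a probability as large as 10^-25, you’d need 0.3N ≈ 25, or N ≈ 80 molecules.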

Phase shifts

Peter Hyland, Brent Follin, and I just submitted a paper for publication in the journal Astronomy and Astrophysics. You can see it here.

Peter is a postdoc at McGill now, but he was a graduate student at the University of Wisconsin when we did the work. Brent is a rising senior here at U.R.

In this paper, we’ve solved a problem that’s an important part of the construction of a kind of telescope known as an adding interferometer. In an adding interferometer, a bunch of different signals from different antennas are mixed together, resulting in an output signal that is the sum of all of the inputs. We want to be able to extract information about the individual signals (specifically, pairwise correlations between the inputs, if you must know), not just the overall sum. To get this information out, we need to modulate all of the inputs in different ways. Finding the optimal way to do this — that is, the way that results in the smallest errors in the result — turns out to be a tricky problem. We’ve found a general method for finding the solution.
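
If you want a toy version of the idea (an illustration I cooked up for this post, not the scheme from the paper): modulate each input with a Walsh function, i.e., a row of a Sylvester-ordered Hadamard matrix. The elementwise product of rows i and j of such a matrix is row i XOR j, so if you pick the row indices so that all pairwise XORs are distinct, each pairwise correlation shows up in the summed-and-squared output tagged with its own Walsh function, and you can demodulate them all separately.

```python
import numpy as np
from scipy.linalg import hadamard

# Toy illustration (mine, not the paper's scheme): recover pairwise
# products of four constant input signals from the *square* of their
# modulated sum, using Walsh-function phase switching.
H = hadamard(16)                     # Sylvester order: row i * row j = row (i XOR j)
idx = [1, 2, 4, 8]                   # chosen so all pairwise XORs are distinct
s = np.array([0.7, -1.3, 0.4, 2.1])  # toy real-valued input amplitudes

summed = (H[idx, :] * s[:, None]).sum(axis=0)  # modulated sum vs. "time"
power = summed ** 2                            # what an adding interferometer records

for a in range(4):
    for b in range(a + 1, 4):
        w_ab = H[idx[a] ^ idx[b], :]           # Walsh tag for this pair
        est = (w_ab * power).mean() / 2.0      # /2: the (a,b) and (b,a) terms
        print(a, b, est, s[a] * s[b])          # est matches s[a]*s[b]
```

The hard part, and the actual subject of the paper, is doing this optimally: choosing the modulations so that the errors in the recovered correlations are as small as possible.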

The reason we wanted to solve this problem is that we’re part of a group that’s trying to build an adding interferometer for observing the polarization of the cosmic microwave background radiation. We tested a prototype out at Wisconsin recently. Eventually, a much larger version could map the polarization in great detail, giving us new windows onto the very early Universe.

Down with the rubber sheet

People like to visualize the expanding Universe as a sort of a stretching rubber sheet. Textbooks and popular cosmology books play up this analogy in a big way. Like most analogies, it’s useful in some ways, but taken too far it can lead to misconceptions. David Hogg and I have written an article in which we try to fight back against some of these mistakes.

The article is about how we should interpret the redshifts of distant objects. Most of the time, redshifts are Doppler shifts, indicating that something is moving away from you. In the cosmological context, though, a lot of people think that you’re not allowed to interpret the redshift in this way. The idea is that galaxies are “really” at rest with respect to the stretching rubber sheet. Since they’re not “really” moving, what we see is something different from a Doppler shift. The point of our article is to rehabilitate the Doppler shift interpretation.
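
Here’s a sketch of one standard way of seeing why the Doppler interpretation is natural (just a sketch; the paper has the careful version): relay the photon through a chain of comoving observers, each of whom sees the next one receding at the small speed dv = H dℓ. Each handoff contributes a small Doppler shift, and since the photon covers dℓ = c dt between neighbors,

\[
1+z = \prod_i\left(1+\frac{H\,d\ell_i}{c}\right) = \exp\!\left(\int H\,dt\right) = \exp\!\left(\int \frac{da}{a}\right) = \frac{a(t_{\rm obs})}{a(t_{\rm emit})},
\]

which is exactly the usual cosmological redshift, assembled entirely out of ordinary Doppler shifts.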

The real reason I care about this is not that I think it matters much what we call the redshift; it’s that this is a good example of the muddled thinking that the rubber sheet analogy causes. In particular, the analogy provides precisely the wrong intuitions about the nature of space and time in the theory of relativity. If you want to know more specifically what we mean by this, you’ll have to read the article!

To understand the guts of the article, you really need to have studied relativity a fair bit. (Students who took my Physics 479 course should be able to handle it.) Even if you don’t know enough relativity to understand all the technical details, the beginning and end might be interesting and accessible. (Certainly, this paper should be more accessible to non-specialists than the last one I wrote about, which is pretty technical.)

Puzzles in the microwave background

The maps of the microwave background radiation made by the WMAP satellite have been incredibly important in our understanding of the Universe.  In most ways, the maps are amazingly consistent with the “standard model” of cosmology.  In this model the Universe is made of mostly dark energy and dark matter, and the structure we see around us grew out of tiny density variations imprinted during a period of inflation.

But there are a few puzzles in the WMAP observations, mainly having to do with large-scale patterns in the maps.  One of the puzzles is that large-angle correlations in the map are significantly weaker than expected.  U.R. rising junior Austin Bourdon and I have written a paper analyzing some possible explanations for this puzzle.  Our paper shows that a broad class of possible explanations can actually be ruled out, because they make the problem worse rather than better.  The class of explanations we rule out includes some “exotic” models that have been proposed in the literature recently, but it also includes some much more mundane possibilities, such as various non-cosmological contaminants in the data.
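
For anyone who wants the quantitative version: the standard measure of large-angle correlation, introduced by the WMAP team, is the statistic

\[
S_{1/2} = \int_{-1}^{1/2}\left[C(\theta)\right]^2 d(\cos\theta),
\]

where C(θ) is the correlation between temperatures at points separated by an angle θ on the sky. The integral picks out angles greater than 60°, and the measured value is surprisingly close to zero.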

In addition to posting it on the web, we’ve submitted the paper for publication in the journal Physical Review D.  For any non-scientists who’ve read this far, the next step is that the paper will be sent out for review by experts, who will recommend for or against publication.  In the meantime, most people who care about this subject will see it on the web archive.

All of physics is wrong!

I’m a couple of weeks behind on my podcasts, so I just got around to listening to the episode of This American Life about a guy who’s convinced that Einstein had it all wrong and that his new theory will revolutionize physics.  The segment contains a brief interview with my friend John Baez, who is among many other things the author of the Crackpot Index, a (joking) way of assessing the purveyors of radical alternative theories like this.

By the way, if you’re interested in physics and math at all, you really need to poke around John’s web page.  He loves explaining math and physics to people, and he’s really good at it.  I first got to know him (electronically) when we were both involved in various physics newsgroups back in the ’90s.  For quite a while, we were both moderators of the group sci.physics.research.  (He probably gets tired of people mentioning this, but he’s also Joan Baez’s cousin.)

The phenomenon of people thinking they have a revolutionary new theory that will overturn all of 20th-century physics is pretty common.  All physicists who work in any area related to relativity know this: we all get self-published articles and books propounding these theories in the mail on a regular basis.  I actually just got one in my mailbox today.

I thought that the piece on This American Life was pretty good.  I’d be interested to know what non-physicists ended up thinking of the various people involved.  I suspect that the physicists interviewed came off as arrogant and intolerant of new ideas.  The problem is that these theories really are invariably complete nonsense, and it’s hard to respond to nonsense in a remotely honest way without sounding like a bit of a jerk.  (Of course, some physicists really are arrogant and intolerant as well, although not John Baez and not, as far as I know, the other physicist interviewed in the piece.)

It’s interesting to note that these theories almost always involve overthrowing relativity, as opposed to other parts of physics.  I think that a big part of the reason for this is the cult of Einstein.  He’s a uniquely mythic figure in physics, and so I guess maybe it’s natural that people want to take him down.

Does data mining supersede science?

A friend of mine asked me what I thought of this article in Wired, which argues that the existence of extremely large data sets is fundamentally changing our approach to data. The thesis is roughly that we no longer need to worry about constructing models or distinguishing between correlation and causation: with the ability to gather and mine ultralarge data sets, we can just measure correlations and be done with it.

This article is certainly entertaining, but fundamentally it’s deeply silly. There’s a nugget of truth in it, which is that large data sets allow people to do data mining – that is, searching for patterns and correlations in data without really knowing what they’re looking for. That’s an approach to data that used to be very rare and is now very common, and it certainly is changing some areas of science. But it’s certainly not the case that this somehow replaces the idea of a model or blurs the distinction between correlation and causation. On the contrary, the reason that this sort of thing has been important to science is that it’s a great tool for constructing and refining models, which can then be tested in the usual way.

Two of the specific cases the article cites are actually examples of precisely this process. As I understand it, Google’s search algorithm started in more or less the way it’s described in the article, but over time they refined it by means of a process of old-fashioned model-building. Google doesn’t rank pages by simply counting the number of incoming links; that’s one ingredient in a complicated algorithm (most of which is secret, of course) that is continually refined based on tests performed on the data. Craig Venter’s an even better example: sure, his approach is based on gathering and mining large data sets, but the reason scientists care about those data sets is precisely so that they can use them to construct old-fashioned models and testable hypotheses.
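
(For the curious: even the published starting point, from Brin and Page’s original paper, was already a model rather than mere link counting. A page’s rank is the stationary distribution of a “random surfer,” computable by power iteration. Here’s a minimal sketch of that original algorithm; whatever Google runs today is, of course, far more elaborate and secret.)

```python
import numpy as np

# Minimal PageRank sketch (the original published algorithm, not
# Google's current one): rank = stationary distribution of a random
# surfer who follows links with probability d and teleports otherwise.
def pagerank(links, d=0.85, iters=100):
    """links[i] = list of pages that page i links to."""
    n = len(links)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = np.full(n, (1.0 - d) / n)
        for i, outs in enumerate(links):
            if outs:                      # distribute rank along outlinks
                new[np.array(outs)] += d * rank[i] / len(outs)
            else:                         # dangling page: spread everywhere
                new += d * rank[i] / n
        rank = new
    return rank

# Tiny 4-page example web; the ranks sum to 1.
print(pagerank([[1, 2], [2], [0, 3], [0]]))
```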

The case of fundamental particle physics, which is also discussed in the article, is quite different. In fact, it’s totally unclear what it has to do with anything else in the article. Fundamental particle physics has arguably been stuck for a couple of decades now because of a lack of data, not an excess of data. There’s not even the remotest connection between the woes of particle physics and the phenomenon of mining large data sets.

Here’s another way to think about the whole business. The distinction between correlation and causation was always a distinction in principle, not a mere practicality. It’s not affected by the quality and quantity of the data.