Puzzle 2

Here’s the second puzzle I found in the August AJP, specifically in this article. (I describe the first puzzle in another post.) I’d never seen this one before, and I found the answer to be very counterintuitive.

Suppose you have a point charge located somewhere near a perfect conductor with zero net charge. We’re doing good old electrostatics here — nothing’s moving. Intro physics students learn what generically happens in this situation: the presence of the point charge induces negative charge on the near side of the conductor and positive charge on the far side (assuming the point charge is positive). Because the negative charge is closer, you get a net attraction:

Here’s the puzzle: is there any arrangement (i.e., any choice of shape for the conductor and location for the point charge) that leads to a repulsive force between the two?

Figure (b) above shows one way to define “repulsive” more precisely, although pretty much any way will do. Suppose that there is some plane (dashed line in the figure) with the point charge on one side and the whole conductor on the other side. Is there any situation in which the force on the point charge points away from the plane?
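Just to make the textbook picture concrete, here's a quick numerical sketch of my own (not from the article): the force on a point charge near a neutral, isolated conducting sphere, computed with the standard image-charge construction. The numbers are arbitrary.

    # Force on a point charge q a distance d from the center of a neutral,
    # isolated conducting sphere of radius R, via the standard image charges.
    # For a grounded sphere the image is q' = -q*R/d at distance R^2/d from
    # the center; making the sphere neutral adds +q*R/d at the center.
    k = 8.99e9        # Coulomb constant, N m^2/C^2
    q = 1e-9          # point charge, C (arbitrary)
    R = 0.1           # sphere radius, m (arbitrary)
    d = 0.3           # distance of the charge from the sphere's center, m

    q_img, x_img = -q * R / d, R**2 / d   # grounded-sphere image charge and its position
    q_ctr = +q * R / d                    # compensating charge at the center

    # Positive F means repulsion (away from the sphere), negative means attraction.
    F = k * q * q_img / (d - x_img)**2 + k * q * q_ctr / d**2
    print(F)   # comes out negative: the nearer induced charge wins

For a sphere the answer comes out attractive no matter how you choose R and d; the puzzle is whether some cleverer shape can flip the sign.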

Physics puzzles

The August 2011 issue of the American Journal of Physics (paywalled, I assume) has two articles about nice physics puzzles. The statement of these puzzles should be understandable to people who know university-level introductory physics, but the solutions are hard.

Here’s the first one. I’ll put the second in another post.

Suppose you have a strangely shaped, perfectly reflecting cavity like the one in Figure (a) below. The surface consists of parts of two ellipsoids and one sphere. The ellipsoids have foci A and B, and the sphere’s center is at B. Any light ray leaving point A hits part of an ellipsoid and ends up at B. Some light rays leaving B hit ellipsoids and end up at A, while others hit the sphere and go back to B.

Now put objects (blackbodies) at points A and B. They both radiate. All of the radiation from A heats up B, but only some of the radiation from B heats up A. So if the two bodies start out at the same temperature, there’ll be a net energy flow from A to B. But that violates the second law of thermodynamics. What’s going on?
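If you want to convince yourself of the focal property the geometry relies on, here's a little sketch of my own (two dimensions for simplicity, so an ellipse rather than an ellipsoid): a ray leaving one focus reflects off the curve and heads straight for the other focus. The semi-axes and the reflection point are arbitrary.

    import math

    a, b = 2.0, 1.0                          # semi-axes of the ellipse (arbitrary)
    c = math.sqrt(a*a - b*b)
    F1, F2 = (-c, 0.0), (c, 0.0)             # the two foci

    t = 1.1                                  # arbitrary point P on the ellipse
    P = (a * math.cos(t), b * math.sin(t))

    def unit(v):
        n = math.hypot(v[0], v[1])
        return (v[0] / n, v[1] / n)

    d_in = unit((P[0] - F1[0], P[1] - F1[1]))      # ray direction from F1 to P
    n = unit((P[0] / a**2, P[1] / b**2))           # outward normal (gradient of x^2/a^2 + y^2/b^2)
    dot = d_in[0]*n[0] + d_in[1]*n[1]
    d_out = (d_in[0] - 2*dot*n[0], d_in[1] - 2*dot*n[1])   # mirror reflection at P

    d_F2 = unit((F2[0] - P[0], F2[1] - P[1]))      # direction from P toward the other focus
    print(d_out[0]*d_F2[1] - d_out[1]*d_F2[0])     # cross product ~ 0: reflected ray aims at F2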

In real life, such a system wouldn’t obey ray optics — the radiation would diffract around, eventually filling the volume of the cavity. Also, the walls wouldn’t really be perfectly reflective, so they’d heat up and radiate themselves. But I don’t think those considerations count as resolutions of the paradox: we can certainly imagine a world in which ray optics holds and reflection works perfectly, and the second law should still hold in such a world.

Someone told me this puzzle back when I was in grad school, and it bothered me for a while. Eventually, I think I hit on the same answer as the one in the AJP article.

Pioneer anomaly resolved?

It looks like the Pioneer anomaly may have been explained.

The Pioneer anomaly is the observed fact that Pioneer 10 and 11 are decelerating, as they fly away from the Sun, by an amount that is very, very slightly more than you’d expect from good old F = ma. The size of the anomaly is incredibly tiny — less than 10⁻⁹ m/s². It seems incredible that such a small effect is even measurable, but over long times even very tiny effects add up. According to Wikipedia (I haven’t checked their numbers, but they’re always right about this sort of thing),

If the positions of the spacecraft are predicted one year in advance based on measured velocity and known forces (mostly gravity), they are actually found to be some 400 km closer to the sun at the end of the year.

That 400 km is due to the anomalous acceleration.
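As a rough check (my arithmetic, not Wikipedia's), a constant unmodeled deceleration of about 8.7 × 10⁻¹⁰ m/s², the commonly quoted value, acting for one year gives a displacement of the right size:

    a_anom = 8.7e-10           # m/s^2, commonly quoted size of the anomaly
    t = 365.25 * 24 * 3600     # one year, in seconds
    dx = 0.5 * a_anom * t**2   # extra displacement, x = (1/2) a t^2
    print(dx / 1000)           # about 430 km, the same ballpark as the quoted 400 km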

Some people hypothesized that this effect might be due to some new force, or to a deviation of the gravitational force from standard physics. The latter idea fits in with the idea, held by some physicists, that the phenomena of dark energy and (possibly) dark matter are actually signs that we don’t understand gravity.

But when an effect is so tiny, it’s very hard to be sure that there isn’t some more mundane cause. The recent analysis, which has been accepted for publication in Physical Review Letters, seems to point toward a mundane source.

The most plausible mundane explanation is that energy is being radiated away by the radioisotope thermoelectric generators. That energy wouldn’t exert any net force on the craft if it were radiated equally in all directions, but if some of it bounces off the craft (specifically, off the big antenna), then there is a net force.
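Here's a back-of-the-envelope version, using round numbers of my own rather than anything from the new paper. Radiation of power P directed in one direction pushes with force F = P/c, so you can ask how much heat has to end up preferentially directed to account for the anomaly:

    a_anom = 8.7e-10      # m/s^2, the anomalous deceleration
    m = 250.0             # kg, rough Pioneer spacecraft mass (round number)
    c = 3.0e8             # m/s, speed of light

    F = m * a_anom        # required force: about 2e-7 N
    P = F * c             # directed power needed, since F = P/c for radiation
    print(F, P)           # roughly 65 W of preferentially directed heat

    # The RTGs put out a couple of kilowatts of heat, so only a few percent of it
    # has to end up going the right way. And since the heat source is Pu-238
    # (half-life about 88 years), this explanation predicts a slowly decaying
    # anomaly rather than a constant one.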

As I understand it, the recent analysis shows that the acceleration is decaying away with time in the way you’d expect if this explanation is correct. Previous analyses had suggested the acceleration was constant in time, which was regarded as evidence against the mundane explanation.

I always thought it was pretty unlikely that the Pioneer anomaly was evidence of exotic new physics, so this doesn’t surprise me too much. It’d be nice if it were pointing us toward some revolutionary new theory, but such things don’t come along very often.

James Webb Space Telescope in danger of cancellation

Pretty much every astronomy blogger is writing about the prospect of the James Webb Space Telescope being canceled. If you want to pick just one to read, this one’s pretty good.

In particular, if you haven’t seen the video at the end, watch it.

The problems with JWST are real and significant, but I’m still strongly in favor of increasing the awesome.

Goodbye to the shuttle

In honor of the last Space Shuttle flight, the journal Science has a retrospective by Dan Charles (behind a paywall, I think) on the science achievements of the shuttle. People who’ve paid a lot of attention to science and the space program over the years will already know the main points, but it’s worth a read for those who haven’t. The most useful part is the timeline (also paywalled, I assume).

To tease out the scientific contributions of the shuttle, Science has grouped the program's 134 missions into five categories. (The sixth, and largest, category is those missions with little or no scientific activity.) By frequency, the exploration of microgravity leads the way, with a substantial amount of such research aboard 45 missions. In second place are major observations of Earth or the heavens (12 missions), followed by the launching of large scientific instruments (seven missions), repairs and upgrades to the Hubble telescope (five missions), and research on the effects of the external space environment (three missions).

The shuttle did launch three “great observatories”: the Hubble Space Telescope, Chandra (X-rays), and the Compton GRO (gamma rays). All three have been amazing tools. As Charles puts it,

All the instruments have led to stunning scientific advances — but all could have been launched on crewless rockets.

The shuttle was a crazily overpriced way to do this science.

One thing the shuttle did do that couldn’t have been done with crewless rockets: service, repair, and upgrade Hubble. Economically, this still isn’t worth it: you’d be better off just building and launching a new telescope for the cost of the servicing missions. Charles notes a counterargument:

Critics of crewed space flight point out that NASA could have built and launched an entirely new space telescope for the price of the repair missions. But Grunsfeld says that's unrealistic; it would have taken longer to build and launch a second-generation Hubble, for one thing, and there's no guarantee the project would have been completed.

If I understand it correctly, this is a claim that, due to the irrational way we allocate funds, it would have been politically unfeasible to do the science in a more efficient way. That may be true, for all I know.

Other than the great observatories, the main impression I get from this piece is how little science has come out of the shuttle and the International Space Station. Estimates of the cost of the ISS vary, but it’s at least $100 billion, and the cost of the shuttle program is something like $170 billion. There’s overlap in those two numbers — the first one includes the cost of shuttle flights to service the ISS — so you can’t just add them up. Still, the combined cost of the two programs is at least 60 times what the Human Genome Project cost the US government. For that kind of money, we should be opening up vast new scientific vistas, but they’re nowhere to be found.
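For the record, here's the arithmetic behind that factor of 60, using the commonly cited figure of roughly $2.7 billion for the Human Genome Project's cost to the US government:

    shuttle = 170e9   # rough cost of the shuttle program, dollars
    hgp = 2.7e9       # commonly cited US-government cost of the Human Genome Project
    print(shuttle / hgp)   # about 63, before adding any of the non-shuttle ISS costs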

Of course, all this says is that the shuttle and space station are failures when viewed as science projects. If we acknowledge that science is not and never has been the primary purpose of human space flight, then this isn’t the right yardstick to use in measuring the programs’ success.

If not science, what is the point of the human space flight program, then? The three I’ve heard people talk about are

  1. the intrinsic awesomeness of exploration (“going where no one has gone before”),
  2. inspiring the next generation of scientists and engineers,
  3. somehow doing something nice for international relations.

I don’t buy the first two. What we’ve done for the past 40 years in human space flight is not exploration in any meaningful sense, and I see no evidence that it’s inspiring. I don’t have any knowledge that lets me evaluate the last one, but I’ve never seen any argument that persuades me that our relations with Russia (or anywhere else) are better than they otherwise would be, to the tune of hundreds of billions of dollars, because of the ISS.

The comic Abstruse Goose reacts with sadness to the end of the shuttle program:

But the real sadness occurred at the beginning of the shuttle program, when we decided to put all our effort into shipping stuff back and forth to low-Earth orbit and, in the process, completely forgot how to do real exploration.

Error-correcting in science

Carl Zimmer has a column in Sunday’s NY Times about the ways in which science does and does not succeed in self-correcting when incorrect results get published:

Scientists can certainly point with pride to many self-corrections, but science is not like an iPhone; it does not instantly auto-correct. As a series of controversies over the past few months have demonstrated, science fixes its mistakes more slowly, more fitfully and with more difficulty than Sagan's words would suggest. Science runs forward better than it does backward.

One of the controversies inspiring this column involves the scientists who failed to replicate a controversial clairvoyance study and were unable to publish their results, because the journal in question publishes only novel work, not straight-up replications. I offered my opinion about this a while ago: I think that this is a bad journal policy, and that in general attempts to replicate and check past work should be valued at least a bit more than they currently are.

On the other hand, it’s possible to make way, way too much of this problem. The mere fact that people aren’t constantly doing and publishing precise replications of previous work doesn’t mean that previous work isn’t checked. I don’t know much about other branches of science, but in astrophysics the usual way this happens is that, when someone wants to check a piece of work, they try to do a study that goes beyond the earlier work, trying to replicate it in passing but breaking some new ground as well. (A friend of mine pointed this out to me off-line after my previous post. I knew that this was true, but in hindsight I should have stressed its importance in that post.)

For instance, one group claims to have shown that the fine-structure constant has varied over time; then another group does an analysis, arguably more sensitive than the original, that sees no such effect. If the second group can successfully argue that their method is better, and would have found the originally claimed effect had that effect been real, then they have in effect refuted that claim.

(I should add that in this particular example I don’t have enough expertise to say whether the second group’s analysis really is so much better as to supersede the original; this is just meant as an example of the process.)

The main problem with Zimmer’s article is a silly emphasis on the unimportant procedural matter of whether an incorrect claim in a published paper has been formally retracted in print. This is sort of a refrain of the article:

For now, the original paper has not been retracted; the results still stand.

[T]he journal has a longstanding policy of not publishing replication studies … As a result, the original study stands.

Ms. Mikovits declared that a retraction would be “premature” … Once again, the result still stands.

It’s completely understandable that a journalist would place a high value on formal, published retraction of error, but as far as the actual progress of science is concerned it’s a red herring. Incorrect results get published all the time and are rarely formally retracted. What happens instead is that the scientific community goes on, learns new things, and eventually realizes that the original result was wrong. At that point, it just dies away from lack of attention.

One example that comes to mind is the Valentine’s Day monopole. This was an apparent detection of a magnetic monopole, which would have been a Nobel-Prize-worthy discovery if verified. It was never verified. As far as I know, the original paper wasn’t formally retracted, and I’m not even sure whether it’s known what went wrong with the original experiment, but it doesn’t matter: subsequent work has shown convincingly that the experiment was in error, and nobody believes the result anymore.

(By the way, this is also an illustration of the fact that getting something wrong doesn’t mean you’re a bad scientist. This was an excellent experiment done by an excellent physicist. If you’re going to push to do new and important science, you’re going to run the risk of getting things wrong.)

Of course, there is some value in a formal retraction, especially in cases of actual misconduct. And when an incorrect result is widely publicized in the general press, it would be nice if the fact that it was wrong were also widely publicized. But a retraction in the original scholarly journal would be of precisely no help with the latter problem — the media outlets that widely touted the original result would not do the same for the retraction.

Quantum silliness from the BBC

Since I regularly express my envy at the quality of science journalism in England, I thought I should mention that the BBC has a particularly silly piece about quantum mechanics on its web site.

The experiment described in the article looks perfectly nice, but the results definitely do not “bend” any rules of quantum physics, nor do they “pull back the veil” on quantum reality. The results found by the experiment are precisely what we “knew” they must be all along — that is, they’re exactly what is predicted by standard quantum mechanics. Of course, that doesn’t mean the experiment isn’t worth doing — it’s always nice to test our theories, and learning better techniques for manipulating quantum systems is a great thing to do. But there are precisely no philosophical implications about the nature of reality here. As John Baez said a long time ago,

Newspapers tend to report the results of these fundamentals-of-QM experiments as if they were shocking and inexplicable threats to known physics. It's gotten to the point where whenever I read the introduction to one of these articles, I close my eyes and predict what the reported results will be, and I'm always right. The results are always consistent with simply taking quantum mechanics at face value.

The new result fits perfectly into this category.

In the Dark

I haven’t posted much lately, so I thought I’d at least put in a plug for another blog I like: Peter Coles’s In the Dark. Peter Coles is a very well-respected cosmologist, with a lot of original and thought-provoking things to say on a bunch of subjects, including the Bayesian approach to probabilities (one of my obsessions), media coverage of science, physics puzzles, and a lot more. (I’ve just mentioned the science topics, but he writes about non-science stuff too.)

Science is all about replication of results, right?

Remember that paper published a while ago claiming evidence of precognition? I didn’t say anything about it, because I didn’t have anything much to say. Extraordinary claims require extraordinary evidence, Bayesian priors, and all that. You’ve heard it all before.

Here’s the thing. The right thing to do in this situation, as everyone knows, is for other scientists to try to replicate the result. Well, some did just that, tried to replicate the result, and couldn’t. It’s nice to see the system working, isn’t it? Well, it would be nice, except that the journal that published the original paper rejected their article, because they don’t publish replications of previous work. Ben Goldacre’s got the goods.

This is a structural problem with the way science is funded, disseminated, and rewarded. Even though everyone agrees that replication is essential in situations like these, it’s practically impossible to get a grant to merely replicate previous work, or to publish the results once you’ve done it. I don’t know what to do about that.

By the way, I’ve said this before, but let me say it again. In case you don’t know about Ben Goldacre, the great Guardian science writer and blogger, you should. He’s a national treasure. (Not my nation, unfortunately, but a national treasure nonetheless.)

Safe Zone

A couple of weeks ago, I finally got around to participating in one of the workshops for the University of Richmond’s Safe Zone program. For those who don’t know,

Safe Zone educates members of the University community about lesbian, gay, bisexual, and transgender (LGBT) issues to create a network of allies who, together with members of the LGBT community, work to create a community of safety and full inclusion for all its members.

I am now entitled to (and do!) display a sticker on my office door, letting people know that I’m “supportive and affirming of the LGBT community.” I urge UR faculty, staff, and students to do the same. Hard data are hard to come by, of course, but based on anecdotal information that seems credible to me, UR’s campus culture is not as comfortable a place for LGBT students as it could and should be. This is a little thing that each of us can do to show where we stand.

But I have to say that there’s one thing about this program that doesn’t feel quite right to me: the name “Safe Zone.” To be specific, the word “safe” seems to me to set the bar far too low.

It was clear from the workshop that the goal is for participants to be active and committed advocates, whereas to me the word “safe” suggests merely promising to be harmless.

I don’t think this is a minor semantic quibble. I first encountered the term “Safe Zone” a number of years ago, when I was visiting another university and saw a few stickers on people’s office doors. My immediate conclusion was that that university must be a terrible place for the LGBT community, since it appeared that well over 90% of the offices on campus were “unsafe” for them.

I don’t think that the University of Richmond should be, inadvertently or otherwise, conveying a message that only a few places on campus are “safe” for LGBT community members. For one thing, I fervently hope and believe that that message is inaccurate. I know that there is a range of beliefs and attitudes about sexuality and gender issues, but at least in the parts of campus I know about (faculty and administrative offices, primarily), I think that a commitment to support of LGBT people is by far the norm.

More importantly, we should make clear that “safety” (at the very least!) for LGBT people is expected of all community members. Any faculty or staff member who makes LGBT students feel unsafe is guilty of professional misconduct. Walking around campus and seeing that a small fraction of offices are “safe” conveys precisely the opposite message, namely that we as an institution find “unsafeness” to be an acceptable and even normal state of affairs.

I’d like to suggest a simple name change. Those who have made the Safe Zone pledge should be called something affirmative, such as “allies” or some similar term. That seems to me to be better in every way: it’s more accurate, it conveys a more positive and supportive message, and it avoids the pitfall of defining “unsafeness” as the norm.

Just to make 100% sure there’s no confusion about this, let me repeat that I strongly support the Safe Zone program. I think it’s a great thing to do, and if you’re a colleague or student of mine, I urge you to take part! I just think that a simple name change would make the program stronger.