The passive voice can be used

I’m trying to be pretty rigorous in evaluating my students’ writing, but one thing I’m not telling them is to avoid the passive voice. I think that the avoid-the-passive rule, despite its popularity among writing teachers and in usage guides, is pretty much a superstition. It’s a marginally more worthwhile rule than other superstitions such as avoiding split infinitives, but only marginally.

Lots of people disagree with me about this, as I found when I participated in a big discussion of this on my brother’s Facebook wall recently. (I also seem to end up discussing the Oxford comma with surprising frequency on Facebook. No doubt once this information gets out I’ll be deluged with friend requests.) So I was glad to see this spirited defense of the passive by linguist Geoffrey Pullum recently.

Pullum also wrote a blistering takedown of Strunk and White a while back. I had mixed feelings about that one, but I think he’s got things right in the passive-voice piece.

Scientists are often taught to write in the passive in order to de-emphasize the role of the experimenter. You’re supposed to say “The samples were collected” instead of “We collected the samples,” because it’s not supposed to matter who did the collecting. Personally, I think this is another superstition, roughly equal in silliness to the no-passive-voice superstition. Ignore them both, and write whatever sounds best. (In most cases like the above, I think that the active-voice construction ends up sounding more natural.)

I have heard one cogent argument in favor of teaching the avoid-the-passive rule: even if write-whatever-sounds-better is a superior rule, it’s not one that most inexperienced writers are capable of following. They need firm rules, even if those rules are just heuristics which they’ll later outgrow.

There’s some truth in this, and as long as we’re all clear that avoid-the-passive is a sometimes useful heuristic, as opposed to a firm rule, I have no major objections. But there are so many exceptions to this rule that I’m not convinced it’s all that good even as a heuristic. As Pullum points out, Orwell’s essay warning against the passive is itself 20% passive.

At least in the case of my students, overuse of the passive doesn’t seem like one of the top priorities to address. If I’m looking for heuristics to help them improve their writing, this one wouldn’t be near the top of the list. Here are two better ones that come immediately to mind:

  • Cut out all intensifiers (“very”, “extremely”, etc.), unless you have a good reason for them.
  • If you feel the need to include a qualifier like “As I mentioned earlier,” then the sentence in question probably doesn’t need to be there at all.
(Rules like these can be hard to follow. I initially wrote “a very good reason” in the first one, for instance.)

Addenda: Libby rightly points out “In other words” as a marker for the sort of thing I’m talking about in the second “rule.” A couple more I’d add to the list:

  • Don’t use fancy words for their own sake, especially if you’re in any doubt about the word’s precise meaning. Plain, familiar words are just fine.
  • Read your work aloud. Often, a sentence that looks OK on the page sounds unnatural when you hear it.
  • If you’ve got a really long paragraph (at a rough guess, greater than about 200 words), chances are that you’ve muddled together several different ideas, each of which deserves its own paragraph.
One final point, emphasized by Pullum: An additional problem with teaching the avoid-the-passive rule is that most people don’t find grammar intuitive and don’t even recognize passive constructions correctly a lot of the time. (This is the place where his takedown of Strunk and White is most compelling. Even they get it wrong most of the time.) The avoid-the-passive rule seems to be meant as a simple proxy for more difficult rules, but it’s not even simple for most people in the target group.

A scientist teaching writing

I’m teaching a first-year seminar this semester, which is quite a different sort of course for me. We’re at the halfway point in the semester, which seems like as good a time as any to reflect a bit on how it’s going.

First, some background. First-year seminars replaced the Core course we used to require of all entering students (a change I strongly supported, by the way). Under the current system, all students have to take a first-year seminar each semester of their first year. These courses cover a wide variety of topics, based on faculty interest and expertise, but they’re all supposed to have certain things in common. Perhaps the most important of these is that all seminars are “writing-intensive.”

My seminar is called “Space is Big.” It’s about how ideas about the size of the Universe have changed over time, focusing on three periods: the Copernican revolution, the early 20th century, and the present.

So what do I have to say at the halfway point?

First, reading and grading essays takes a lot of time. It’s much harder than grading problem sets and exams. This is not a surprise, of course. The most time-consuming part is writing comments on each essay. I find pretty detailed comments are necessary, both to clarify my own thinking about why I’m giving the grade I am and more importantly to give the student guidance for improvement.

There are some teaching duties we scientists have that others don’t (designing labs, for instance). When we feel like complaining about that sort of thing, we should remember how much easier we generally have it when it comes to grading. (Not that we’ll stop complaining. Complaining is one of the great joys of life. It sets us apart from the animals.)

Based on my experience so far, the main problems our students have with their writing involve organization and structure: they’re pretty good at the individual sentence level, but they sometimes have trouble combining those sentences in a coherent way. The most common serious flaw in my students’ essays is the long, rambling paragraph that contains lots of true facts in no discernible order. Other problems include unnecessary repetitiveness and puffed-up, diffuse phrases that add no meaning. (I should add that not all of my students have these problems: some of them write quite well.)

This should be reassuring to my science colleagues, some of whom are convinced that they’re not qualified to teach writing because they don’t know the rules of grammar and usage. True, some science professors do get confused about grammar and usage in ways that you wouldn’t expect to see from, say, an English or history professor. (Present company excluded, of course! I’m a bit of a usage geek, and while I have many flaws, you’re not likely to catch me in a comma splice.) But based on my experience, the main sort of help students need with their writing concerns structuring an argument clearly, logically, and concisely, not misplacing apostrophes.

We as scientists are perfectly qualified to teach and evaluate writing in this sense. We spend huge amounts of our time writing and evaluating other people’s writing (papers, grant proposals, etc.). We wouldn’t have gotten anywhere in science without skill in both these areas. That’s not to say that teaching and evaluating writing is easy — for scientists or anyone else. But we can do it.

And by the way, for those who are concerned about gaps in their ability to teach grammar and usage (or other aspects of writing), the University’s Writing Center has good support for faculty and students.

This course is far more work than a “normal” course of the sort I’m used to, but on the whole it’s been fun, mostly because I get to read and think about familiar subjects in a new way. I urge my science colleagues not to be scared to try it out.

Doom from the sky

I was on the Channel 8 Richmond TV news last night. You can see the video here.

Since I’m apparently the only astrophysicist in the greater Richmond area, I sometimes get asked to comment on space stories. I think this is my first time on this channel; I’ve been on Channel 6 from time to time.

In this case, they wanted to talk about the UARS satellite, which is going to reenter the atmosphere in the next couple of weeks. Some pieces are predicted to survive reentry and reach the ground.

Two disappointing things about this piece:

  1. The reporter says that there’s a 1 in 3200 chance of “being hit” by the debris. This is NASA’s estimate of the chance of someone, somewhere in the world being hit. The chance of any given person (such as you) being hit is 7 billion times smaller — i.e., one in 20 trillion. (There’s a quick check of this arithmetic after the list.) I stated that in the interview, but they chose not to use that part. The way they stated it is extremely misleading.
  2. The Santa Claus line is mine.
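
To make the distinction concrete, here’s a quick back-of-the-envelope check (my own arithmetic, not NASA’s):

```python
# NASA's 1-in-3200 figure is the chance that *someone, somewhere on Earth*
# is hit by a piece of debris. Dividing by the world population gives the
# odds for any one particular person.
p_anyone = 1 / 3200            # chance that anyone at all is hit
population = 7e9               # world population, roughly
p_you = p_anyone / population  # chance that one specific person is hit
print(f"1 in {1 / p_you:,.0f}")  # -> 1 in 22,400,000,000,000 (~20 trillion)
```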

Music of the spheres

I’m teaching about the Copernican revolution in my first-year seminar these days. Before getting to Copernicus, we’re taking a look at what people thought about the motions of the planets at earlier times. We’re particularly focusing on the ancient Greeks, since that’s largely what Copernicus and pals were responding to.

Most astronomy textbooks talk about the Ptolemaic system, with its epicycles, deferents, and the like. But before there was Ptolemy, people like Eudoxus came up with pretty good, detailed models to explain planetary motion based on the idea that all of the heavenly bodies were attached to nested, concentric spheres. This “homocentric sphere” picture was very important, largely because it’s the one Aristotle championed.

To explain the complicated motions of the planets in this model, you need to use multiple spheres, all rotating about different axes at different rates. You’d think there’d be nice animations out there on the Web somewhere to show how this all worked, but I couldn’t find any, so I made my own:

More images and detailed explanations here.
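
For anyone who wants to play with the geometry directly, here’s a minimal sketch in Python (my own toy model, not the code behind the animation): each sphere spins uniformly about its own axis, and each inner sphere’s axis is fixed in, and carried along by, its parent sphere.

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about the unit vector `axis` by `angle` (Rodrigues' formula)."""
    u = np.asarray(axis, dtype=float)
    u /= np.linalg.norm(u)
    K = np.array([[0, -u[2], u[1]],
                  [u[2], 0, -u[0]],
                  [-u[1], u[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def planet_position(t, spheres, p0):
    """Compose the spheres' rotations, outermost first; the planet starts at p0."""
    R = np.eye(3)
    for axis, rate in spheres:
        R = R @ rot(axis, rate * t)
    return R @ p0

# Eudoxus's trick: two spheres spinning at equal and opposite rates about
# axes tilted by `tilt`, with the planet at the crossing point of the two
# equators. The planet then traces a figure-eight (the "hippopede"), which
# is what lets the model mimic retrograde motion.
tilt = np.radians(20)
spheres = [([0, 0, 1], 1.0),                        # outer sphere
           ([np.sin(tilt), 0, np.cos(tilt)], -1.0)] # inner, tilted sphere
p0 = np.array([0.0, 1.0, 0.0])

for t in np.linspace(0, 2 * np.pi, 9):
    x, y, z = planet_position(t, spheres, p0)
    print(f"t = {t:4.2f}: ({x:+.3f}, {y:+.3f}, {z:+.3f})")
```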

Puzzle 2

Here’s the second puzzle I found in the August AJP, specifically in this article. (I describe the first puzzle in another post.) I’d never seen this one before, and I found the answer to be very counterintuitive.

Suppose you have a point charge located somewhere near a perfect conductor with zero net charge. We’re doing good old electrostatics here — nothing’s moving. Intro physics students learn what generically happens in this situation: the presence of the point charge induces negative charge on the near side of the conductor and positive charge on the far side (assuming the point charge is positive). Because the negative charge is closer, you get a net attraction:

Here’s the puzzle: is there any arrangement (i.e., any choice of shape for the conductor and location for the point charge) that leads to a repulsive force between the two?

Figure (b) above shows one way to define “repulsive” more precisely, although pretty much any way will do. Suppose that there is some plane (dashed line in the figure) with the point charge on one side and the whole conductor on the other side. Is there any situation in which the force on the point charge points away from the plane?
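
For reference, here’s the standard attractive case worked out numerically for the simplest shape, a neutral conducting sphere. This is my own sketch using the textbook method of images, not anything from the AJP article, and it doesn’t give away the answer to the puzzle.

```python
# Force on a point charge q a distance d from the center of a *neutral*,
# isolated conducting sphere of radius a, via the method of images.
# Grounded-sphere image: charge -q*a/d at distance a**2/d from the center;
# adding +q*a/d at the center restores neutrality.
K = 8.9875517873681764e9  # Coulomb constant, N m^2 / C^2

def force_on_charge(q, d, a):
    """Radial force on q; negative means attraction toward the sphere."""
    q_image = -q * a / d   # grounded-sphere image charge
    r_image = a**2 / d     # its distance from the center
    q_center = +q * a / d  # compensating charge at the center (neutral sphere)
    return K * q * (q_image / (d - r_image)**2 + q_center / d**2)

# The force comes out negative (attractive) for every d > a, just as the
# intro-physics picture says:
for d in [1.5, 2.0, 5.0, 20.0]:
    print(f"d = {d:5.1f}: F = {force_on_charge(1e-6, d, 1.0):+.3e} N")
```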

Physics puzzles

The August 2011 issue of the American Journal of Physics (paywalled, I assume) has two articles about nice physics puzzles. The statement of these puzzles should be understandable to people who know university-level introductory physics, but the solutions are hard.

Here’s the first one. I’ll put the second in another post.

Suppose you have a strangely-shaped perfectly reflecting cavity like the one in Figure (a) below. The surface consists of parts of two ellipsoids and one sphere. The ellipsoids have foci A and B, and the sphere’s center is at B. Any light ray leaving point A hits part of an ellipsoid and ends up at B. Some light rays leaving B hit ellipsoids and end up at A, while others hit the sphere and go back to B.
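
The claim that every ray from A ends up at B rests on the focal property of the ellipse. Here’s a quick 2-D numerical check of that property (my own sketch, not from the article): a ray leaving one focus reflects off the ellipse straight through the other focus.

```python
import numpy as np

a, b = 2.0, 1.0                      # semi-major and semi-minor axes
c = np.sqrt(a**2 - b**2)             # distance of each focus from the center
A, B = np.array([-c, 0.0]), np.array([c, 0.0])  # the two foci

for theta in np.linspace(0.1, 2 * np.pi, 7):
    P = np.array([a * np.cos(theta), b * np.sin(theta)])  # point on the ellipse
    n = np.array([P[0] / a**2, P[1] / b**2])              # outward normal there
    n /= np.linalg.norm(n)
    d = (P - A) / np.linalg.norm(P - A)                   # incoming ray, A -> P
    r = d - 2 * np.dot(d, n) * n                          # reflected direction
    to_B = (B - P) / np.linalg.norm(B - P)                # direction P -> B
    print(f"reflected . (P -> B) = {np.dot(r, to_B):+.6f}")  # +1.000000 every time
```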

Now put objects (blackbodies) at points A and B. They both radiate. All of the radiation from A heats up B, but only some of the radiation from B heats up A. So if the two bodies start out at the same temperature, there’ll be a net energy flow from A to B. But that violates the second law of thermodynamics. What’s going on?

In real life, such a system wouldn’t obey ray optics — the radiation would diffract around, eventually filling the volume of the cavity. Also, the walls wouldn’t really be perfectly reflective, so they’d heat up and radiate themselves. But I don’t think those considerations count as resolutions of the paradox: we can certainly imagine a world in which ray optics works and reflection is perfect, and the second law should hold in such a world.

Someone told me this puzzle back when I was in grad school, and it bothered me for a while. Eventually, I think I hit on the same answer as the one in the AJP article.

Pioneer anomaly resolved?

It looks like the Pioneer anomaly may have been explained.

The Pioneer anomaly is the observed fact that Pioneer 10 and 11 are decelerating as they fly away from the Sun by an amount that is very, very slightly more than you’d expect from good old F = ma. The size of the anomaly is incredibly tiny — less than 10⁻⁹ m/s². It seems incredible that such a small effect is even measurable, but over long times even very tiny effects add up. According to Wikipedia (I haven’t checked their numbers, but they’re always right about this sort of thing),

If the positions of the spacecraft are predicted one year in advance based on measured velocity and known forces (mostly gravity), they are actually found to be some 400 km closer to the sun at the end of the year.

That 400 km is due to the anomalous acceleration.
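
Here’s a quick check of that figure (my own arithmetic, using the commonly quoted value of the anomalous acceleration):

```python
# Over a time t, a constant anomalous acceleration shifts the predicted
# position by (1/2) a t^2.
a = 8.74e-10            # anomalous acceleration, m/s^2 (commonly quoted value)
t = 365.25 * 24 * 3600  # one year, in seconds
shift = 0.5 * a * t**2  # accumulated position offset, in meters
print(f"shift after one year: {shift / 1000:.0f} km")  # ~435 km, i.e. "some 400 km"
```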

Some people hypothesized that this effect might be due to some new force, or to a deviation of the gravitational force from standard physics. The latter idea fits in with the idea, held by some physicists, that the phenomena of dark energy and (possibly) dark matter are actually signs that we don’t understand gravity.

But when an effect is so tiny, it’s very hard to be sure that there’s not some more mundane cause. The recent analysis, which has been accepted for publication in Physical Review Letters, seems to point toward a mundane source.

The most plausible mundane explanation is that energy is being radiated away by the radioisotope thermoelectric generators. That energy wouldn’t exert any net force on the craft if it were radiated equally in all directions, but if some of it is bouncing off of the craft (specifically, a big antenna), then there would be a net force.
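
Here’s an order-of-magnitude check (my own, with rough spacecraft numbers) that this mechanism is in the right ballpark. Radiation carries momentum, so power P emitted or reflected preferentially in one direction exerts a force of roughly P/c.

```python
c = 3.0e8     # speed of light, m/s
m = 240.0     # Pioneer 10 mass, kg (rough value)
a = 8.74e-10  # anomalous acceleration, m/s^2
F = m * a            # force needed to produce the anomaly
P_directed = F * c   # directed radiated power that would supply that force
print(f"force: {F:.1e} N; directed power needed: {P_directed:.0f} W")
# ~60 W, a small fraction of the RTGs' few kilowatts of heat, so the
# explanation is at least plausible on energetic grounds.
```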

As I understand it, the recent analysis shows that the acceleration is decaying away with time in the way you’d expect if this explanation is correct. Previous analyses had suggested the acceleration was constant in time, which was regarded as evidence against the mundane explanation.

I always thought it was pretty unlikely that the Pioneer anomaly was evidence of exotic new physics, so this doesn’t surprise me too much. It’d be nice if it were pointing us toward some revolutionary new theory, but such things don’t come along very often.

James Webb Space Telescope in danger of cancellation

Pretty much every astronomy blogger is writing about the prospect of the James Webb Space Telescope being canceled. If you want to pick just one to read, this one’s pretty good.

In particular, if you haven’t seen the video at the end, watch it.

The problems with JWST are real and significant, but I’m still strongly in favor of increasing the awesome.

Goodbye to the shuttle

In honor of the last Space Shuttle flight, the journal Science has a retrospective by Dan Charles (behind a paywall, I think) on the science achievements of the shuttle. People who’ve paid a lot of attention to science and the space program over the years will already know the main points, but it’s worth a read for those who haven’t. The most useful part is the timeline (also paywalled, I assume).

To tease out the scientific contributions of the shuttle, Science has grouped the program's 134 missions into five categories. (The sixth, and largest, category is those missions with little or no scientific activity.) By frequency, the exploration of microgravity leads the way, with a substantial amount of such research aboard 45 missions. In second place are major observations of Earth or the heavens (12 missions), followed by the launching of large scientific instruments (seven missions), repairs and upgrades to the Hubble telescope (five missions), and research on the effects of the external space environment (three missions).

The shuttle did launch three “great observatories” — the Hubble Space Telescope, Chandra (X-rays), and Compton GRO (gamma-rays) — which have been amazing tools. As Charles puts it,

All the instruments have led to stunning scientific advances — but all could have been launched on crewless rockets.

The shuttle was a crazily overpriced way to do this science.

One thing the shuttle did do that couldn’t have been done with crewless rockets: service, repair, and upgrade Hubble. Economically, this still isn’t worth it: you’d be better off just building and launching a new telescope for the cost of the servicing missions. Charles notes a counterargument:

Critics of crewed space flight point out that NASA could have built and launched an entirely new space telescope for the price of the repair missions. But Grunsfeld says that's unrealistic; it would have taken longer to build and launch a second-generation Hubble, for one thing, and there's no guarantee the project would have been completed.

If I understand it correctly, this is a claim that, due to the irrational way we allocate funds, it would have been politically unfeasible to do the science in a more efficient way. That may be true, for all I know.

Other than the great observatories, the main impression I get from this piece is how little science has come out of the shuttle and the International Space Station. Estimates of the cost of the ISS vary, but it’s at least $100 billion, and the cost of the shuttle program is something like $170 billion. There’s overlap in those two numbers — the first one includes the cost of shuttle flights to service the ISS — so you can’t just add them up. But anyway, it means that the combined cost of the programs is at least 60 times the cost to the US government of the Human Genome Project. For that kind of money, we should be opening up vast new scientific vistas, but they’re nowhere to be found.
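
Rough arithmetic behind that multiplier (my own check, using the lower-bound figures above and the commonly cited cost of the Human Genome Project; it’s only as good as those estimates):

```python
iss_cost = 100e9      # ISS, lower-bound estimate ($)
shuttle_cost = 170e9  # shuttle program, rough estimate ($)
hgp_cost = 2.7e9      # Human Genome Project, commonly cited US cost ($)
# The two program costs overlap, so the combined total is at least the larger:
combined_floor = max(iss_cost, shuttle_cost)
print(f"at least {combined_floor / hgp_cost:.0f}x the Human Genome Project")  # ~63x
```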

Of course, all this says is that the shuttle and space station are failures when viewed as science projects. If we acknowledge that science is not and never has been the primary purpose of human space flight, then this isn’t the right yardstick to use in measuring the programs’ success.

If not science, what is the point of the human space flight program, then? The three I’ve heard people talk about are

  1. the intrinsic awesomeness of exploration (“going where no one has gone before”),
  2. inspiring the next generation of scientists and engineers,
  3. somehow doing something nice for international relations.

I don’t buy the first two. What we’ve done for the past 40 years in human space flight is not exploration in any meaningful sense, and I see no evidence that it’s inspiring. I don’t have any knowledge that lets me evaluate the last one, but I’ve never seen any argument that persuades me that our relations with Russia (or anywhere else) are better than they otherwise would be, to the tune of hundreds of billions of dollars, because of the ISS.

The comic Abstruse Goose reacts with sadness to the end of the shuttle program:

But the real sadness occurred at the beginning of the shuttle program, when we decided to put all our effort into shipping stuff back and forth to low-Earth orbit, so that we completely forgot how to do real exploration.

Error-correcting in science

Carl Zimmer has a column in Sunday’s NY Times about the ways in which science does and does not succeed in self-correcting when incorrect results get published:

Scientists can certainly point with pride to many self-corrections, but science is not like an iPhone; it does not instantly auto-correct. As a series of controversies over the past few months have demonstrated, science fixes its mistakes more slowly, more fitfully and with more difficulty than Sagan's words would suggest. Science runs forward better than it does backward.

One of the controversies inspiring this column concerns the scientists who failed to replicate a controversial clairvoyance study and were unable to publish their results, because the journals only publish novel work, not straight-up replications. I offered my opinion about this a while ago: I think that this is a bad journal policy, and that in general attempts to replicate and check past work should be valued at least a bit more than they currently are.

On the other hand, it’s possible to make way, way too much of this problem. The mere fact that people aren’t constantly doing and publishing precise replications of previous work doesn’t mean that previous work isn’t checked. I don’t know much about other branches of science, but in astrophysics the usual way this happens is that, when someone wants to check a piece of work, they try to do a study that goes beyond the earlier work, trying to replicate it in passing but breaking some new ground as well. (A friend of mine pointed this out to me off-line after my previous post. I knew that this was true, but in hindsight I should have stressed its importance in that post.)

For instance, one group claims to have shown that the fine-structure constant has varied over time, then another group does an analysis, arguably more sensitive than the original, which sees no such effect. If the second group can successfully argue that their method is better, and would have found the originally-claimed effect had that effect been real, then they have in effect refuted that claim.

(I should add that in this particular example I don’t have enough expertise to say that the second group’s analysis really is so much better as to supersede the original; this is just meant as an example of the process.)

The main problem with Zimmer’s article is a silly emphasis on the unimportant procedural matter of whether an incorrect claim in a published paper has been formally retracted in print. This is sort of a refrain of the article:

For now, the original paper has not been retracted; the results still stand.

[T]he journal has a longstanding policy of not publishing replication studies … As a result, the original study stands.

Ms. Mikovits declared that a retraction would be “premature” … Once again, the result still stands.

It’s completely understandable that a journalist would place a high value on formal, published retraction of error, but as far as the actual progress of science is concerned it’s a red herring. Incorrect results get published all the time and are rarely formally retracted. What happens instead is that the scientific community goes on, learns new things, and eventually realizes that the original result was wrong. At that point, it just dies away from lack of attention.

One example that comes to mind is the Valentine’s Day monopole. This was a detection of a magnetic monopole, which would be a Nobel-Prize-worthy discovery if verified. It was never verified. As far as I know, the original paper wasn’t formally retracted, and I’m not even sure whether it’s known what went wrong with the original experiment, but it doesn’t matter: subsequent work has shown convincingly that this experiment was in error, and nobody believes it anymore.

(By the way, this is also an illustration of the fact that getting something wrong doesn’t mean you’re a bad scientist. This was an excellent experiment done by an excellent physicist. If you’re going to push to do new and important science, you’re going to run the risk of getting things wrong.)

Of course, there is some value in a formal retraction, especially in cases of actual misconduct. And when an incorrect result is widely publicized in the general press, it would be nice if the fact that it was wrong were also widely publicized. But a retraction in the original scholarly journal would be of precisely no help with the latter problem — the media outlets that widely touted the original result would not do the same for the retraction.