My brother Andy has a guest post on Andrew Revkin’s blog at the NY Times. There’s an audio slide show narrated by Andy as well. Check it out.
Bunn family bragging
News release from the American Society of Hematology about my father:
Past ASH president H. Franklin Bunn, MD, of the Brigham and Women's Hospital in Boston, will be presented with the Wallace H. Coulter Award for Lifetime Achievement in Hematology, which was established in 2007. This award, named for Wallace Henry Coulter, a prolific inventor who made important contributions to hematology and to ASH, is bestowed on an individual who has demonstrated a lifetime commitment and made outstanding contributions to hematology, and who has made a significant impact on education, research, and/or practice. Dr. Bunn will receive the award for his leadership in advancing the field of hematology and hematology research for more than 40 years. Throughout his career, Dr. Bunn's research has represented only a part of his commitment to the field. He has served on many National Institutes of Health advisory groups and councils and as an Associate Editor of Blood, a reviewer and editor of publications about hemoglobin and hemoglobin disorders, and an author of two textbooks. Most importantly, he has been an inspiring teacher of hematology to medical students and a masterful mentor of fellows and junior faculty.
Small steps and giant leaps
As everyone must know by now, today is the 40th anniversary of Neil Armstrong and Buzz Aldrin walking on the Moon. This is a natural occasion for people like Tom Wolfe and many others to think about the future of human space flight. So here’s one astrophysicist’s opinion.
There’s one good reason to consider trying to send humans to Mars: It’d be awesome to send humans to Mars. The idea of people going to places people have never been before is incredibly exciting and inspiring. If we do it, we should do it for its own intrinsic awesomeness.
Here are some bad reasons to send humans to Mars:
Bad reason 1. To do science. Sending fragile human bodies to Mars (and then of course having to bring them back) is about the least cost-effective way imaginable to gain scientific data about Mars. Wolfe says
"Why not send robots?" is a common refrain. And once more it is the late Wernher von Braun who comes up with the rejoinder. One of the things he most enjoyed saying was that there is no computerized explorer in the world with more than a tiny fraction of the power of a chemical analog computer known as the human brain, which is easily reproduced by unskilled labor.
but, with all due respect to both Wolfe and von Braun, this is nonsense. The point is that we don’t have to choose between sending robots and having human brains in charge: we can and do send robots that are controlled by humans on Earth.
Bad reason 2. To give us an alternative for when the Sun goes out. Wolfe again:
It's been a long time, but I remember [von Braun] saying something like this: Here on Earth we live on a planet that is in orbit around the Sun. The Sun itself is a star that is on fire and will someday burn up, leaving our solar system uninhabitable. Therefore we must build a bridge to the stars, because as far as we know, we are the only sentient creatures in the entire universe. When do we start building that bridge to the stars? We begin as soon as we are able, and this is that time. We must not fail in this obligation we have to keep alive the only meaningful life we know of.
This sounds plausible at first, but the problem is that the time scales are all wrong. This is a good argument for a policy that would ramp us up to interstellar travel on a time scale of a billion years or so, but it’s certainly not obvious that going to Mars now (on a time scale of 10 or 100 years) is the right strategy to reach that billion-year goal. On the contrary, we’re much more likely to reach that long-term goal by slowly developing as a species, improving our technology in a bunch of ways that we haven’t even thought of yet. It’s totally implausible that pouring a lot of resources into the specific narrow goal of sending a hunk of metal with a few humans in it to Mars is the most efficient way to further the needed slow, long-term development.
Rocket fuel from lunar water
For many years now, Bob Park has been writing What’s New, a highly opinionated and often sarcastic weekly column about issues relating physics to society and public policy. Here’s a bit from the latest:
WATER: IS H2O A FORMULA FOR ROCKET FUEL? Even as I write this, the Lunar Crater Observation and Sensing Satellite is getting itself aligned in a lunar polar orbit. Its job is to look into deep lunar craters for any sign of the frozen water that some are certain is hidden in the shadows. It would be very expensive water. I told NASA that I would be happy to leave my garden hose out and they could come by and take all the water they want. They weren't interested. They wanted water on the Moon, where they could use it to refuel rockets. “Is water rocket fuel,” I asked? “Hydrogen is a component of rocket fuel,” I was told. “You have to split the water.” “Doesn’t that take energy,” I asked? “Excuse me,” he said, “my cell phone is ringing.”
(As of this writing, the latest column is not up on the web site; it was sent by email on Friday.)
For those who’ve never seen the column, this is a pretty typical sample in both content and tone. If you like this sort of thing, then this is the sort of thing you’ll like.
Anyway, I’ve heard this sort of thing before: if we find water on the Moon, we can separate the hydrogen and oxygen to make rocket fuel. One would hope that any physics student — or indeed a decent middle-school science student — could see the problem here: to separate the molecules requires precisely as much energy as you get back when you burn them. So what are you getting out of this? That’s Park’s point.
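Here’s the whole accounting in a few lines (a sketch; the enthalpy figure is the standard handbook value, and real electrolyzers and rocket engines fall short of the ideal, which only makes things worse):

```python
# Splitting water costs exactly the energy you later recover by burning the
# hydrogen, in the ideal case. Standard enthalpy of formation of liquid H2O:
DELTA_H = 285.8  # kJ per mole of water

energy_to_split = DELTA_H      # minimum energy to electrolyze one mole
energy_from_burning = DELTA_H  # maximum energy recovered by burning the H2

print(f"Net gain: {energy_from_burning - energy_to_split:.1f} kJ/mol")  # 0.0
```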
I find it hard to believe that NASA would be stupid enough to make this elementary error (or perhaps I should say that they would be stupid in this particular way). And as much as I enjoy Park’s column, he does have his axes to grind, so you have to read him skeptically. I should add that most of the time, his biases are on the side of the (metaphorical, secular) angels: He seems to be primarily animated by hatred of ignorance, superstition, and pseudoscience.
Anyway, it turns out that when people talk about using water as rocket fuel, they’re envisioning a future in which we have good ways of producing energy, but those ways aren’t portable: think nuclear power plants or massive arrays of solar cells. If we had a nuclear plant on the Moon, we could do electrolysis and generate lots of hydrogen for use as a rocket fuel, which would be a lot more practical than putting the nuclear plant on the rocket itself.
That idea may or may not be realistic, but it isn’t crazy and stupid in the manner Park implies. (On the other hand, what he wrote is much funnier and pithier than what I just wrote. Life is full of tradeoffs.)
Self-promotion
Our local public radio station aired a short interview with me about an NSF grant I was awarded recently.
I don’t know whether the interviewer is aware of what I wrote about a previous piece he did; I hope not. Needless to say, I like the piece about me better.
What is a Kelvin?
I admit to a perverse fascination with metrology, specifically with the question of how fundamental units of measure are defined, so I enjoyed this article on the possibility of redefining the kelvin. (Among my other forms of geekiness, I also like knowing about grammar and usage trivia, so yes, it is “kelvin,” not “Kelvin”.)
In the old days, of course, the unit of temperature was defined by setting 0 and 100 degrees Celsius to be the freezing and boiling points of water. That’s far from ideal, since those temperatures (especially the boiling point) depend on atmospheric pressure. We can do much better by defining the temperature scale in terms of a different pair of temperatures, namely absolute zero and the temperature of the triple point of water. The triple point is the specific combination of temperature and pressure at which the liquid, gas, and solid phases all coexist. Since there’s only one pair (T, P) at which this happens, it gives a unique temperature that we can use to pin down our temperature scale. That’s how things are done right now: the kelvin is defined such that the triple point is exactly 273.16 K, and of course absolute zero is exactly 0 K.
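In code form, just to make the bookkeeping explicit (both offsets are exact by definition, not measured):

```python
T_TRIPLE = 273.16  # K, exact by the current definition of the kelvin

def kelvin_to_celsius(T):
    return T - 273.15  # an exact offset, also by definition

print(kelvin_to_celsius(T_TRIPLE))  # 0.01 C: the triple point of water
print(kelvin_to_celsius(0.0))       # -273.15 C: absolute zero
```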
So what’s wrong with this? According to the article,
“It’s a slightly bonkers way to do it,” says de Podesta. According to the metrological code of ethics, it is bad form to grant special status to any single physical object, such as water.
This is certainly true in general. For instance, the kilogram is defined in terms of an object, a particular hunk of metal in Paris whose mass is defined to be exactly 1 kg. That’s clearly a bad thing: what if someone lost it? (Or, more likely, rubbed a tiny bit of it off when removing dust?)
But this doesn’t really make sense to me as a problem with the kelvin. Water isn’t an “object”; it’s a substance, or better yet a molecule. At the moment, the unit of time is defined in terms of a particular type of atom, namely cesium-133, and that definition is regarded as the gold standard to which others aspire. Why is cesium-133 OK, but H2O bad?
Although the objection-in-principle above doesn’t seem right, apparently there are some important pragmatic reasons:
‘Water’ needs to be qualified further: at present, it is defined as ‘Vienna standard mean ocean water’, a recipe that prescribes the fractions of hydrogen and oxygen isotopes to at least seven decimal places.
OK, I’ll admit that that’s a problem.
The proposed solution is to define the kelvin in relation to the joule, using the familiar relationship E = (3/2) kT for the mean kinetic energy per atom of a monatomic ideal gas. This is fine, as long as you know Boltzmann’s constant k sufficiently accurately. Quite a while ago, researchers found a clever way of measuring it, and they’re so confident about the method that they wrote in their paper
If by any chance our value is shown to be in error by more than 10 parts in 10^6, we are prepared to eat the apparatus.
This boast is all the more impressive when you consider that the apparatus contains a lethal quantity of mercury.
The method involves ultraprecise measurements of the speed of sound, which is proportional to the rms speed of atoms and hence gives the average energy per atom of the gas. The original work wasn’t precise enough to justify changing the definition of the kelvin, but the hope is that with improvements it will be.
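To make the logic concrete, here’s a toy version of the measurement (a sketch: the speed-of-sound figure is a rough textbook number for argon, not the experimenters’ data):

```python
GAMMA = 5.0 / 3.0               # heat-capacity ratio of a monatomic ideal gas
T_TRIPLE = 273.16               # K, exact by the current definition
M_ARGON = 39.948e-3 / 6.022e23  # kg, mass of one argon atom

c_sound = 307.8                 # m/s, rough speed of sound in argon at T_TRIPLE

# For a monatomic ideal gas, c^2 = gamma * k * T / m, so:
k = c_sound**2 * M_ARGON / (GAMMA * T_TRIPLE)
print(f"k = {k:.4e} J/K")       # about 1.38e-23 J/K, the accepted value
```

Measure c and T precisely enough and you’ve pinned down k; turn the definition around, fix k exactly, and temperature becomes something you measure in joules.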
Of course, then the kelvin will be defined in terms of the joule, which is itself defined in terms of the kilogram, which depends on that hunk of metal in Paris. People are working hard on finding better ways to define the kilogram, though, so we hope that that problem will go away.
Bunn of the North
As he did last year, my brother Andy is heading off to Siberia to study the climate of the Arctic, specifically
the transport and transformations of carbon and nutrients as they move with water from terrestrial uplands to the Arctic Ocean, a central issue as scientists struggle to understand the changing Arctic.
You should check out the web site and blog. Among other things, you can find out why he has to travel 11,000 miles to end up a mere 3,000 miles from home.
Risk-averse science funding?
Today’s New York Times has an article headlined “Grant system leads cancer researchers to play it safe,” discussing the thesis that, in competing for grant funds, high-risk, potentially transformative ideas lose out to low-risk ideas that will lead to at most incremental advances. A couple of comments:
Although the article focuses on cancer research, people talk about this problem in other branches of science too. When I served on a grant review panel for NSF not too long ago, we were explicitly advised to give special consideration to “transformative” research proposals. If I recall correctly, NSF has started tracking the success rate of such transformative proposals, with the goal of increasing their funding rate.
Personally, I think this is a legitimate concern, but it’s possible to make too much of it. In particular, in the fairy-tale version of science history that people (including scientists) like to tell, we tend to give much too much weight to the single, Earth-shattering experiment and to undervalue the “merely” incremental research. The latter is in fact most of science, and it’s really really important. It’s probably true that the funding system is weighted too much against high-risk proposals, but we shouldn’t forget the value of the low-risk “routine” stuff.
For instance, here’s how the Times article describes one of its main examples of low-risk incremental research:
Among the recent research grants awarded by the National Cancer Institute is one for a study asking whether people who are especially responsive to good-tasting food have the most difficulty staying on a diet.
Despite the Times’s scrupulous politeness, the tone of the article seems to be mocking this sort of research (and in fact this research in particular). And it’s easy to do: Expect John McCain to tweet about this proposal the next time he wants to sneer at the idea of funding science at all. But in fact this is potentially a useful sort of thing to study, which may lead to improvements in public health. Yes, the improvement will be incremental, but when you put lots of increments together, you get something called progress.
Do statistics reveal election fraud in Iran?
Some folks had the nice idea of looking at the data from the Iran election returns for signs of election fraud. In particular, they look at the last and second-to-last digits of the totals for different candidates in different districts, to see if these data are uniformly distributed, as you’d expect. Regarding the last digit, they conclude
The ministry provided data for 29 provinces, and we examined the number of votes each of the four main candidates — Ahmadinejad, Mousavi, Karroubi and Mohsen Rezai — is reported to have received in each of the provinces — a total of 116 numbers.
The numbers look suspicious. We find too many 7s and not enough 5s in the last digit. We expect each digit (0, 1, 2, and so on) to appear at the end of 10 percent of the vote counts. But in Iran’s provincial results, the digit 7 appears 17 percent of the time, and only 4 percent of the results end in the number 5. Two such departures from the average — a spike of 17 percent or more in one digit and a drop to 4 percent or less in another — are extremely unlikely. Fewer than four in a hundred non-fraudulent elections would produce such numbers.
The calculations are correct. There’s about a 20% chance of getting a downward fluctuation as large as the one seen, about a 10% chance of getting an upward fluctuation as large as the one seen, and about a 3.5% chance of getting both simultaneously.
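If you want to check those numbers, a quick Monte Carlo does it (a sketch: I’m reading the quoted 4 and 17 percent as 5 and 20 out of 116 counts):

```python
import random

random.seed(1)
N_TRIALS = 20_000
N_COUNTS = 116  # four candidates times 29 provinces

low = high = both = 0
for _ in range(N_TRIALS):
    tallies = [0] * 10
    for _ in range(N_COUNTS):
        tallies[random.randrange(10)] += 1
    saw_low = min(tallies) <= 5    # 5/116 is about 4 percent
    saw_high = max(tallies) >= 20  # 20/116 is about 17 percent
    low += saw_low
    high += saw_high
    both += saw_low and saw_high

print(low / N_TRIALS, high / N_TRIALS, both / N_TRIALS)
# roughly 0.2, 0.1, and 0.035
```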
The authors then go on to consider patterns in the last two digits.
Psychologists have also found that humans have trouble generating non-adjacent digits (such as 64 or 17, as opposed to 23) as frequently as one would expect in a sequence of random numbers. To check for deviations of this type, we examined the pairs of last and second-to-last digits in Iran’s vote counts. On average, if the results had not been manipulated, 70 percent of these pairs should consist of distinct, non-adjacent digits.
Not so in the data from Iran: Only 62 percent of the pairs contain non-adjacent digits. This may not sound so different from 70 percent, but the probability that a fair election would produce a difference this large is less than 4.2 percent.
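The 70 percent baseline, by the way, is pure counting (a sketch; to get exactly 70 percent you evidently have to treat 0 and 9 as adjacent):

```python
def distinct_nonadjacent(a, b, wrap=True):
    if a == b:
        return False
    gap = abs(a - b)
    if wrap:
        gap = min(gap, 10 - gap)  # treat 0 and 9 as neighbors
    return gap > 1

pairs = [(a, b) for a in range(10) for b in range(10)]
print(sum(distinct_nonadjacent(a, b) for a, b in pairs) / len(pairs))        # 0.70
print(sum(distinct_nonadjacent(a, b, False) for a, b in pairs) / len(pairs)) # 0.72
```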
Each of these tests alone is of marginal statistical significance, it seems to me, but in combination they start to look significant. I don’t think that’s a fair conclusion to draw, though: this analysis looks like an example of the classic error of a posteriori statistical significance. (This fallacy must have a catchy name, but I can’t come up with it now. If you know it, please tell me.)
The error works like this: you notice a surprising pattern in your data, and then you calculate how unlikely that particular pattern is to have arisen. When that probability is low, you conclude that there’s something funny going on. The problem is that there are many different ways in which your data could look funny, and the probability that one of them will occur is much larger than the probability that a particular one of them will occur. In fact, in a large data set, you’re pretty much guaranteed to find some sort of anomaly that, taken in isolation, looks extremely unlikely.
In this case, there are lots of things that one could have calculated instead of the probabilities for these particular outcomes. For instance, we could have looked at the number of times the last two digits were identical, or the number of times they differed by two, three, or any given number. Odds are that at least one of those would have looked surprising, even if there’s nothing funny going on.
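The arithmetic behind this worry is simple. If each test has a 5 percent false-alarm rate, the chance that perfectly clean data trips at least one of them grows fast with the number of tests (treating them as independent, which the digit tests are only approximately):

```python
# Chance that at least one of n independent 5%-level tests "fires" on
# clean data: 1 - 0.95^n.
for n in (1, 5, 10, 20):
    print(n, round(1 - 0.95**n, 2))
# 1 0.05, 5 0.23, 10 0.4, 20 0.64
```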
By the way, this is an issue that I’m worrying a lot about these days in a completely different context. There are a number of claims of anomalous patterns in observations of the cosmic microwave background radiation. It’s of great interest to know whether these anomalies are real, but any attempt to quantify their statistical significance runs into exactly the same problem.
Solar sailing
I just got back from vacation, which included some long plane trips that gave me a chance to catch up on my magazine reading. So just a couple of months late, I read this article in the Atlantic on the Planetary Society’s attempts to get funding to build a prototype solar sailing spacecraft. For those who don’t know, the idea is to propel the ship using big sails that reflect sunlight. Since photons carry momentum, all of those photons bouncing off of the sail will impart momentum, making the craft go.
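For a sense of scale, here’s the order of magnitude of that push at Earth’s distance from the Sun (the sail area and craft mass are illustrative guesses, not Cosmos 1’s actual specifications):

```python
SOLAR_FLUX = 1361.0  # W/m^2, the solar constant near Earth
C = 2.998e8          # m/s, speed of light

area = 600.0         # m^2, a guess at the sail area
mass = 100.0         # kg, a guess at the craft mass

force = 2 * SOLAR_FLUX * area / C  # factor of 2 because photons reflect
print(f"force = {force * 1e3:.1f} mN, acceleration = {force / mass:.1e} m/s^2")
# about 5 mN and 5e-5 m/s^2: feeble, but it never runs out of propellant
```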
It’s a pretty good article, but there’s one bit of it that baffles me:
Not everyone concedes even the basics. The late Thomas Gold, of Cornell's Center for Radiophysics and Space Research, had insisted that solar sailing would never work, for the same reasons you cannot have a perpetual-motion machine: Carnot's rule and the iron second law of thermodynamics. No machine can extract an unlimited supply of free energy from any source; a certain "degradation" has to occur. And the problem is even more fundamental than that, Gold argued: the beautiful Mylar blades of Cosmos 1, or 2, will be too splendid to function, period. With "a perfect mirror, the two temperatures" (of the sails and the sun) "will be the same," Gold reasoned. "And it follows that the mirror cannot act as a heat engine at all: no free energy can be obtained from the light."
Any time you read a description of a technical argument in a nontechnical article, you have to reverse-engineer the details of the argument from the general description. I can’t do that here: I can’t imagine any way that the argument imputed to Gold could make sense. First, a “perfect mirror” is precisely the sort of thing that will not reach thermal equilibrium with the Sun, since it never absorbs thermal energy from the sunlight. Second, even if you don’t have a perfect mirror, it’s not true that such a system will eventually reach the same temperature as the Sun. For instance, the Earth has been absorbing 6000-degree sunlight for 4 billion years and is still at a relatively comfortable 300 K. Something similar applies to the solar sail. The point is that the Earth-Sun system is not a closed system: both bodies are constantly radiating energy into the much colder deep-space environment. The entire closed system (i.e., the Universe) is very gradually tending toward thermal equilibrium, but the idea that one small piece of it (the Sun and the solar sail) will reach equilibrium on its own, independent of the rest of the universe, is nonsense.
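The Earth’s 300 K is no accident; it’s where absorption and re-radiation balance. For a blackbody at our distance from the Sun, the balance works out like this (a sketch with round numbers):

```python
import math

T_SUN = 5778.0  # K, effective temperature of the solar surface
R_SUN = 6.96e8  # m, solar radius
D = 1.496e11    # m, Earth-Sun distance

# Absorbed power: sigma * T_sun^4 * (R_sun/D)^2 * pi R^2.
# Emitted power:  sigma * T^4 * 4 pi R^2.
# Setting them equal and solving for T:
T_eq = T_SUN * math.sqrt(R_SUN / (2 * D))
print(f"{T_eq:.0f} K")  # about 279 K, nowhere near 5778 K
```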
If there’s a serious argument lurking in there, I’d love to know what it is.