Apologies to Steven Morris

My little piece on evolution and the second law of thermodynamics appeared in the most recent issue of the American Journal of Physics.  (Non-paywall version here.)  I wasn’t going to note that on the blog, since I’ve already written plenty about it before, but there’s one citation I wanted to include that didn’t make it in, so I figured I’d at least point it out here.

After the article had been accepted, Steven Morris pointed me to a piece he wrote back in 2005 for Reports of the National Center for Science Education that, like mine, quantitatively compares the entropy increase supplied by sunlight with the decrease required for evolution.  I was going to add a mention of this to my article when I got the proofs, but apparently AJP doesn’t do proofs for short notes like this, so I missed my chance. I figured I’d at least mention Morris’s piece here as a tiny mea culpa. 

To atone a bit further, I’m going to go send some money off to the NCSE.  This is an organization that fights for the teaching of evolution in US schools.  I used to give them money but haven’t for a while.  You should support them too.

The cost of SETI

I think that things like SETI (the search for extraterrestrial intelligence) are extremely unlikely to find a signal: even if intelligent life is out there, it’s not at all clear that such beings would spend much of their time communicating with each other by sending radio signals that leak off into space at detectable levels.  Even if they do that for a while, they’ll probably quickly learn ways of communicating that are less wasteful and harder to eavesdrop on.  In other words, whatever the other numbers in the Drake equation are like, L is probably quite small.
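The point about L can be made concrete with the Drake equation, N = R* · fp · ne · fl · fi · fc · L. Here's a minimal sketch in Python; every parameter value is a made-up illustrative guess, not a measurement:

```python
# Back-of-envelope Drake equation: N = R* * fp * ne * fl * fi * fc * L.
# All numeric values below are illustrative guesses, not data.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of currently detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Hypothetical values for everything except L:
args = dict(R_star=1.5,  # star formation rate, stars/year
            f_p=1.0,     # fraction of stars with planets
            n_e=0.2,     # habitable planets per such star
            f_l=0.1,     # fraction of those where life arises
            f_i=0.01,    # fraction developing intelligence
            f_c=0.1)     # fraction that ever leak detectable signals

print(drake(**args, L=100))        # short-lived broadcasters: N ~ 0.003
print(drake(**args, L=1_000_000))  # long-lived broadcasters: N ~ 30
```

Since N is linear in L, a short broadcasting lifetime suppresses the expected number of detectable civilizations no matter how optimistic the other factors are.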

Still, I’ve generally had positive, warm fuzzy feelings about SETI.  Even if the odds are terrible, I figured, the payoff is huge, and the cost is low, so why not go ahead?  The first part is certainly true: if SETI saw a signal, it would be about the most important discovery ever made.  But my friend Allen Downey (CS professor at Olin College) recently gave me a convincing argument that the costs are higher than I’d realized, and I think I’ve changed my mind and become anti-SETI as a result.

Here’s my summary of Allen’s argument.

A key part of SETI is combing through vast amounts of data from radio telescopes, looking for signals that look like those of extraterrestrial intelligence.  This is a big computational project, and the SETI people have adopted a clever way to tackle it: they farm it out to huge numbers of supporters, who run the computations on their own computers during times when those computers would otherwise be idle.

The people who do this, of course, are paying a cost: they’re giving away free CPU cycles on their computers.  Aside from any other costs, this costs them money, because a computer that’s actually computing uses more power than one that’s idle.  I think that a decent estimate of that power difference is about 40 watts.  (You can find a bunch of estimates out there for this quantity.  There’s some variation, but this seems to be about right.)  Say a typical user has SETI@home running on one PC about 2/3 of the time for a month.  How much does it cost? To find out, multiply 40 watts times 20 days and convert the result into kilowatt-hours.  These days, 1 kWh costs about 12 cents, so multiply the result by $0.12.

If that’s starting to sound like work, it’s not.  Just ask Google.  (In case you didn’t know, Google’s also a calculator, and it knows a ton of unit conversions.)  The answer is that such a user is spending about $2.30 a month.
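The arithmetic above is short enough to sketch in a few lines of Python, using the same assumed numbers (40 W extra draw, 20 days a month, $0.12/kWh):

```python
# Monthly cost of donated CPU cycles for one volunteer: 40 W of extra
# power draw, running about 20 days (2/3 of a 30-day month), at $0.12/kWh.
extra_power_w = 40
hours_per_month = 20 * 24                              # 480 hours
energy_kwh = extra_power_w * hours_per_month / 1000    # 19.2 kWh
cost_per_month = energy_kwh * 0.12
print(f"${cost_per_month:.2f} per month")              # prints "$2.30 per month"
```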

I’d bet that the typical SETI@home volunteer doesn’t know it’s costing them that much, but if they did, they’d probably still consider it a reasonable cost.

So far, I probably haven’t convinced you that SETI costs too much.  But now let’s think globally.  In total, the project has used up 2 million years of computing time.  If you try the same calculation to get a total cost, you get $84 million.  I don’t know about you, but to me, that’s real money.  When you think about the things that could be done with $84 million, it’s hard to see how this incredible longshot is justified.
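The same recipe, scaled up to the project's cumulative 2 million CPU-years:

```python
# Total electricity cost of SETI@home's 2 million CPU-years, under the same
# assumptions as the per-user estimate: 40 W extra draw, $0.12 per kWh.
extra_power_w = 40
cpu_years = 2e6
hours = cpu_years * 365.25 * 24                # ~1.75e10 CPU-hours
total_kwh = extra_power_w * hours / 1000       # ~7.0e8 kWh
total_cost = total_kwh * 0.12
print(f"${total_cost / 1e6:.0f} million")      # prints "$84 million"
```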

A few random notes:

  1. Nobody’s actually paying this money directly, so it’s not noticeable. But just because the costs are hidden, that doesn’t mean they’re not real.
  2. In fact, the cost is probably underestimated in a bunch of ways.  I was assuming that the computers would all be turned on anyway, and just considering the difference in power due to the extra computational load.  If people are leaving their computers on for SETI@home when they would otherwise be turning them off, then the cost is greater.  Also, the figure of 12 cents/kWh is the direct cost to the consumer, but there are externalities (greenhouse gas emissions, pollution, geopolitical problems due to resource competition) that that market price doesn’t include.
  3. Instead of considering the cost savings, you could consider the other good things that people could be doing with all those CPU cycles.
  4. Let’s come back for a minute to my original point about L in the Drake equation.  My reason for thinking it was small was simply that any communication method that sends radio power into space is wasteful.  A technologically advanced species will learn how to beam its communications directly to the intended recipient.  Allen pointed out a different reason, which I’d never thought of.  To prevent eavesdropping, you want to encrypt your communications.  The better a method of encryption is, the more the output looks just like noise.  If an advanced society is really good at encryption, we won’t notice its signals even if they’re out there, because they’ll look like noise.  I’m not sure I’m convinced by this, but it’s an interesting point.

If the Sun turned into a black hole

Some time back in the ’90s I wrote a document explaining some things about black holes.  To my amazement, people still read it, and they occasionally send me questions as a result.  I’m happy to answer these when I can, and as long as I’m answering them anyway, I might as well post them here.

The latest is from Chris Warring:

My friend and I are having a debate over the question “If the Sun turned into a black hole, what would happen to the Earth’s orbit?”

I quoted from your article http://cosmology.berkeley.edu/Education/BHfaq.html  “What if the Sun *did* become a black hole for some reason? The Earth and the other planets would not get sucked into the black hole; they would keep on orbiting in exactly the same paths they follow right now….a black hole’s gravity is no stronger than that of any other object of the same mass.”

My friend argued that since asteroids impact the Sun, they would also impact the black hole.  This would eventually increase the mass, increase the gravitational pull on the Earth, and place the Earth on a decaying orbit.

I have since read a little about Hawking radiation, and that black holes evaporate.  I now wonder if the black hole that was our Sun would evaporate, losing gravitational effects on the Earth, and the Earth would end up drifting away from where our Sun used to be.

Here’s my answer:

First, let me say that all of the effects you mention are very small. They would alter the Earth’s orbit a little bit over very long times. When I wrote what I did about the Earth’s orbit, I wasn’t considering such tiny effects. But they’re fun to think about, so here goes.

It is true that, if the mass of the Sun (or black hole, whichever is at the center of the Solar System) goes up, then the Earth’s orbit will be affected. Specifically, it would move to a smaller orbit. And of course the reverse is true if the mass goes down.

First, let’s talk about what’s happening right now, and then consider what happens if the Sun turned into a black hole. Right now, things do crash into the Sun from time to time, increasing the mass of the Sun. On the other hand, there’s constant evaporation from the Sun’s atmosphere (as well as energy escaping in the form of sunlight, which translates into a mass loss via E = mc²). I’m pretty sure that the net effect right now is that the Sun is gradually losing mass. Taken in isolation, this mass change would cause the Earth to drift gradually into a larger orbit.
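Here's a rough estimate of that net mass loss, using standard values for the solar luminosity and mass; the solar-wind rate is a ballpark figure I'm assuming, not a precise one:

```python
# Rough estimate of the Sun's current mass-loss rate.
# Radiation channel: dm/dt = L / c^2.  The solar-wind number is a
# ballpark assumption, not a precise measurement.
L_sun = 3.828e26            # solar luminosity, W
c = 2.998e8                 # speed of light, m/s
M_sun = 1.989e30            # solar mass, kg
seconds_per_year = 3.156e7

dm_dt_light = L_sun / c**2  # ~4.3e9 kg/s carried off as sunlight
dm_dt_wind = 1.5e9          # kg/s, rough solar-wind estimate (assumed)

frac_per_year = (dm_dt_light + dm_dt_wind) * seconds_per_year / M_sun
print(dm_dt_light)          # ~4.3e9 kg/s
print(frac_per_year)        # ~9e-14 of the Sun's mass per year
```

Both channels together remove only about 10⁻¹³ of the Sun's mass per year; for a slow (adiabatic) mass change the orbital radius scales roughly as 1/M, so the fractional change in Earth's orbit is of the same tiny order.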

That phrase “Taken in isolation” is important. There are other things that affect Earth’s orbit much more than this tiny mass loss rate. The main one is gravitational tugs from other planets, especially Jupiter. I guess it must be true that the gradual mass loss of the Sun gradually makes all of the planets drift further out, although the details might be complicated.

There’s also the fact that the Earth is being bombarded by meteors. Those presumably slow the Earth down in its orbit. Taken in isolation, that effect would make the Earth spiral in towards the Sun.

I’ve never tried to work out the size of any of these effects. A lot is known about the effects of other planets’ gravitation on our orbit (the buzzword for this being Milankovitch cycles). The other effects are much smaller.

Now, what would happen if the Sun became a black hole? Things like meteors would still get absorbed from time to time, but much less often than they do now. That may go against intuition, because we think of black holes as really good at sucking things in, but in fact the black hole has the same gravitational pull as the Sun on objects far away, and it’s a much smaller target, so fewer things actually hit it. So the rate of mass increase due to stuff falling in will be less than it is now. On the other hand, stuff won’t be evaporating nearly as fast as it does now. (There would be Hawking radiation, but that’s incredibly small, much less than the rate at which atoms are boiling off the Sun now.) So the net effect would be that the black hole would gradually gain mass, whereas the Sun is gradually losing it. The net result would be that the Earth would gradually get closer to the black hole.
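To put numbers on "much smaller target," here's a sketch comparing geometric cross-sections and evaluating the Hawking power of a solar-mass black hole, using the standard textbook formulas and constant values (gravitational focusing would boost both capture rates, but it doesn't change the basic picture):

```python
# How much smaller a target a solar-mass black hole is than the Sun,
# and how feeble its Hawking radiation is.  Geometric cross-sections only.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
M = 1.989e30         # solar mass, kg
R_sun = 6.957e8      # solar radius, m

r_s = 2 * G * M / c**2                    # Schwarzschild radius, ~3 km
area_ratio = (R_sun / r_s)**2             # how many times smaller the target is
P_hawking = hbar * c**6 / (15360 * math.pi * G**2 * M**2)

print(r_s)           # ~2950 m
print(area_ratio)    # ~5.5e10
print(P_hawking)     # ~9e-29 W
```

A solar-mass black hole presents a target tens of billions of times smaller in area than the Sun, and its Hawking output of roughly 10⁻²⁸ W is absurdly small next to the ~4 × 10²⁶ W the Sun radiates today.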

But again, the key word is “gradually”: these are really really tiny effects. I’d bet that they’d be too small to have any noticeable effect even over the age of the Universe.

xkcd

I’ve linked to the webcomic xkcd.com in passing before, but it’s so full of science-geeky awesomeness that I thought I’d advertise it more explicitly, in case there’s anyone out there who would like it but doesn’t know about it.  Here are a couple of examples; if you like them, go there and waste an hour or two clicking “Random.”

Oh, and be sure to mouse over each comic.  There’s some extra text for each one, and often it’s the best part.  (The ones embedded in this post don’t have the extra bits; go to the web site to see them.)

[xkcd comic: brontosaurus.png]

[xkcd comic: beliefs.jpg]

And the geekiest for last:

[xkcd comic: kepler.jpg]

Will we find extraterrestrial life?

My friend Tim asked me this question:

What do you think are the chances that we’ll detect (not necessarily physically encounter, but detect) life on another planet by the end of the century?

I think the odds are quite good, actually.

First, here’s something that I’m pretty confident is true: Within a few decades, we will have figured out how to measure the chemical composition of the atmospheres of other planets.  We’re moving fast in that direction right now, and while it’s a hard technical problem, I don’t see any show-stopping reasons why we can’t do it.  Basically, you have to have telescopes with sharp enough resolution to see the planet separately from its star, and then you just do spectroscopy.

I’ll be very surprised if we haven’t done this to hundreds and hundreds of planets within the next few decades.  We’ll know what molecules are in the atmospheres of those planets.  That means that we’ll detect life if a couple of conditions are satisfied:

  1. Extraterrestrial life is not very rare.
  2. Extraterrestrial life leaves identifiable chemical signatures in the atmospheres of host planets.

That’s as much as I can say with confidence.  From here on it’s guesswork.  Regarding #2, one important question is what would count as an identifiable signature.  People will naturally look first for the chemicals that we find in our own atmosphere but that would not be there if there weren’t life.  I think that plain old oxygen (O₂) is one of the main ones here: the oxygen would all be bound up in other forms such as CO₂ if it weren’t constantly replenished by biological processes.  I have no idea whether extraterrestrial life will be based on chemistry similar to ours, so maybe O₂ won’t be the signature we see.  But it does seem likely to me that, if a planet has life on it, there’ll be molecules in its atmosphere that you wouldn’t expect to see on a dead planet, and once we get good at doing spectroscopy, we’ll find them if they’re there.  So I’m not too worried about #2.

#1 is the one nobody knows about.  Is extraterrestrial life found on lots of planets, or is it a one-in-a-trillion shot?  Here, you just have to make your best guess.  Personally, I don’t think it’s likely to be incredibly rare, so once we’re mass-producing spectroscopy of other planets, we’ve got a good shot at finding it.  But that claim is based on no data — it’s a Bayesian prior probability — so feel free to disbelieve me.

I think this life is far more likely to be simple microbes than big intelligent things.  I doubt we’ll be hearing messages from ET any time soon.  That doesn’t mean that I think searches for intelligent life like SETI are a bad idea, though: they’re quite cheap compared to lots of scientific research, and the payoff if they succeed is so huge that I think it’s worth throwing a little bit of resources their way, despite the long odds.

Bragging

My colleague Jerry Gilfoyle and I were just awarded an NSF grant to buy a new computing cluster.  In the past, my students and I have mostly worked on problems that could be attacked with ordinary desktop computers.  This grant means that we’ll be able to go after more computationally intensive problems.  It also means I’ll have to learn about supercomputing techniques.  Fortunately, Jerry’s very experienced at this.

This has been a good funding year for me: I submitted three NSF proposals, and all three were funded. That’s at least partly due to the federal stimulus bill: only one of the three is officially stimulus money, but no doubt all the stimulus money washing around freed up more non-stimulus money for other grants.

Correction: Actually, two out of the three, including the computing cluster, are stimulus funds.  I’m nothing if not shovel-ready.

Whither human space flight?

The Augustine panel, the blue-ribbon (whatever that means) commission charged with assessing options for the future of human space exploration in the US, released the executive summary of its report today.  (The full report is coming soon.) So what did it say?  Here’s what the first two hits on Google News say.

First sentences of the Washington Post piece:

Don’t try to put astronauts on Mars yet — too hard, too costly. Go to the moon — maybe.

Headline of the Reuters piece:

NASA strategy proposal aims for Mars over moon.

So who’s right?  The Post is closer than Reuters. The summary lays out three possible strategies, which they call “Mars first,” “Moon first,” and “flexible path.”  The last one involves starting with “inner system locations such as lunar orbit, Lagrange points, near-Earth objects and the moons of Mars.” After describing all three options, “the Committee finds that both Moon First and Flexible Path are viable exploration strategies.” (I’ve argued before that the Flexible Path combines all the disadvantages of human spaceflight with none of the advantages.)

But the real takeaway from the summary, it seems to me, is not these three paths; it’s this:

Human exploration beyond low-Earth orbit is not viable under the FY 2010 budget guideline.

That is, we either need to spend substantially more money or decide that human space exploration isn’t a priority at the moment.  This sounds right to me.

A few miscellaneous observations:

1. Early on, the summary says something that should be obvious but often seems not to be:

Planning for a human spaceflight program should begin with a choice about its goals — rather than a choice of possible destinations.

I’m really glad to hear them say this: I’m sick of people saying “we need to go to Mars” without saying why.  I don’t think that the rest of the summary always lives up to this laudable goal, but maybe I’m not being fair to the panel: the full report may do better.

2. The summary recommends keeping the international space station going for another five years (to 2020 instead of 2015 as currently scheduled).

It seems unwise to de-orbit the Station after 25 years of assembly and only five years of operational life.

This is true, if the Station is actually useful, a claim that has not been demonstrated to my satisfaction.  Otherwise, the 25 years is a sunk cost, and extending the operational life is throwing good money after bad.

3. The panel has good things to say about the idea of contracting out some of our space flight to the commercial sector:

As we move from the complex, reusable Shuttle back to a simpler, smaller capsule, it is an appropriate time to consider turning this transport service over to the commercial sector.

In the 1920s, the federal government awarded a series of guaranteed contracts for carrying airmail, stimulating the growth of the airline industry.  The Committee concludes that an architecture for exploration employing a similar policy of guaranteed contracts has the potential to stimulate a vigorous and competitive commercial space industry.

This would have the benefit of focusing NASA on a more challenging role, permitting it to concentrate its efforts where its inherent capability resides: for example, developing cutting-edge technologies and concepts, and defining programs and overseeing the development and operation of exploration systems, particularly those beyond low-Earth orbit.

I think this approach is well worth considering.

Want to be a lawyer? Study physics.

Well, maybe.  I learned via Sean Carroll about a study showing that physics and math majors get better LSAT scores than people who study any other subject.  The top 5 disciplines, with mean LSAT scores:

  1. Physics/Math (160.0)
  2. Economics (157.4)
  3. Philosophy/Theology (157.4)
  4. International Relations (156.5)
  5. Engineering (156.2)

In some cases, disciplines with smaller numbers of students were lumped together, so Physics/Math were treated as one category.  Pre-law ranked 28th out of 29, with an average score of 148.3.

It’s tempting for us physicists to use this as propaganda to convince people that studying physics is good preparation for a variety of careers, including law.  Although I suspect that that proposition is true, this study probably doesn’t provide strong evidence for it, for the usual correlation-is-not-causation reasons.  Students may self-select into physics and math based on qualities that correlate with doing well on the LSAT, but that doesn’t mean that a given student would do better on the LSAT if she studied physics as opposed to something else.

Still, it’s always nice to have bragging rights over other disciplines.

The fact that pre-law ranks near the bottom sounds embarrassing, but I suspect there’s not too much significance to it.  Pre-law is a funny category: at many institutions (including, I think, every one I’ve ever taught at or attended), pre-law isn’t a major: a pre-law student majors in something else.  So I’ll speculate that the students counted as pre-law in this study are a non-representative sample: they come from a different (and plausibly biased in various ways) subset of universities than the others.

One last thing.  Sean observes

The obvious explanation: physics and math students get to be really good at taking tests like the LSAT. I don't imagine this correlates very strongly with "being a good lawyer." Then again, I don't think that good scores on the physics GRE correlate very strongly with "being a good physicist," over and above a certain useful aptitude at being quick-minded.

Regarding the physics GRE, I seem to recall some actual data showing that scores correlate extremely poorly with a variety of measures of success in and after graduate school, but I can’t seem to find it, so maybe I hallucinated it.  If anyone remembers what I’m thinking of and can point me to a citation, I’d appreciate it.