Paris by bike

One nice thing about academic life is that you get to go on sabbatical from time to time. The trick is to play your cards right and make sure you have collaborators in nice places when the time comes.  I managed to do this pretty well, which is why I’m spending almost three months in Paris.

Over the past couple of weeks, I’ve been getting around the city mostly by bike.  Paris has a clever bike-rental system called Vélib.  You pay a small fee (5 euros for a week or 30 euros for a year) that gives you access to the bikes in the system.  There are an incredibly large number of pickup and dropoff sites: in most of the city, you’re never more than a couple of blocks from one. You can take out a bike from any of the sites and drop it off at any of the others.  If you have it out for less than half an hour (which gets you pretty far), there’s no additional charge beyond your subscription fee.

There are Vélib stations right outside my apartment and my office:

[Photo: Vélib station outside my apartment]

[Photo: Vélib station (img_0451.JPG)]

It’s a great idea.  I hope more cities adopt things like this.  It’s way more fun than getting around on the Metro (and I say this as someone who kind of likes riding on the Metro).

Americans I’ve told about this ask me whether I’m scared of biking in the Paris traffic.  The answer is a definite no.  I haven’t done much urban biking since the mid-90’s, but I used to do it a lot then, when I was a grad student in Berkeley.  I don’t find biking in Paris to be significantly more dangerous or stressful than biking in Berkeley was.  Sure, you’ve got to pay attention, but in a lot of ways it’s a very bike-friendly city:

  1. There are lots of bike lanes.
  2. At least for the routes I’ve been traveling, I can arrange things so that, most of the time, I’m not biking past parallel-parked cars.  That’s really important: I think that car doors opening suddenly in front of you has got to be the biggest hazard of urban biking.  Most of the bike accidents I knew of when I lived in Berkeley were in this category.
  3. There are tons of other cyclists, which means that drivers are more aware of the existence of bikes than in US cities.  A colleague of mine says that this is largely due to the Vélib program: when it first started, drivers weren’t as used to bikers as they are now.

So if you’re spending time in Paris, don’t be scared — try it out!  (Litigiousness paranoia disclaimer: You ride at your own risk.  If you take my advice and get into an accident, it’s not my fault — don’t sue me!)

There are certainly a few problems with the Vélib system:

  1. It’s actually not easy to do it as an American.  To subscribe to the system, you either have to have a French bank account or a European-style credit card with a chip in it.  Rumor has it that American Express cards work, but I can’t confirm this.  I eventually had to get a colleague who lives here to launder the transaction.
  2. The pickup and dropoff spots are all automated, of course.  They have a fixed number of spots, and if one is full, you can’t drop off your bike there.  (And of course, if it’s empty, you can’t pick up a bike there.)  But the kiosk at the station will show you a map of nearby stations that do have space / bikes.  You usually don’t have to go far. And if you’re getting near the end of your half-hour, and you come to a station that’s full, it’ll give you free extra time to get to another station.
  3. As you can imagine, maintenance of the bikes is tricky.  Sometimes, you’ll get one out that has a problem (you can see a clear example in the top picture above). Savvy riders check out the bikes before taking one out, but you can always miss something.  For instance, the one I took to work this morning won’t stay in third gear unless you hang onto the gearshift — I couldn’t have found that out before selecting it.  If there is a problem, you can just check it back in and get another.  One thing the system seems to be missing: as far as I can tell, there’s no way for the user to flag a bike as having some sort of maintenance problem.  I’d think they’d want to implement that.

Martin Gardner et al.

John Tierney writes about Martin Gardner, the great mathematical-puzzle writer.  I went through a huge Martin Gardner phase in my misspent youth, as I suspect did many other scientists and mathematicians.

Gardner’s best known for his Mathematical Games column in Scientific American.  When he stopped writing it in the early 1980s, the slot was taken over by Douglas Hofstadter.  I loved Hofstadter’s Gödel, Escher, Bach (again, probably lots of scientists, especially those about my age, would say the same), but I don’t remember liking his column at all.

As long as I’m free-associating here, there’s one more author of puzzle books that I remember loving when I was a kid: Raymond Smullyan.  He’s an actual academic mathematician (unlike Gardner), but I know him only as the writer of logic puzzles.  See, for example, the Hardest Logic Puzzle Ever.  For those who went through a Smullyanesque logic puzzle phase and remember some of the tricks, this puzzle is hard but doable.  If you didn’t, then yes, it’s probably extremely hard.

Planck launch pictures

Ken Ganga, a member of the Planck satellite collaboration, has some nice pictures of the launch on his blog.  (Not exactly breaking news, but I just found out about these pictures.)  There’s also a story about a potentially fatal problem with the satellite that was caught just barely before launch.

By the way, in addition to his Planck blog, Ken has a personal blog, mostly about funny things he’s found while living as an American expatriate in Paris.

Is the LHC doomed by signals from the future?

I guess this piece in the NY Times has been getting some attention lately.  It’s about a crazy theory by Nelson and Ninomiya (NN for short) in which the laws of physics don’t “want” the Higgs boson to be created.  According to this theory, states of the Universe in which lots of Higgses are created are automatically disfavored: if there are multiple different ways something can turn out, and one involves creating Higgses, then it’ll turn out some other way.  Since the Large Hadron Collider is going to attempt to find the Higgs, this theory predicts that things will happen to it so that it fails to do so.

Sean Carroll has a nice exegesis of this.  I urge you to go read it if you’re curious about this business.  There’s a bit in the middle that explains the theory in a bit more detail than you might like (unless of course you like that sort of thing).  If you find yourself getting bogged down when he talks about “imaginary action” and the like, just skip ahead a few paragraphs to about here:

So this model makes a strong prediction: we're not going to be producing any Higgs bosons. Not because the ordinary dynamical equations of physics prevent it (e.g., because the Higgs is just too massive), but because the specific trajectory on which the universe finds itself is one in which no Higgses are made.

That, of course, runs into the problem that we have every intention of making Higgs bosons, for example at the LHC. Aha, say NN, but notice that we haven't yet! The Superconducting Supercollider, which could have found the Higgs long ago, was canceled by Congress. And in their December 2007 paper — before the LHC tried to turn on — they very explicitly say that a "natural" accident will come along and break the LHC if we try to turn it on. Well, we know how that turned out.

I think Sean’s overall point of view is pretty much right:

At the end of the day: this theory is crazy. There's no real reason to believe in an imaginary component to the action with dramatic apparently-nonlocal effects, and even if there were, the specific choice of action contemplated by NN seems rather contrived. But I'm happy to argue that it's the good kind of crazy. The authors start with a speculative but well-defined idea, and carry it through to its logical conclusions. That's what scientists are supposed to do. I think that the Bayesian prior probability on their model being right is less than one in a million, so I'm not going to take its predictions very seriously. But the process by which they work those predictions out has been perfectly scientific.

Because I’m obsessed with Bayesian probabilities, I want to pick up a bit on that aspect of things.  NN propose an experiment to test their theory.  We take a deck of a million cards, one of which says “Don’t turn on the LHC.”  We pick a card at random from the deck, and if we get that one card, we junk the LHC.  Otherwise, we go ahead and search for the Higgs as planned.  According to NN, if their theory is right, that card will come up because the Universe will want to “protect itself” from Higgses.

I don’t think I buy this, though.   I don’t think there’s any circumstance in which this proposed experiment will provide a good test of NN’s theory.  To see why, we have to dig into the probabilities a bit.

Suppose that the Bayesian prior probability of NN’s theory being true (that is, our estimate of the probability before doing any tests) is p(NN).  As Sean notes, p(NN) is a small number.  Also, let p(SE) be the probability that Something Else (a huge fire, an earthquake, whatever) destroys the LHC before it finds the Higgs.  Finally, let p(C) be the probability that we draw the bad card when we try the experiment.  We get to choose p(C), of course, simply by choosing the number of cards in the deck.  So how small should we make it?  There are two constraints:

  1. We have to choose  p(C) to be larger than p(SE).  Otherwise, presumably, even if NN’s theory is true, the Universe is likely to save itself from the Higgs simply by causing the fire, so the card experiment is unlikely to tell us anything.
  2. We have to choose p(C) to be smaller than p(NN).  The idea here is that if p(C) is too large, then our level of surprise when we pick that one card isn’t great enough to overcome our initial skepticism.  That is, we still wouldn’t believe NN’s theory even after picking the card.  Intuitively, I hope it makes sense that there must be such a constraint — if we did the experiment with 10 cards, it wouldn’t convince anyone!  The fact that the constraint is that p(C)<p(NN) comes from a little calculation using Bayes’s theorem.  Pester me if you want details.

In order for it to be possible to design an experiment that meets both of these constraints, we need p(SE)<p(NN).  That is, we need to believe, right now, that NN’s crazy theory is more likely than the union of all of the possible things that could go catastrophically wrong at the LHC. Personally, I think that’s extremely far from being the case, which means that NN’s proposed test of their theory is impossible even in principle.

(Constraint 1 already makes the experiment impossible in practice: it says that we have to take a risk with the LHC that is greater than all the other risks.  Good luck getting the hundreds of people whose careers are riding on the LHC to assume such a risk.)
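For anyone who does want details, the little Bayes’s-theorem calculation behind constraint 2 fits in a few lines of Python.  This is my own illustration, not anything from NN’s paper; it assumes that if the theory is true, the Universe forces the bad card with probability essentially one, while if it’s false the card comes up by plain chance with probability p(C):

```python
def posterior_nn(p_nn, p_c):
    """Posterior probability of NN's theory, given that we drew the bad card.

    p_nn: prior probability that the theory is true
    p_c:  chance of drawing the card if the theory is false (1 / deck size)
    If the theory is true, the card comes up with probability ~1.
    """
    return p_nn * 1.0 / (p_nn * 1.0 + (1 - p_nn) * p_c)

# Prior of one in a million, deck of a million cards: drawing the card
# leaves us right on the fence (posterior ~0.5).
print(posterior_nn(1e-6, 1e-6))

# Ten-card deck: even after drawing the card, the posterior is ~1e-5,
# so nobody should be convinced.
print(posterior_nn(1e-6, 0.1))
```

You can see the constraint directly: only when p(C) drops below the prior p(NN) does drawing the card push the posterior above even odds.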

New paper out on the arXiv

My colleagues and I just submitted a paper about some of the technical issues associated with QUBIC, the new bolometric interferometer we’re trying to build for measuring the polarization of the microwave background.  In case you care, QUBIC stands for Q/U Bolometric Interferometer for Cosmology (and Q and U are the symbols for the two Stokes parameters that characterize linear polarization).  QUBIC is the merger of MBI (from the US)  and BRAIN (from Europe).  It’s what I’m here in Paris working on.

The paper addresses one concern that many people, both within the collaboration and outside, have worried about.  Traditionally, interferometers are narrow-band instruments — that is, they look at radiation within just a narrow range of wavelengths.  There’s a good reason for that: the whole idea of interferometry is to produce interference patterns out of the waves, and interference patterns get washed out when you have a wide range of wavelengths.  The instrument we’re proposing to build is a broadband interferometer, so there is naturally some worry about how or whether it’ll work.  We’d previously made some general arguments quantifying how much of a problem this would be.  This paper goes beyond those general arguments to lay out a detailed calculation showing that bandwidth issues don’t degrade the performance of the instrument too badly.

Even more than a lot of academic papers, this one is really aimed at specialists.  If you’re not interested in the details of CMB interferometry, it’s not for you.

If you loved writing and defending your Ph.D. dissertation,

then get an academic job in France.  They make you write and defend another thesis, after the Ph.D.  It’s called the habilitation à diriger des recherches (HDR), which I guess means “qualification to direct research.” As I understand it (i.e., not very well), you need to get it before you’re allowed to supervise Ph.D. students.  My colleague here in Paris defended his today.  He’s actually supervised Ph.D. students before, so it must be possible to get around that requirement, but I guess you need this certification to climb the academic ladder.

At first this sounded kind of cruel to me, but actually, I’d gladly have signed up to write another dissertation rather than go through the tenure process at U.R.

They Might Be Giants is interfering with my teaching

From time to time in the past, I’ve given my intro astronomy class the following assignment: Listen to Why Does the Sun Shine, by They Might Be Giants, and critique it for accuracy.  The answer is that it’s mostly very accurate, but there are a couple of things it gets wrong.

I’ll have to be careful assigning this in the future, though, because TMBG have written a followup song, Why Does the Sun Really Shine? (The Sun is a Miasma of Incandescent Plasma) that gives away part of the answer.

I learned about this via the radio show/podcast Radio Lab.  One of their recent podcasts was all about TMBG and their new album of science songs for kids. (The part about the Sun starts at about 12:15, but it won’t kill you to listen to the whole thing.)

(In case any of my future astronomy students are reading this, the correction song reveals only one of the two things wrong with the original song; the Radio Lab interview reveals the other.)

Apologies to Steven Morris

My little piece on evolution and the second law of thermodynamics appeared in the most recent issue of the American Journal of Physics.  (Non-paywall version here.)  I wasn’t going to note that on the blog, since I’ve already written plenty about it before, but there’s one citation I wanted to include that didn’t make it in, so I figured I’d at least point it out here.

After the article had been accepted, Steven Morris pointed me to a piece he wrote back in 2005 for Reports of the National Center for Science Education that, like mine, quantitatively compares the entropy increase supplied by sunlight with the decrease required for evolution.  I was going to add a mention of this to my article when I got the proofs, but apparently AJP doesn’t do proofs for short notes like this, so I missed my chance. I figured I’d at least mention Morris’s piece here as a tiny mea culpa. 

To atone a bit further, I’m going to go send some money off to the NCSE.  This is an organization that fights for the teaching of evolution in US schools.  I used to give them money but haven’t for a while.  You should support them too.

The cost of SETI

I think that things like SETI (the search for extraterrestrial intelligence) are extremely unlikely to find a signal: even if intelligent life is out there, it’s not at all clear that such beings would spend much of their time communicating with each other by sending radio signals that leak off into space at detectable levels.  Even if they do that for a while, they’ll probably quickly learn ways of communicating that are less wasteful and harder to eavesdrop on.  In other words, whatever the other numbers in the Drake equation are like, L is probably quite small.

Still, I’ve generally had positive, warm fuzzy feelings about SETI.  Even if the odds are terrible, I figured, the payoff is huge, and the cost is low, so why not go ahead?  The first part is certainly true: if SETI saw a signal, it would be about the most important discovery ever made.  But my friend Allen Downey (CS professor at Olin College) recently gave me a convincing argument that the costs are higher than I’d realized, and I think I’ve changed my mind and become anti-SETI as a result.

Here’s my summary of Allen’s argument.

A key part of SETI  is combing through vast amounts of data from radio telescopes, looking for signals that look like those of extraterrestrial intelligence.  This is a big computational project, and the SETI people have adopted a clever way to achieve it: they farm it out to huge numbers of supporters, who run the computations on their own computers during times when those computers would otherwise be idle.

The people who do this, of course, are paying a cost: they’re giving away free CPU cycles on their computers.  Aside from any other costs, this costs them money, because a computer that’s actually computing uses more power than one that’s idle.  I think that a decent estimate of that power difference is about 40 watts.  (You can find a bunch of estimates out there for this quantity.  There’s some variation, but this seems to be about right.)  Say a typical user has SETI@home running on one PC about 2/3 of the time for a month.  How much does it cost? To find out, multiply 40 watts times 20 days and convert the result into kilowatt-hours.  These days, 1 kWh costs about 12 cents, so multiply the result by $0.12.

If that’s starting to sound like work, it’s not.  Just ask Google.  (In case you didn’t know, Google’s also a calculator, and it knows a ton of unit conversions.)  The answer is that such a user is spending about $2.30 a month.

I’d bet that the typical SETI@home volunteer doesn’t know that it’s costing them that much, but that if they did they’d probably think it sounded like a reasonable cost.

So far, I probably haven’t convinced you that SETI costs too much.  But now let’s think globally.  In total, the project has used up 2 million years of computing time.  If you try the same calculation to get a total cost, you get $84 million. I don’t know about you, but to me, that’s real money.  When you think about the things that could be done with $84 million, it’s hard to see how this incredible longshot is justified, in my opinion.
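If you’d rather not trust Google, here’s the same arithmetic spelled out as a quick Python sketch.  The 40-watt figure and 12 cents/kWh are the rough estimates from above, so the outputs are order-of-magnitude at best:

```python
WATTS = 40.0   # extra draw of a computing vs. idle PC (rough estimate)
PRICE = 0.12   # dollars per kilowatt-hour (rough US average)

# Per-user cost: one PC crunching about 2/3 of the time for a month,
# i.e. roughly 20 full days.
hours_per_month = 20 * 24
monthly_cost = WATTS * hours_per_month / 1000 * PRICE  # watts -> kWh -> $
print(f"${monthly_cost:.2f} per user per month")       # about $2.30

# Project-wide cost: ~2 million years of total computing time.
hours_total = 2e6 * 365.25 * 24
total_cost = WATTS * hours_total / 1000 * PRICE
print(f"${total_cost / 1e6:.0f} million total")        # about $84 million
```

The per-user number sounds harmless; it’s only when you multiply by the whole volunteer pool that the hidden cost becomes real money.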

A few random notes:

  1. Nobody’s actually paying this money directly, so it’s not noticeable. But just because the costs are hidden, that doesn’t mean they’re not real.
  2. In fact, the cost is probably underestimated in a bunch of ways.  I was assuming that the computers would all be turned on anyway, and just considering the difference in power due to the extra computational load.  If people are leaving their computers on for SETI@home when they would otherwise be turning them off, then the cost is greater.  Also, the figure of 12 cents/kWh is the direct cost to the consumer, but there are externalities (greenhouse gas emissions, pollution, geopolitical problems due to resource competition) that that market price doesn’t include.
  3. Instead of considering the cost savings, you could consider the other good things that people could be doing with all those CPU cycles.
  4. Let’s come back for a minute to my original point about L in the Drake equation.  My reason for thinking it was small was simply that any communication method that sends radio power into space is wasteful.  A technologically advanced species will learn how to beam its communications directly to the intended recipient.  Allen pointed out a different reason, which I’d never thought of.  To prevent eavesdropping, you want to encrypt your communications.  The better a method of encryption is, the more the output looks just like noise.  If an advanced society is really good at encryption, we won’t notice its signals even if they’re out there, because they’ll look like noise.  I’m not sure I’m convinced by this, but it’s an interesting point.

If the Sun turned into a black hole

Some time back in the ’90s I wrote a document explaining some things about black holes.  To my amazement, people still read it, and they occasionally send me questions as a result.  I’m happy to answer these when I can, and as long as I’m answering them anyway, I might as well post them here.

The latest is from Chris Warring:

My friend and I are having a debate over the question “If the Sun turned into a black hole, what would happen to the Earth’s orbit?”

I quoted from your article http://cosmology.berkeley.edu/Education/BHfaq.html  “What if the Sun *did* become a black hole for some reason? The Earth and the other planets would not get sucked into the black hole; they would keep on orbiting in exactly the same paths they follow right now….a black hole’s gravity is no stronger than that of any other object of the same mass.”

My friend argued that since asteroids impact the Sun, they would also impact the black hole.  This would eventually increase the mass, increase the gravitational pull on the Earth, and place the Earth on a decaying orbit.

I have since read a little on Hawking radiation, and that black holes evaporate.  I now wonder if the black hole that was our Sun would evaporate, losing gravitational effects on the Earth, and the Earth would end up drifting away from where our Sun used to be.

Here’s my answer:

First, let me say that all of the effects you mention are very small. They would alter the Earth’s orbit a little bit over very long times. When I wrote what I did about the Earth’s orbit, I wasn’t considering such tiny effects. But they’re fun to think about, so here goes.

It is true that, if the mass of the Sun (or black hole, whichever is at the center of the Solar System) goes up, then the Earth’s orbit will be affected. Specifically, it would move to a smaller orbit. And of course the reverse is true if the mass goes down.

First, let’s talk about what’s happening right now, and then consider what happens if the Sun turned into a black hole. Right now, things do crash into the Sun from time to time, increasing the mass of the Sun. On the other hand, there’s constant evaporation from the Sun’s atmosphere (as well as energy escaping in the form of sunlight, which translates into a mass loss via E = mc²). I’m pretty sure that the net effect right now is that the Sun is gradually losing mass. Taken in isolation, this mass change would cause the Earth to drift gradually into a larger orbit.
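Just to get a feel for the size of that drift, here’s a back-of-the-envelope estimate in Python.  For a slowly shrinking central mass, the product of orbit size and solar mass stays roughly constant, so the fractional growth of the orbit matches the fractional mass loss.  The mass-loss numbers below are my own rough order-of-magnitude figures (sunlight plus solar wind), not anything from the original answer:

```python
M_SUN = 2.0e30   # solar mass, kg
AU = 1.5e11      # Earth's orbit radius, m

# Rough mass-loss rate: radiation (~4e9 kg/s via E = mc^2)
# plus solar wind (~2e9 kg/s), order of magnitude only.
dM_dt = 6.0e9                      # kg/s
seconds_per_year = 3.156e7

frac_per_year = dM_dt * seconds_per_year / M_SUN   # fractional mass loss/yr
drift = AU * frac_per_year                         # orbit growth, m/yr
print(f"{drift * 100:.1f} cm/yr")                  # roughly a centimeter a year
```

A centimeter or so per year out of 150 million kilometers: tiny, as promised, though not zero.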

That phrase “Taken in isolation” is important. There are other things that affect Earth’s orbit much more than this tiny mass loss rate. The main one is gravitational tugs from other planets, especially Jupiter. I guess it must be true that the gradual mass loss of the Sun gradually makes all of the planets drift further out, although the details might be complicated.

There’s also the fact that the Earth is being bombarded by meteors. Those presumably slow the Earth down in its orbit. Taken in isolation, that effect would make the Earth spiral in towards the Sun.

I’ve never tried to work out the size of any of these effects. A lot is known about the effects of other planets’ gravitation on our orbit (the buzzword for this being Milankovich cycles). The other effects are much smaller.

Now, what would happen if the Sun became a black hole? Things like meteors would still get absorbed from time to time, but much less often than they do now. That may go against intuition, because we think of black holes as really good at sucking things in, but in fact the black hole has the same gravitational pull as the Sun on objects far away, and it’s a much smaller target, so fewer things actually hit it. So the rate of mass increase due to stuff falling in would be less than it is now. On the other hand, stuff wouldn’t be evaporating nearly as fast as it does now. (There would be Hawking radiation, but that’s incredibly small, much less than the rate at which atoms are boiling off the Sun now.) So the net effect would certainly be that the black hole would gradually gain mass, whereas the Sun is gradually losing it. The net result would be that the Earth would gradually get closer to the black hole.

But again, the key word is “gradually”: these are really really tiny effects. I’d bet that they’d be too small to have any noticeable effect even over the age of the Universe.