Quantum silliness from the BBC

Since I regularly express my envy at the quality of science journalism in England, I thought I should mention that the BBC has a particularly silly piece about quantum mechanics on its web site.

The experiment described in the article looks perfectly nice, but the results definitely do not “bend” any rules of quantum physics, nor do they “pull back the veil” on quantum reality. The results found by the experiment are precisely what we “knew” they must be all along — that is, they’re exactly what is predicted by standard quantum mechanics. Of course, that doesn’t mean the experiment isn’t worth doing — it’s always nice to test our theories, and learning better techniques for manipulating quantum systems is a great thing to do. But there are precisely no philosophical implications about the nature of reality here. As John Baez said a long time ago:

Newspapers tend to report the results of these fundamentals-of-QM experiments as if they were shocking and inexplicable threats to known physics. It's gotten to the point where whenever I read the introduction to one of these articles, I close my eyes and predict what the reported results will be, and I'm always right. The results are always consistent with simply taking quantum mechanics at face value.

The new result fits perfectly into this category.

In the Dark

I haven’t posted much lately, so I thought I’d at least put in a plug for another blog I like: Peter Coles’s In the Dark. Peter Coles is a very well-respected cosmologist, with a lot of original and thought-provoking things to say on a bunch of subjects, including the Bayesian approach to probabilities (one of my obsessions), media coverage of science, physics puzzles, and a lot more. (I’ve just mentioned the science topics, but he writes about non-science stuff too.)

Science is all about replication of results, right?

Remember that paper published a while ago claiming evidence of precognition? I didn’t say anything about it, because I didn’t have anything much to say. Extraordinary claims require extraordinary evidence, Bayesian priors, and all that. You’ve heard it all before.

Here’s the thing. The right thing to do in this situation, as everyone knows, is for other scientists to try to replicate the result. Well, some did just that, tried to replicate the result, and couldn’t. It’s nice to see the system working, isn’t it? Well, it would be nice, except that the journal that published the original paper rejected their article, because they don’t publish replications of previous work. Ben Goldacre’s got the goods.

This is a structural problem with the way science is funded, disseminated, and rewarded. Even though everyone agrees that replication is essential in situations like these, it’s practically impossible to get a grant to merely replicate previous work, or to publish the results once you’ve done it. I don’t know what to do about that.

By the way, I’ve said this before, but let me say it again. In case you don’t know about Ben Goldacre, the great Guardian science writer and blogger, you should. He’s a national treasure. (Not my nation, unfortunately, but a national treasure nonetheless.)

Safe Zone

A couple of weeks ago, I finally got around to participating in one of the workshops for the University of Richmond’s Safe Zone program. For those who don’t know,

Safe Zone educates members of the University community about lesbian, gay, bisexual, and transgender (LGBT) issues to create a network of allies who, together with members of the LGBT community, work to create a community of safety and full inclusion for all its members.

I am now entitled to (and do!) display a sticker on my office door, letting people know that I’m “supportive and affirming of the LGBT community.” I urge UR faculty, staff, and students to do the same. Hard data are hard to come by, of course, but based on anecdotal information that seems credible to me, UR’s campus culture is not as comfortable a place for LGBT students as it could and should be. This is a little thing that each of us can do to show where we stand.

But I have to say that there’s one thing about this program that doesn’t feel quite right to me: the name “Safe Zone.” To be specific, the word “safe” seems to me to set the bar far too low.

It was clear from the workshop that the goal is to be active and committed advocates, whereas to me the word “safe” suggests merely promising to be harmless.

I don’t think this is a minor semantic quibble. I first encountered the term “Safe Zone” a number of years ago, when I was visiting another university and saw a few stickers on people’s office doors. My immediate conclusion was that that university must be a terrible place for the LGBT community, since it appeared that well over 90% of the offices on campus were “unsafe” for them.

I don’t think that the University of Richmond should be, inadvertently or otherwise, conveying a message that only a few places on campus are “safe” for LGBT community members. For one thing, I fervently hope and believe that that message is inaccurate. I know that there is a range of beliefs and attitudes about sexuality and gender issues, but at least in the parts of campus I know about (faculty and administrative offices, primarily), I think that a commitment to supporting LGBT people is by far the norm.

More importantly, we should make clear that “safety” (at the very least!) for LGBT people is expected of all community members. Any faculty or staff member who makes LGBT students feel unsafe is guilty of professional misconduct. Walking around campus and seeing that a small fraction of offices are “safe” conveys precisely the opposite message, namely that we as an institution find “unsafeness” to be an acceptable and even normal state of affairs.

I’d like to suggest a simple name change. Those who have made the Safe Zone pledge should be called something affirmative, such as “allies” or some similar term. That seems to me to be better in every way: it’s more accurate, it conveys a more positive and supportive message, and it avoids the pitfall of defining “unsafeness” as the norm.

Just to make 100% sure there’s no confusion about this, let me repeat that I strongly support the Safe Zone program.  I think it’s a great thing to do, and if you’re a colleague or student of mine, I urge you to take part! I just think that a simple name change would make the program stronger.

Particle and wave?

My friend Allen Downey (whose blog, Probably Overthinking It, has a bunch of good stuff on how to think about statistics and data) sent me a mini-rant a while back about the way people often write and talk about quantum physics. He asked me what I thought and suggested it’d be a good topic to write about here. I agree that it’s a good topic. I’ll give a bit of introduction, and then just quote Allen (with his permission).

The topic is the grandiose way in which people often play up the mysteriousness of quantum physics. For about as long as there’s been quantum physics, people have been saying pseudo-profound things about What It All Means. I tend to agree with Allen that the more woo-woo descriptions are annoying at best and misleading at worst. (I’ve grumbled about some before.)

Allen specifically refers to this quote from David Brooks:

A few of the physicists mention the concept of duality, the idea that it is possible to describe the same phenomenon truthfully from two different perspectives. The most famous duality in physics is the wave-particle duality. This one states that matter has both wave-like and particle-like properties. Stephon Alexander of Haverford says that these sorts of dualities are more common than you think, beyond, say the world of quantum physics.

Brooks is writing about the answers to the Edge’s 2011 Question, “What scientific concept would improve everybody’s cognitive toolkit?”  (This is not The Edge from U2; it’s the website edge.org, which annually poses a broad question like this to a bunch of scientists.)

With that context, here’s Allen:

Physicists who wax philosophic about wave-particle duality are a pet peeve of mine, especially when they say nonsensical things like “light is both a particle and a wave,” as if that were (a) true, (b) a useful way to describe the situation, and (c) somehow profound, mystical and inspiring.  But none of those are true.

It seems to me to be (a) clearer, (b) not profound, and (c) true to say instead that light is neither a (classical) particle nor a wave.  It’s something else.  If you model it as a particle, you will get things approximately right in some circumstances, and completely wrong in others.  If you model it as a wave, same thing, different circumstances.  And if you model it as a (modern) particle, you get good answers in almost all circumstances, but it seems likely that we will find circumstances where that model fails too (if we haven’t already).

So none of that is an example of “describ[ing] the same phenomenon truthfully from two different perspectives.”  It’s just plain old model selection.

This is quite right. An electron is not a particle (as that term is generally understood), nor is it a wave. It’s an excitation of a quantum field. Practically nobody has much intuition for what “an excitation of a quantum field” means, so it’s useful to have alternative descriptions that are more intuitive, albeit imperfect. Sometimes an excitation of a quantum field behaves like a particle, and sometimes it behaves like a wave.

By the way, when I say it’s useful to have such descriptions, I don’t just mean when talking to laypeople — although that’s part of what I mean. Actual physicists use these descriptions in our own minds all the time. That’s perfectly fine, as long as we know the limits of how far we can push them.

But to go from “electrons sometimes behave like particles, and sometimes like waves” to “an electron is both a particle and a wave” is to mistake the maps for the territory. Here’s an analogy. Sometimes we model the economy as a machine (“economic engines”, “driving the economy”, “tinkering”). Sometimes we model it as a living thing (“green shoots”). But nobody makes the mistake of thinking that the economy is some kind of cyborg.

Stephon Alexander, the physicist Brooks refers to, isn’t particularly guilty of this sin. He actually does a pretty good job explaining what he means by duality in his answer (scroll down on that page to get to him):

In physics one of the most beautiful yet underappreciated ideas is that of duality. A duality allows us to describe a physical phenomenon from two different perspectives; often a flash of creative insight is needed to find both. However the power of the duality goes beyond the apparent redundancy of description. After all, why do I need more than one way to describe the same thing? There are examples in physics where either description of the phenomena fails to capture its entirety. Properties of the system ‘beyond’ the individual descriptions ‘emerge’ …

Most of us know about the famous wave-particle duality in quantum mechanics, which allows the photon (and the electron) to attain their magical properties to explain all of the wonders of atomic physics and chemical bonding. The duality states that matter (such as the electron) has both wave-like and particle-like properties depending on the context. What’s weird is how quantum mechanics manifests the wave-particle duality. According to the traditional Copenhagen interpretation, the wave is a travelling oscillation of possibility that the electron can be realized somewhere as a particle.

There’s one major sin in this description: the word “magical.” Other than that, his description of what wave-particle duality actually means is just fine. One of the other respondents, Amanda Gefter, also writes about duality. She starts out OK, but then uses the notion as an egregiously silly metaphor for people disagreeing with each other. (Sadly, you can’t expect much from someone who works for New Scientist, which was once a good pop-science magazine but isn’t anymore.)

Baseball bats

It’s spring, when a young man’s fancy turns to baseball.

The NCAA (and high schools, I think) changed their standards for baseball bats this year, apparently in response to safety concerns about people (especially pitchers) being hit by fast-moving balls. The change took effect back in January and was decided on long before that, but I just heard about it.

Descriptions of the change are very confusing, at least to a naive physicist. An example:

So they adopted a standard called the Ball-Bat Coefficient of Restitution (BBCOR), which provides a more accurate measure of bats in lab tests than the old standard, the Ball Exit Speed Ratio (BESR). Rather than measure the ball’s speed off the bat, BBCOR testing measures the collision between the bat and the ball to see how lively the bat is.

That distinction is way too subtle for me! What does “how lively the bat is” mean, if it doesn’t mean how fast the ball leaves the bat?

To be more specific, the coefficient of restitution is, by definition, the ratio of the relative speed just after a collision to the relative speed just before it; equivalently, its square tells you what fraction of the kinetic energy of relative motion survives the collision. Having a standard that restricts the speed of the ball (following a collision under controlled circumstances) is precisely the same thing as having a standard that restricts the energy remaining after the collision (i.e., the coefficient of restitution).
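To see why the two standards pin down the same thing, here’s a minimal sketch. It is not the actual BBCOR test protocol (whose geometry and bookkeeping are more involved); it just treats the test as a one-dimensional collision between the ball and an effective bat mass at the impact point, with masses and pitch speed that are purely illustrative. For fixed test conditions, the exit speed is a monotonic function of the COR, so capping one caps the other.

```python
# Sketch: exit speed and coefficient of restitution pin down the same physics,
# assuming a 1D collision between a ball and an effective bat mass at rest.
# The masses and pitch speed below are illustrative, not the NCAA test values.

def exit_speed(cor, ball_mass=0.145, bat_eff_mass=1.0, pitch_speed=30.0):
    """Rebound speed of the ball (m/s) off an initially stationary bat.

    Momentum conservation plus the definition of the COR
    (separation speed = cor * approach speed) give this result.
    """
    return pitch_speed * (cor * bat_eff_mass - ball_mass) / (bat_eff_mass + ball_mass)


def cor_from_exit_speed(v_exit, ball_mass=0.145, bat_eff_mass=1.0, pitch_speed=30.0):
    """Invert the relation: the COR implied by a measured exit speed."""
    return (v_exit * (bat_eff_mass + ball_mass) / pitch_speed + ball_mass) / bat_eff_mass


for cor in (0.45, 0.50, 0.55):
    v = exit_speed(cor)
    print(f"COR {cor:.2f} -> exit speed {v:.1f} m/s -> recovered COR {cor_from_exit_speed(v):.2f}")
```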

Where’s the Physicist to the National League when you need him?

Of course, even if the two standards are essentially equivalent, changing from one to the other might be a way to tighten up the standard, without making it explicitly obvious that that’s what you’re doing. Maybe that’s all that’s going on here.

You can actually read the old and new standards, equations and all, if you feel like it. It turns out, as far as I can tell, that the headline change, from “exit speed” to “coefficient of restitution,” really is a bit of a red herring. The COR is a slightly cleaner thing to measure, because the old standard used a sliding scale for different bat sizes and the new one doesn’t, but fundamentally the two measure essentially the same thing.

The more important point is that they’ve also added an accelerated break-in procedure to the protocol. Apparently composite bats get springier with use (I guess as the materials get compressed). The old procedure tested them new; the new procedure breaks them in first, so that you can’t buy a standards-compliant bat and later end up with one that’s too springy for the standard.

Kepler stuff

About the Kepler mission’s announcement last week of tons of extrasolar planets:

1. A local Richmond TV station had me on the news to talk about the announcement. (The fact that they asked me primarily goes to show that astrophysicists are not exactly thick on the ground in Richmond.) I can’t stand to look at myself on video, but if you want to see it, go ahead.

2. Via Sean Carroll, some cool visualizations of the Kepler data, showing the number of planets by size, distance from star, and temperature.

3. Sean says

A back-of-the-envelope calculation implies that there might be a million or so "Earth-like" planets in our Milky Way galaxy.

I’d go much higher than that. Kepler looked at about 150,000 stars and found five Earth-like planets (meaning roughly Earth-sized and in the habitable zone where liquid water could exist).  If you imagine that they had 100% efficiency — that is, that they found all the Earth-like planets there are in the sample — then one in 30,000 stars would have an Earth-like planet. Multiply by 100 billion stars in the galaxy, and you get about 3 million Earths.

But here’s the thing: Kepler’s efficiency can’t be more than about 1% or so. The mission works by looking for eclipses, which occur when the planet passes directly in front of the star as seen from Earth. That means that it only has a chance of detecting a planet if the geometry is fortuitously aligned. The probability of such an alignment occurring depends on the size of the star and of the planet’s orbit (in fact, it’s just the ratio of the two). For the actual Earth and Sun, the probability works out to 1%. Many of Kepler’s planets are closer in and have higher probabilities, but at best the geometrical alignment can only occur a few percent of the time on average.

Even with the right geometry, they don’t have a 100% chance of finding a planet, of course. Once you fold in all sources of inefficiency, I’d be very surprised if they have a better than 1% chance of finding any given Earth-like planet. I wouldn’t be surprised if it’s more like 0.1%.
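Here’s the whole back-of-the-envelope calculation spelled out, using the numbers above. The detection efficiencies are my rough guesses, not measured values from the Kepler team.

```python
# Back-of-the-envelope extrapolation from the Kepler detections discussed above.
# The efficiencies are guesses (at most ~1%, maybe more like 0.1%), not data.

stars_surveyed = 150_000      # stars Kepler monitored
earthlike_found = 5           # roughly Earth-sized, in the habitable zone
stars_in_galaxy = 100e9       # order-of-magnitude count for the Milky Way

naive_fraction = earthlike_found / stars_surveyed   # assumes 100% detection efficiency
print(f"Naive estimate: ~{naive_fraction * stars_in_galaxy:,.0f} Earth-like planets")

# Fold in a guessed overall detection efficiency (transit geometry times
# everything else) by scaling the naive fraction up accordingly.
for efficiency in (0.01, 0.001):
    estimate = naive_fraction / efficiency * stars_in_galaxy
    print(f"With {efficiency:.1%} efficiency: ~{estimate:,.0f}")
```

With those guesses the count lands in the hundreds of millions to billions, which is where the estimate below comes from.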

Just to be clear, that’s not a criticism of Kepler. It’s just an acknowledgment that this is a hard task they’ve set themselves!

So my back-of-the-envelope estimate is hundreds of millions, if not billions, of Earth-like planets in the Galaxy.

3D movies and the human eye

On Roger Ebert’s blog, the acclaimed film editor Walter Murch explains what he sees as insurmountable problems with 3D movies:

The biggest problem with 3D, though, is the “convergence/focus” issue. A couple of the other issues — darkness and “smallness” — are at least theoretically solvable. But the deeper problem is that the audience must focus their eyes at the plane of the screen — say it is 80 feet away. This is constant no matter what.

But their eyes must converge at perhaps 10 feet away, then 60 feet, then 120 feet, and so on, depending on what the illusion is. So 3D films require us to focus at one distance and converge at another. And 600 million years of evolution has never presented this problem before. All living things with eyes have always focussed and converged at the same point.

That’s an interesting idea. It’s true that convergence and focus are two separate processes: when you look at something close to you, your eyes tilt in towards each other (convergence), and each eye shifts its focus. The latter process is known as accommodation and involves flexing muscles in the eye to change the power of the lens.

It’s certainly true that 3D movies involve one but not the other, and it’s possible in principle that this has an effect on how we perceive them, but I wouldn’t have thought it was a significant effect in practice. This is way outside of my expertise, but here’s how it seems to me anyway.

The eye is set up so that, when the muscles are completely relaxed, you’re focusing on points that are extremely far away — “at infinity”, as they usually say. The closer you want to focus, the more you have to strain the muscles. The amount of strain is very small for a wide range of distances, shooting up sharply as the distances get small. Here’s a graph I mocked up:

[Figure: accommodation.gif — accommodation strain as a function of distance to the object]

The horizontal axis is the distance to the object you’re looking at, and the vertical axis is the amount of strain the muscles have to provide — to be specific, it’s the fractional change in power of the lens, Δf/f. In case you’re wondering, I assumed the diameter of the eye is 25 mm and the person’s near point is 25 cm. The graph starts at the near point, so the highest point on the graph is the maximum accommodation the person’s eye is capable of.
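For what it’s worth, here’s roughly the calculation behind the graph. It assumes a simple thin-lens model of the eye (relaxed focal length equal to the eye diameter, retina a fixed distance behind the lens) and reads “strain” as the fractional change in lens power relative to the fully relaxed, focused-at-infinity state; the 25 mm diameter and 25 cm near point are the values just mentioned.

```python
# Accommodation "strain" vs. viewing distance, in a thin-lens model of the eye.
# Assumptions: the relaxed eye focuses at infinity (focal length = eye diameter),
# and the image distance (lens to retina) stays fixed as the lens changes power.

EYE_DIAMETER = 0.025   # meters (25 mm), as assumed for the graph
NEAR_POINT = 0.25      # meters (25 cm), closest distance the eye can focus

def fractional_power_change(object_distance):
    """Fractional change in lens power needed to focus at object_distance.

    Relaxed power is 1/EYE_DIAMETER; focusing at distance s requires power
    1/s + 1/EYE_DIAMETER, so the fractional change is just EYE_DIAMETER / s.
    """
    return EYE_DIAMETER / object_distance

for distance in (NEAR_POINT, 1.0, 3.0, 10.0, 25.0):   # meters
    print(f"{distance:5.2f} m -> {fractional_power_change(distance):.3%} change in power")
```

At the near point the change is about 10%; by 3 meters it has dropped below 1%, and it only gets smaller from there, which is why the curve is so flat over the distances that matter for a movie screen.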

The point is that the things you’re looking at in a 3D movie are pretty much always “far away”, as far as accommodation is concerned. The distances Murch mentions, from 10 feet (i.e., about 3 meters) on up, all fall in this range; note that the graph is very flat there.

If 3D movies routinely involved closeups of a book someone was reading, or the construction of a ship in a bottle, that’d be different. But they don’t. Most of the time, the point you’re looking at is far enough away to be practically at infinity, so your visual system should be expecting not to have to do any accommodation. And of course it doesn’t have to, because the screen is really quite far away (essentially at infinity).

So it seems implausible to me that the accommodation / convergence problem really matters. But this is very, very far from any area of expertise of mine, so maybe I’m wrong.