Archive for February, 2013

Bayes banned in Britain

Thursday, February 28th, 2013

An appeals court in England has apparently ruled that Bayesian reasoning (also known as “correct reasoning”) about probability is invalid in the British court system.

The case was a civil dispute about the cause of a fire, and concerned an appeal against a decision in the High Court by Judge Edwards-Stuart. Edwards-Stuart had essentially concluded that the fire had been started by a discarded cigarette, even though this seemed an unlikely event in itself, because the other two explanations were even more implausible. The Court of Appeal rejected this approach, although it still supported the overall judgement and disallowed the appeal.

I learned about this from Peter Coles’s blog, which also mentions a similar ruling in a previous case.

From the judge’s ruling (via understandinguncertainty.org):

When judging whether a case for believing that an event was caused in a particular way is stronger than the case for not so believing, the process is not scientific (although it may obviously include evaluation of scientific evidence) and to express the probability of some event having happened in percentage terms is illusory.

The judge is essentially saying that Bayesian inference is unscientific, which is the exact opposite of the truth. The hallmark of science is the systematic use of evidence to update one’s degree of belief in various hypotheses. The only way to talk about that coherently is in the language of probability theory, and specifically in the language of Bayesian inference.

Apparently the judge believes that you’re only allowed to talk about probabilities for events that haven’t happened yet:

The chances of something happening in the future may be expressed in terms of percentage. Epidemiological evidence may enable doctors to say that on average smokers increase their risk of lung cancer by X%. But you cannot properly say that there is a 25 per cent chance that something has happened. Either it has or it has not.

The author of that blog post gives a good example of why this is nonsense:

So according to this judgement, it would apparently not be reasonable in a court to talk about the probability of Kate and William’s baby being a girl, since that is already decided as true or false.

Anyone who has ever played a card game should understand this. When I decide whether to bid a slam in a bridge game, or whether to fold in a poker game, or whether to take another card in blackjack, I do so on the basis of probabilities about what the other players have in their hands. The fact that the cards have already been dealt certainly doesn’t invalidate that reasoning.

I’m a mediocre bridge player and a lousy poker player, but I’d love to play with anyone who I thought would genuinely refuse to base their play on probabilistic reasoning.

Probabilities give a way, or rather the way, to talk precisely about degrees of knowledge in situations where information is incomplete. It doesn’t matter if the information is incomplete because some event hasn’t happened yet, or simply because I don’t know all about it.
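This is easy to check numerically. Here's a minimal sketch (the card draw, the 80%-reliable witness, and all the numbers are invented for illustration): the card's colour is already settled by the time the witness speaks, yet Bayes' theorem still assigns a perfectly sensible probability to it.

```python
import random

def posterior_red_given_report(n=200_000, accuracy=0.8, seed=1):
    """A card is drawn (the outcome is already fixed), then a witness who is
    right with probability `accuracy` reports its colour. Among the trials
    where the report is 'red', count how often the card really is red."""
    rng = random.Random(seed)
    red_and_reported = reports = 0
    for _ in range(n):
        is_red = rng.random() < 0.5                  # the already-settled fact
        says_red = is_red if rng.random() < accuracy else not is_red
        if says_red:
            reports += 1
            red_and_reported += is_red
    return red_and_reported / reports

print(posterior_red_given_report())
```

The Monte Carlo frequency lands close to the Bayes answer, P(red | "red") = (0.8)(0.5) / [(0.8)(0.5) + (0.2)(0.5)] = 0.8, even though each individual card was "either red or it wasn't."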

By the way, Peter Coles makes a point that’s worth repeating about all this. Statisticians divide up into “Bayesian” and “frequentist” camps, but this sort of thing actually has very little to do with that schism:

First thing to say is that this is really not an issue relating to the Bayesian versus frequentist debate at all. It’s about a straightforward application of Bayes’ theorem which, as its name suggests, is a theorem; actually it’s just a straightforward consequence of the sum and product laws of the calculus of probabilities. No-one, not even the most die-hard frequentist, would argue that Bayes’ theorem is false.
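For reference, here's the one-line derivation the quote alludes to: Bayes' theorem is just the product rule read in both directions.

```latex
P(A \mid B)\,P(B) \;=\; P(A, B) \;=\; P(B \mid A)\,P(A)
\qquad\Longrightarrow\qquad
P(A \mid B) \;=\; \frac{P(B \mid A)\,P(A)}{P(B)} .
```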

Even if you’re a benighted frequentist in matters of statistical methodology, the way you think about probabilities still involves Bayes’s theorem.

Fun with electrostatics

Thursday, February 21st, 2013

I’m teaching our course in Electricity and Magnetism this semester, and even though I’ve done this plenty of times before, I still learn new things each time. Here are a couple from this semester.

1. Charged conducting disk.

Suppose I take a thin disk of radius R, made out of a conducting material, and put a given amount of charge Q on it. How does the charge distribute itself?  (In case it’s not clear, the disk lives in three-dimensional space but has negligible thickness. In other words, it’s a cylinder of radius R and height h, in the limit h << R.)

This turns out to have a surprisingly simple answer. Take a sphere of radius R, and distribute the charge uniformly over the surface. Now smash the sphere down to a disk by moving each element of surface area straight down parallel to the z axis. The resulting charge density is the answer.
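Spelling the construction out (the post doesn't give the formula, but it follows directly from the projection; σ denotes surface charge density):

```latex
% Uniform charge on the sphere, with disk radius r = R\sin\theta:
\sigma_{\text{sphere}} = \frac{Q}{4\pi R^2}.
% The sphere's area element R^2 \sin\theta \, d\theta \, d\phi is squashed
% onto the disk element r \, dr \, d\phi = R^2 \sin\theta \cos\theta \, d\theta \, d\phi,
% and both hemispheres land on the same annulus, so
\sigma_{\text{disk}}(r) \;=\; \frac{2\,\sigma_{\text{sphere}}}{\cos\theta}
\;=\; \frac{Q}{2\pi R\,\sqrt{R^2 - r^2}} .
```

Integrating this over the disk gives back Q, and the divergence at the rim is the familiar edge behavior of the charge on a conductor.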

My friend and colleague Ovidiu Lipan showed me a proof of this, and then I verified it numerically using Mathematica, so I’m confident it’s right. But I still have the feeling there’s more to the story than this. This result is simple enough that it seems like there should be a satisfying, intuitive reason why it’s true. Although Ovidiu’s proof is quite clever and elegant, it doesn’t give me the feeling that I understand why the result came out in this neat way. I’d love to hear any ideas.
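In the same spirit as that numerical verification, here's a quick sanity check sketched in Python (not the Mathematica version from the post). With R = Q = 1 and 1/(4πε₀) = 1, the squashed-sphere density should make the potential constant across the disk, equal to π/2; the substitution r = sin θ absorbs the density's edge singularity, so a plain midpoint rule suffices.

```python
import numpy as np

def disk_potential(rho, n=800):
    """Potential at distance rho from the center, in the plane of a unit disk
    carrying the 'squashed sphere' density sigma(r) = 1/(2*pi*sqrt(1 - r**2)),
    with Q = R = 1 and 1/(4*pi*eps0) = 1.  Substituting r = sin(theta) turns
    sigma(r) * r dr into the smooth weight sin(theta) dtheta / (2*pi)."""
    theta = (np.arange(n) + 0.5) * (np.pi / 2) / n   # midpoint grid in theta
    phi = (np.arange(n) + 0.5) * (2 * np.pi) / n     # midpoint grid in phi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    r = np.sin(T)
    dist = np.sqrt(r**2 + rho**2 - 2.0 * r * rho * np.cos(P))
    cell = (np.pi / 2 / n) * (2.0 * np.pi / n)
    return np.sum(np.sin(T) / dist) * cell / (2.0 * np.pi)

for rho in (0.0, 0.5, 0.9):
    print(f"V({rho}) = {disk_potential(rho):.5f}")   # all close to pi/2 = 1.57080
```

Constancy of the potential across the disk is precisely the condition for electrostatic equilibrium on a conductor, so this supports the claim numerically without explaining why it's true.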

Update: These notes by Kirk McDonald have the answer I was looking for. I’ll try to write a more detailed explanation at some point.

2. Electric field lines near a conducting sphere.

Take a conducting sphere and place it in a uniform external electric field. Find the resulting potential and electric field everywhere outside the sphere.

This is a classic problem in electrostatics. I’ve assigned it plenty of times before, but I learned a little something new about it, once again from the exceedingly clever Ovidiu Lipan (who apparently got it from an old book by Sommerfeld).

You can calculate the answer using standard techniques (separation of variables in spherical coordinates). The external field induces negative charge on the bottom of the sphere and positive charge on the top, distorting the field lines until they look like this:

[figure: field lines around the conducting sphere, bent by the induced charge]

This picture looks just as you’d expect. In particular, one of the first things you learn about electrostatics is that field lines must strike the surface of a conductor perpendicular to the surface.

Let’s zoom in on the region near the sphere’s equator:

The field lines either strike the southern hemisphere, emanate from the northern hemisphere, or miss the sphere entirely. All is as it should be.

Or is it? Let me put in a couple of additional field lines:

[figure: the same close-up with two additional field lines, drawn in red, approaching the equator]

The curves in red are legitimate electric field lines (i.e., the electric field at each point is exactly tangent to the curve), but they don’t hit the surface at a right angle as they’re supposed to. You can actually write down an exact equation for these lines and verify that they come in at 45-degree angles right up to the edge of the sphere.
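Here's where the 45 degrees comes from, sketched from the standard solution for a sphere of radius R in a uniform field E₀ along z:

```latex
V(r,\theta) = -E_0\left(r - \frac{R^3}{r^2}\right)\cos\theta,
\qquad
E_r = E_0\left(1 + \frac{2R^3}{r^3}\right)\cos\theta,
\quad
E_\theta = -E_0\left(1 - \frac{R^3}{r^3}\right)\sin\theta .
% Near the equatorial point of the surface, write r = R(1+u), \theta = \pi/2 + t:
E_r \approx -3E_0\,t, \qquad E_\theta \approx -3E_0\,u
\qquad (u, t \ll 1).
% Field lines obey dr/(r\,d\theta) = E_r/E_\theta, i.e. du/dt \approx t/u,
% so u^2 - t^2 = \text{const}; the lines through u = t = 0 are u = \pm t,
% which meet the surface at 45 degrees.
```

The hyperbolic pattern u² − t² = const is the generic behavior of field lines near a zero of the field, and the 45° lines are its separatrices.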

We constantly repeat to our students that electric field lines have to hit conductors at right angles. So is this a lie?

Ultimately, it’s a matter of semantics. You can say if you want that those red field lines don’t actually make it to the surface: right at the sphere’s equator, the electric field drops to zero, so you could legitimately say that the field line extends all the way along an open interval leading up to the sphere’s edge, but doesn’t include that end point. This means you have to allow an exception to another familiar rule: electric field lines always start at positive charges and end at negative charges (unless they go to infinity). Here we have field lines that just peter out at a place where the charge density is zero.

Alternatively, you can say that there’s an exception to the rule that says electric field lines have to hit conductors at right angles: they have to do this only if the field is nonzero at the point of contact. After all, the “must-hit-perpendicular” rule is really a rephrasing of the rule that says that the tangential component of the electric field must be zero at a conductor. The latter version is still true, but it implies the former only if the perpendicular component is nonzero.

Matthew Crawley, time traveler

Monday, February 18th, 2013

People sometimes ask me whether, as a scientist, I get annoyed by scientific inaccuracies in movies. Sometimes, I guess, but not usually. In science fiction, I think it’s fine if the fictional world you’ve created breaks the rules of our actual universe, as long as it consistently follows its own rules.

I actually find linguistic inaccuracies in historical fiction more annoying than scientific inaccuracies in science fiction. For some reason, linguistic anachronisms raise my nerd hackles.

I was thinking about this recently because, as a middle-aged Prius-driving NPR listener, I am legally obliged to watch Downton Abbey. 

Twice during the current season, characters have described themselves as being “on a learning curve,” in one case “on a steep learning curve.” This phrase is irritating enough in contemporary parlance, but in the mouths of 1920s characters it’s all the more grating.

I posted a gripe about it on Facebook, with some actual data from the awesome Google Ngram viewer:

[chart: Ngram frequencies of “learning curve” and “steep learning curve”]

A friend made the fair observation that “learning curve” goes back further than “steep learning curve”:

If Matthew Crawley and Tom Branson were linguistic trendsetters, maybe they could have been saying this in the 1920s, but I’m not buying it, and Ngram will help me justify my skepticism. I knew Ngram had an amazingly huge corpus of searchable books, but it turns out to be more powerful and flexible than I’d realized.

I suspect that the term was only used in a technical context, to describe actual graphs of learning, until much, much later. The citations of the phrase in the OED are consistent with this, but there are only a couple of them, so that’s not much evidence. Ngram can help with this: we can look at the phrase’s popularity in just fiction, rather than in all books.

The phrase takes off in fiction in the last couple of decades, around when (I claim) people actually started saying it in non-technical contexts. Also, in the early years, it was almost entirely an American phrase:

[charts: “learning curve” in fiction only, and in American vs. British English]

(That’s all books, not just fiction.)

Another Downton linguistic anachronism: “contact” as a verb. I’d originally searched Ngram for “contacted” to illustrate this, but it turns out that you can tell it to search for words used as particular parts of speech. Here’s “contact” as both verb and noun:

(Verb in blue, amplified by a factor of 10 for ease of viewing.)

The fact that you can do this strikes me as quite impressive technically. Not only have they scanned in all these books and converted them to searchable text, but they’ve parsed the sentences to identify all the parts of speech. That’s a nontrivial undertaking. As Groucho Marx apparently never said, “Time flies like an arrow; fruit flies like a banana.”

The phrase “have a crush on” also came up and sounded off to me, but I was pretty much wrong on that. It was starting to become common in the 1920s, although it was mostly American:

In hindsight, I should have known that: the Gershwin song “I’ve got a crush on you” dates from a bit later than Downton, but not that much.

Local TV news

Saturday, February 16th, 2013

Funny thing about living in a smallish city: I’m the only astrophysicist in town, so the news media sometimes ask me about space stories in the news. Here’s the latest.

Richmond’s Channel 8 talked to me about the Russian meteor. Video here and here.

I’m annoyed that I seem to have said this event was similar in size to Tunguska, when in fact it was much smaller. But I was doing this on virtually no preparation, so I guess it could have been worse.

What is a solid?

Thursday, February 14th, 2013

Potter Stewart would say “I know it when I see it,” but would he? The line between a solid and an extremely viscous liquid isn’t necessarily clear.

People sometimes say that glass is a liquid, citing as evidence the fact that the windows in medieval churches are thicker at the bottom, which suggests that the glass in the windows has been gradually flowing down over the centuries. But that appears to be a misunderstanding. From the venerable Usenet Physics FAQ list:

It is sometimes said that glass in very old churches is thicker at the bottom than at the top because glass is a liquid, and so over several centuries it has flowed towards the bottom.  This is not true.  In Mediaeval times panes of glass were often made by the Crown glass process.  A lump of molten glass was rolled, blown, expanded, flattened and finally spun into a disc before being cut into panes.  The sheets were thicker towards the edge of the disc and were usually installed with the heavier side at the bottom.  Other techniques of forming glass panes have been used but it is only the relatively recent float glass processes which have produced good quality flat sheets of glass.

The FAQ entry contains a bunch of references, including this AJP article, for those who want to geek out on this subject.

I was reminded of this when a friend pointed me to the pitch drop experiment at the University of Queensland. Someone put some pitch, which has a viscosity about 10¹¹ times that of water, in a funnel in 1930, and they’ve been letting it drip out ever since, at a rate of one drop every 10 years or so.

The next drop is due soon. You can hope to catch it on a live video feed at the site.

Here’s a picture of the experiment:

[photo: the pitch drop experiment at the University of Queensland]

And here’s what happens when you hit the pitch with a hammer:

[photo: the pitch shattered by a hammer blow]

I think you could be forgiven for mistaking this stuff for a solid.