An appeals court in England has apparently ruled that Bayesian reasoning (also known as “correct reasoning”) about probability is invalid in the British court system.

The case was a civil dispute about the cause of a fire, and concerned an appeal against a decision in the High Court by Judge Edwards-Stuart. Edwards-Stuart had essentially concluded that the fire had been started by a discarded cigarette, even though this seemed an unlikely event in itself, because the other two explanations were even more implausible. The Court of Appeal rejected this approach, although it still upheld the overall judgement and dismissed the appeal.

I learned about this from Peter Coles’s blog, which also mentions a similar ruling in a previous case.

From the judge’s ruling (via understandinguncertainty.org):

When judging whether a case for believing that an event was caused in a particular way is stronger than the case for not so believing, the process is not scientific (although it may obviously include evaluation of scientific evidence) and to express the probability of some event having happened in percentage terms is illusory.

The judge is essentially saying that Bayesian inference is unscientific, which is the exact opposite of the truth. The hallmark of science is the systematic use of evidence to update one’s degree of belief in various hypotheses. The only way to talk about that coherently is in the language of probability theory, and specifically in the language of Bayesian inference.

Apparently the judge believes that you’re only allowed to talk about probabilities for events that haven’t happened yet:

The chances of something happening in the future may be expressed in terms of percentage. Epidemiological evidence may enable doctors to say that on average smokers increase their risk of lung cancer by X%. But you cannot properly say that there is a 25 per cent chance that something has happened. Either it has or it has not.

The author of that blog post gives a good example of why this is nonsense:

So according to this judgement, it would apparently not be reasonable in a court to talk about the probability of Kate and William’s baby being a girl, since that is already decided as true or false.

Anyone who has ever played a card game should understand this. When I decide whether to bid a slam in a bridge game, or whether to fold in a poker game, or whether to take another card in blackjack, I do so on the basis of probabilities about what the other players have in their hands. The fact that the cards have already been dealt certainly doesn’t invalidate that reasoning.
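The bridge case can even be put in numbers. Here's a minimal sketch (the hand is hypothetical, just for illustration) of the kind of calculation a player does implicitly, even though the deal has already happened:

```python
from math import comb

# Toy bridge example (hypothetical hand): the cards are already dealt,
# but probability still measures what I know. I hold 13 cards including
# 2 of the 4 aces; my partner holds 13 of the 39 cards I cannot see.
# What is the chance both missing aces are in partner's hand?
unseen = 39        # cards I cannot see
partner_cards = 13 # cards in partner's hand
missing_aces = 2   # aces not in my hand

# Hypergeometric count: both missing aces go to partner,
# and the rest of partner's hand is filled from the other unseen cards.
p_both = (comb(missing_aces, 2)
          * comb(unseen - missing_aces, partner_cards - 2)
          / comb(unseen, partner_cards))
print(round(p_both, 3))  # 0.105 — about one deal in ten
```

The fact that the answer refers to a deal that is already fixed changes nothing about the arithmetic.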

I’m a mediocre bridge player and a lousy poker player, but I’d love to play with anyone who I thought would genuinely refuse to base their play on probabilistic reasoning.

Probabilities give a way, or rather *the* way, to talk precisely about degrees of knowledge in situations where information is incomplete. It doesn’t matter if the information is incomplete because some event hasn’t happened yet, or simply because I don’t know all about it.

By the way, Peter Coles makes a point that’s worth repeating about all this. Statisticians divide up into “Bayesian” and “frequentist” camps, but this sort of thing actually has very little to do with that schism:

First thing to say is that this is really not an issue relating to the Bayesian versus frequentist debate at all. It’s about a straightforward application of Bayes’ theorem which, as its name suggests, is a theorem; actually it’s just a straightforward consequence of the sum and product laws of the calculus of probabilities. No-one, not even the most die-hard frequentist, would argue that Bayes’ theorem is false.

Even if you’re a benighted frequentist in matters of statistical methodology, the way you think about probabilities still involves Bayes’ theorem.
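To make the point concrete, here is a minimal Bayesian update for a fire-cause dispute like this one. All priors and likelihoods are invented for illustration; nothing here comes from the actual case:

```python
# Three candidate causes, each with a prior probability and a likelihood
# of producing the observed evidence (all numbers invented).
priors      = {"cigarette": 0.05, "arcing": 0.03, "arson": 0.02}
likelihoods = {"cigarette": 0.60, "arcing": 0.20, "arson": 0.10}

# Bayes' theorem is just the product rule plus normalization:
# P(H | E) ∝ P(H) * P(E | H).
unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())
posteriors = {h: v / total for h, v in unnorm.items()}

# The cigarette hypothesis, unlikely a priori, dominates once the
# alternatives turn out to be even less well supported by the evidence —
# exactly the structure of Edwards-Stuart's original reasoning.
```

With these made-up numbers the cigarette hypothesis ends up with a posterior near 0.79, despite its small prior.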

Ted, I think the problem here is that the way the judges wrote the decision makes it sound as if they’re ruling on the scientific validity of Bayesian reasoning, when they’re really ruling on its legal validity. When you use Bayesian reasoning to determine causation, you’re essentially saying that X is the most likely cause of Y. But “most likely” doesn’t really rise to the level of legal certainty; you need direct evidence to conclude that this is indeed what happened. I think that’s the correct yardstick to use given that court decisions affect life, liberty, and property. I certainly would not want to end up convicted of a crime I didn’t commit based on Bayesian reasoning, although I’m perfectly comfortable with astrophysicists using it to determine the origins of the universe.

At the risk of agreeing with Tim, I would suggest two ways a devout Bayesian might excavate a kernel of reason from the judge’s dung-heap of a decision.

First, it might be reasonable to consider an additional catch-all hypothesis for the “unknown unknowns.” If the argument is that A is probably true because, even though it is unlikely, the alternatives are less likely, I would want considerable assurance that I had exhausted the alternatives.
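One way to sketch the catch-all idea (all numbers invented, continuing the fire example): give “other, unconsidered causes” some prior mass and a diffuse likelihood, and watch the front-runner’s posterior soften:

```python
# Same toy fire example, now with a catch-all "other" hypothesis for
# the unknown unknowns (all numbers invented for illustration).
priors      = {"cigarette": 0.05, "arcing": 0.03, "arson": 0.02, "other": 0.10}
likelihoods = {"cigarette": 0.60, "arcing": 0.20, "arson": 0.10, "other": 0.15}

unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())
posteriors = {h: v / total for h, v in unnorm.items()}

# "cigarette" is still the front-runner, but its posterior drops from
# roughly 0.79 (with no catch-all) to roughly 0.57 once "other" absorbs
# some of the probability — the assurance the commenter is asking for.
```

The qualitative conclusion survives, but the strength of the conclusion depends heavily on how much prior mass the unknown alternatives deserve.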

Secondly, we should not stop after computing the posterior probabilities, but also consider the asymmetric costs of wrongful conviction relative to failure to convict.

Maybe the judge’s requirement to consider the likelihood of A in absolute terms, without considering the likelihood of B and C, is a heuristic that at least approximates a Bayesian decision-making process that considers “unknown unknowns” and asymmetric costs.
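The asymmetric-cost point can be made precise with a standard expected-loss decision rule. A minimal sketch, with the posterior and both costs invented purely for illustration:

```python
# Expected-loss decision: weigh the posterior against asymmetric costs
# rather than simply picking the most probable hypothesis.
# All numbers are invented for illustration.
p_guilty = 0.80                  # posterior probability of guilt
cost_wrongful_conviction = 10.0  # cost of convicting an innocent person
cost_wrongful_acquittal = 1.0    # cost of letting a guilty person go

expected_loss_convict = (1 - p_guilty) * cost_wrongful_conviction
expected_loss_acquit = p_guilty * cost_wrongful_acquittal
decision = "convict" if expected_loss_convict < expected_loss_acquit else "acquit"

# With these costs, even an 80% posterior leads to acquittal:
# 0.2 * 10 = 2.0 exceeds 0.8 * 1 = 0.8.
```

This is exactly why “most probable” and “legally proven” can come apart without anything being wrong with the underlying probability calculus.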

The other thing here is that we don’t know enough about British jurisprudence. I’m guessing that the laws allow for consideration of scientific evidence, which thus requires the court to come up with a legal definition of “scientific.” Such an act should not, however, be construed as the judges ruling on the scientific method itself, which would be as ridiculous as having physicists make legal decisions.

I object to the distinction between valid scientific reasoning and valid legal reasoning. There is only valid (or invalid) reasoning. Outside of the confines of pure logic and mathematics, all reasoning is based on incomplete or uncertain knowledge, and the only way to understand that coherently is the language of probability theory.

I actually have no opinion about whether the judge’s ruling was right or wrong, not having studied the merits of the particular case. What I do say is that the stated logical basis for the ruling is incoherent. The final conclusion may be right for other reasons.

I think the problem could stem from the fact that the expert is applying the evidence to his own prior, when it is the purview of judge and jury, not the expert, to judge the case. A Bayesian could perhaps tell the judge and jury how to correctly update their probabilities, but maybe the problem is that he applies his own prior (that the N alternatives have prior probabilities p_i, and all other alternatives have prior probability 0) instead of stating something like “If you take these to be the only possible alternatives, the evidence points to X as the most likely alternative.”

“I object to the distinction between valid scientific reasoning and valid legal reasoning. There is only valid (or invalid) reasoning.”

Of course. However, such reasoning can and should be applied differently. Ideally, a scientist is not attached to any hypothesis, but rather provisionally accepts whichever hypotheses are sufficiently probable; if the opposite hypothesis had the same probability, he would have the same confidence in it. In a legal setting, as a matter of principle only the hypothesis “defendant is guilty” can be proved at all (with some level of confidence); “defendant is innocent” cannot be proved, but rather is assumed whenever “defendant is guilty” has a low probability — where “low” is actually quite high, since it is generally regarded as worse to punish an innocent defendant than to let a guilty one go free.

Of course, these are just general remarks and have no direct bearing on the case discussed.