An appeals court in England has apparently ruled that Bayesian reasoning (also known as “correct reasoning”) about probability is invalid in the British court system.
The case was a civil dispute about the cause of a fire, and concerned an appeal against a decision in the High Court by Judge Edwards-Stuart. Edwards-Stuart had essentially concluded that the fire had been started by a discarded cigarette, even though this seemed an unlikely event in itself, because the other two explanations were even more implausible. The Court of Appeal rejected this approach, although it still upheld the overall judgement and dismissed the appeal.
I learned about this from Peter Coles’s blog, which also mentions a similar ruling in a previous case.
From the judge’s ruling (via understandinguncertainty.org):
When judging whether a case for believing that an event was caused in a particular way is stronger than the case for not so believing, the process is not scientific (although it may obviously include evaluation of scientific evidence) and to express the probability of some event having happened in percentage terms is illusory.
The judge is essentially saying that Bayesian inference is unscientific, which is the exact opposite of the truth. The hallmark of science is the systematic use of evidence to update one’s degree of belief in various hypotheses. The only way to talk about that coherently is in the language of probability theory, and specifically in the language of Bayesian inference.
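This kind of updating is easy to make concrete. Here's a minimal sketch in Python, with entirely made-up priors and likelihoods (the hypothesis names and all the numbers are illustrative, not taken from the actual case): even if a discarded cigarette is judged unlikely before looking at the evidence, it can end up as the most probable explanation once the evidence is folded in.

```python
# Three candidate explanations for the fire. The priors (which must
# sum to 1 if the hypotheses are exhaustive) and likelihoods below
# are hypothetical numbers chosen purely for illustration.
priors = {"cigarette": 0.2, "arson": 0.4, "arcing": 0.4}

# P(evidence | hypothesis): how well each explanation accounts
# for the evidence actually observed. Again, made-up values.
likelihoods = {"cigarette": 0.6, "arson": 0.05, "arcing": 0.1}

# Bayes' theorem: posterior is proportional to prior * likelihood.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for h, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {p:.3f}")
```

With these numbers the cigarette hypothesis ends up with about two-thirds of the posterior probability, despite starting with the smallest prior: the evidence favors it strongly enough to outweigh its initial implausibility.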
Apparently the judge believes that you’re only allowed to talk about probabilities for events that haven’t happened yet:
The chances of something happening in the future may be expressed in terms of percentage. Epidemiological evidence may enable doctors to say that on average smokers increase their risk of lung cancer by X%. But you cannot properly say that there is a 25 per cent chance that something has happened. Either it has or it has not.
The author of that blog post gives a good example of why this is nonsense:
So according to this judgement, it would apparently not be reasonable in a court to talk about the probability of Kate and William’s baby being a girl, since that is already decided as true or false.
Anyone who has ever played a card game should understand this. When I decide whether to bid a slam in a bridge game, or whether to fold in a poker game, or whether to take another card in blackjack, I do so on the basis of probabilities about what the other players have in their hands. The fact that the cards have already been dealt certainly doesn’t invalidate that reasoning.
I’m a mediocre bridge player and a lousy poker player, but I’d love to play with anyone who I thought would genuinely refuse to base their play on probabilistic reasoning.
Probabilities give a way, or rather the way, to talk precisely about degrees of knowledge in situations where information is incomplete. It doesn’t matter if the information is incomplete because some event hasn’t happened yet, or simply because I don’t know all about it.
By the way, Peter Coles makes a point that’s worth repeating about all this. Statisticians divide up into “Bayesian” and “frequentist” camps, but this sort of thing actually has very little to do with that schism:
First thing to say is that this is really not an issue relating to the Bayesian versus frequentist debate at all. It’s about a straightforward application of Bayes’ theorem which, as its name suggests, is a theorem; actually it’s just a straightforward consequence of the sum and product laws of the calculus of probabilities. No-one, not even the most die-hard frequentist, would argue that Bayes’ theorem is false.
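The derivation Coles alludes to really is just two lines. Writing the product rule both ways and dividing gives Bayes' theorem, and the sum rule supplies the denominator:

```latex
P(A \cap B) = P(A \mid B)\,P(B) = P(B \mid A)\,P(A)
\;\Longrightarrow\;
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}, \qquad P(B) > 0,

\text{where } P(B) = P(B \mid A)\,P(A) + P(B \mid \neg A)\,P(\neg A).
```

There is nothing here for a frequentist to object to; the disagreement between the camps is about what probabilities may be assigned to, not about the theorem itself.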
Even if you’re a benighted frequentist in matters of statistical methodology, the way you think about probabilities still involves Bayes’s theorem.