Inspired by a new draft report from the Intergovernmental Panel on Climate Change, Slate has reposted a 2007 article by Daniel Engber on how the IPCC expresses uncertainty in probabilistic language. My colleague Matt Trawick pointed this out to me, rightly observing that it’s just the sort of thing I care about. Talking about science (not to mention talking about most other subjects) means talking about things where your knowledge is incomplete or uncertain. The only way to do that coherently is in the language of probability theory, so everyone should think about what probabilities mean.
Engber raises some very important issues, about which people (including scientists) are often confused. I either don’t understand his final conclusion, or else I just disagree with it, but either way he gives a lot of good information and insight en route to that conclusion. It’s definitely worth reading and thinking about.
The impetus behind the article was the decision by the IPCC to attach specific probabilities to various statements (such as “The Earth is getting warmer” and “Human activity is the cause”), as opposed to more general phrases such as “very likely.” According to Engber,
From a policy perspective, this sounds like a great idea. But when Moss and Schneider first made their recommendations, many members of the climate-change panel were justifiably reluctant to go along. As scientists, they’d been trained to draw statistical conclusions from a repeated experiment, and use percentages to describe their certainty about the results.
But that kind of analysis doesn’t work for global warming. You can’t roll the Earth the way you can a pair of dice, testing it over and over again and tallying up the results.
It saddens (but does not surprise) me to think of scientists thinking this way. You don’t need repeated experiments to express things in terms of probabilities. If you did, you couldn’t talk about the probability that Marco Rubio will be the next US President, the probability of any particular large asteroid striking the Earth, or even the probability that it will rain tomorrow. But you can.
Engber immediately follows this with something quite correct and important:
At best, climate scientists can look at how the Earth changes over time and build simplified computer models to be tested in the lab. Those models may be excellent facsimiles of the real thing, and they may provide important, believable evidence for climate change. But the stats that come out of them—the percentages, the confidence intervals—don’t apply directly to the real thing.
It is true that a model is different from the system being modeled, and that you can’t simply take the probability that something happens in the model to be the same as the probability that it will happen in real life. In reality, you have to do a more complicated sum:
Probability that X will happen
= (Probability that my model accurately reflects reality) × (Probability that X will happen, according to my model)
+ (Probability that my model does not accurately reflect reality) × (Probability that X will happen, given that my model is wrong)
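In symbols (the notation here is mine, not Engber’s): if M stands for “my model accurately reflects reality,” this is just the law of total probability:

P(X) = P(M) P(X | M) + P(not M) P(X | not M)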
Three of the four terms in that expression are things that you have to estimate subjectively (although the two model-accuracy terms add to 1, so there are really only two independent things to estimate). Your model (probably a great big computer simulation) supplies the remaining one: the probability that X will happen, according to the model.
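To make the arithmetic concrete, here is a minimal sketch in Python. Every number in it is a made-up illustration of mine, not anything from the IPCC report or any actual climate model, and the variable names are likewise my own:

    # Law of total probability: weight each conditional probability
    # by how much you trust (or distrust) the model.
    # All three inputs below are hypothetical, for illustration only.

    p_model_ok = 0.9        # subjective: P(model accurately reflects reality)
    p_x_given_ok = 0.95     # from the simulation: P(X | model is right)
    p_x_given_wrong = 0.5   # subjective: P(X | model is wrong)

    p_x = p_model_ok * p_x_given_ok + (1 - p_model_ok) * p_x_given_wrong
    print(p_x)  # 0.905

The simulation supplies p_x_given_ok; the other two numbers are the subjective estimates there’s no avoiding.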
The fact that you have to make subjective estimates in order to say interesting things about the real world is sad, but there’s nothing to be done about it.
People who don’t like the Bayesian approach to scientific reasoning use this subjectivity as a reason to disparage it, but that’s an error. Whether you use the word “probability” or not, your degree of belief in most interesting scientific questions is somewhere between complete certainty and complete disbelief. If someone asks you to characterize the degree to which you believe, say, that human activity is warming the planet, your answer will depend on your (subjective) degree of belief in a bunch of other things (how likely the scientists are to have done their calculations correctly, how likely it is that there’s a massive conspiracy to hide the truth, etc.). The fact that Bayesian methods incorporate this sort of subjectivity is not a flaw in the methods; it’s inevitable.
Some people seem to think that you shouldn’t use the precise-sounding mathematical machinery of probability theory in situations involving subjectivity, but that has never made any sense to me: you can’t avoid the subjectivity, so you’re choosing between precise-and-subjective and vague-and-subjective. Go with precise every time.
Engber has a nice discussion of the distinction between “statistical” and “subjective” notions of probability, including some history that I didn’t know. In particular, Poisson seems to have been the first to recognize this distinction, in the early 1800s, referring to the two as chance and raison de croire (reason to believe). Some people are queasy about attaching actual numbers to the second category, as the IPCC is doing, but they shouldn’t be.
Here’s Engber being very, very right:
The process of mapping judgments to percentages has two immediate benefits. First, there’s no ambiguity of meaning; politicians and journalists aren’t left to make their own judgments about the state of the science on climate change. Second, a consistent use of terms makes it possible to see the uptick in scientific confidence from one report to the next; since 2001, we’ve gone from “likely” to “very likely,” and from 66 percent to 90 percent.
But I think he goes astray here:
But the new rhetoric of uncertainty has another effect—one that provides less clarity instead of more. By tagging subjective judgments with percent values, the climatologists erode the long-standing distinction between chance and raison de croire. As we read the report, we’re likely to assume that a number represents a degree of statistical certainty, rather than an expert’s confidence in his or her opinion. We’re misled by our traditional understanding of percentages and their scientific meaning.
To the charge that the IPCC is eroding the distinction, I say “Erode, baby, erode!” Raisons de croire are probabilities, and we shouldn’t be shy about admitting it.
Engber’s final sentence:
However valid its conclusions, the report toys with our intuitions about science—that a number is more precise than a word, that a statistic is more accurate than a belief.
I don’t understand the first objection. A number is more precise than a word. I’m missing the part where this is a bad thing. The last clause is so muddled that I don’t know how to interpret it: a probability (which I think is what he means by “statistic”) is a way of characterizing a belief.