At the end of every hard-earned day, the IPCC finds some reason to believe

Inspired by a new draft report from the Intergovernmental Panel on Climate Change, Slate has reposted an article by Daniel Engber from 2007 on how the IPCC quantifies uncertainty in probabilistic language. My colleague Matt Trawick pointed this out to me, rightly observing that it’s just the sort of thing I care about. Talking about science (not to mention talking about most other subjects) means talking about things where your knowledge is incomplete or uncertain. The only way to do that coherently is in the language of probability theory, so everyone should think about what probabilities mean.

Engber raises some very important issues, about which people (including scientists) are often confused. I either don’t understand his final conclusion, or else I just disagree with it, but either way he gives a lot of good information and insight en route to that conclusion. It’s definitely worth reading and thinking about.

The impetus behind the article was the decision by the IPCC to attach specific probabilities to various statements (such as “The Earth is getting warmer” and “Human activity is the cause”), as opposed to more general phrases such as “very likely.” According to Engber,

From a policy perspective, this sounds like a great idea. But when Moss and Schneider first made their recommendations, many members of the climate-change panel were justifiably reluctant to go along. As scientists, they’d been trained to draw statistical conclusions from a repeated experiment, and use percentages to describe their certainty about the results.

But that kind of analysis doesn’t work for global warming. You can’t roll the Earth the way you can a pair of dice, testing it over and over again and tabulating up the results.

It saddens (but does not surprise) me to think of scientists thinking this way. You don’t need repeated experiments to express things in terms of probabilities. If you did, you couldn’t talk about the probability that Marco Rubio will be the next US President, the probability of any particular large asteroid striking the Earth, or even the probability that it will rain tomorrow. But you can.

Engber immediately follows this with something quite correct and important:

At best, climate scientists can look at how the Earth changes over time and build simplified computer models to be tested in the lab. Those models may be excellent facsimiles of the real thing, and they may provide important, believable evidence for climate change. But the stats that come out of them—the percentages, the confidence intervals—don’t apply directly to the real thing.

It is true that a model is different from the system being modeled, and that you can’t simply take the probability that something happens in the model to be the same as the probability that it will happen in real life. In reality, you have to do a more complicated sum:

Probability that X will happen
= (Probability that my model accurately reflects reality) × (Probability that X will happen, according to my model)
+ (Probability that my model does not accurately reflect reality) × (Probability that X will happen, given that the model is wrong)

Three of the four terms in that expression are things that you have to estimate subjectively (although two of them add to 1, so there are really only two things to be estimated). Your model (probably a great big computer simulation) tells you the other one (Probability that X will happen, according to my model).

The fact that you have to make subjective estimates in order to say interesting things about the real world is sad, but there’s nothing to be done about it.
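To make the bookkeeping concrete, here is a minimal sketch of that sum in Python. Every number in it is invented purely for illustration; the model-reliability terms are exactly the subjective estimates described above, not real climate figures.

```python
# Law of total probability, conditioning on whether the model is right.
# All numbers below are illustrative, not real climate estimates.

p_model_correct = 0.9      # subjective: how much you trust the model
p_x_given_model = 0.95     # what the model itself reports for event X
p_x_given_no_model = 0.5   # subjective: your best guess if the model is wrong

# P(model wrong) is fixed by the first estimate, so only two of the
# three subjective-looking numbers are independent.
p_model_wrong = 1.0 - p_model_correct

p_x = (p_model_correct * p_x_given_model
       + p_model_wrong * p_x_given_no_model)

print(f"P(X) = {p_x:.3f}")  # 0.9*0.95 + 0.1*0.5 = 0.905
```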

People who don’t like the Bayesian approach to scientific reasoning use this subjectivity as a reason to disparage it, but that’s an error. Whether you use the word “probability” or not, your degree of belief in most interesting scientific questions is somewhere between complete certainty and complete disbelief. If someone asks you to characterize the degree to which you believe, say, that human activity is warming the planet, your answer will depend on your (subjective) degree of belief in a bunch of other things (how likely the scientists are to have done their calculations correctly, how likely it is that there’s a massive conspiracy to hide the truth, etc.). The fact that Bayesian methods incorporate this sort of subjectivity is not a flaw in the methods; it’s inevitable.

Some people seem to think that you shouldn’t use the precise-sounding mathematical machinery of probability theory in situations involving subjectivity, but that has never made any sense to me: you can’t avoid the subjectivity, so you’re choosing between precise-and-subjective and vague-and-subjective. Go with precise every time.

Engber has a nice discussion of the distinction between “statistical” and “subjective” notions of probability, including some history that I didn’t know. In particular, Poisson seems to have been the first to recognize this distinction in the early 1800s, referring to the two as chance and raison de croire (reason to believe). Some people are queasy about using actual numbers for the second category, as the IPCC is doing, but they shouldn’t be.

Here’s Engber being very very right:

The process of mapping judgments to percentages has two immediate benefits. First, there’s no ambiguity of meaning; politicians and journalists aren’t left to make their own judgments about the state of the science on climate change. Second, a consistent use of terms makes it possible to see the uptick in scientific confidence from one report to the next; since 2001, we’ve gone from “likely” to “very likely,” and from 66 percent to 90 percent.

But I think he goes astray here:

But the new rhetoric of uncertainty has another effect—one that provides less clarity instead of more. By tagging subjective judgments with percent values, the climatologists erode the long-standing distinction between chance and raison de croire. As we read the report, we’re likely to assume that a number represents a degree of statistical certainty, rather than an expert’s confidence in his or her opinion. We’re misled by our traditional understanding of percentages and their scientific meaning.

To the charge that the IPCC is eroding the distinction, I say “Erode, baby, erode!” Raisons de croire are probabilities, and we shouldn’t be shy about admitting it.

Engber’s final sentence:

However valid its conclusions, the report toys with our intuitions about science—that a number is more precise than a word, that a statistic is more accurate than a belief.

I don’t understand the first objection. A number is more precise than a word. I’m missing the part where this is a bad thing. The last clause is so muddled that I don’t know how to interpret it: a probability (which I think is what he means by “statistic”) is a way of characterizing a belief.


Calculate or simulate?

A couple of weeks ago, Slate published two oddly similar articles within a few days of each other: Extra Points Are For Losers and The Supreme Court Justice Death Calculator.

You might not initially think the articles have that much in common: one’s about football strategy and one’s about the future of the US Supreme Court. But beneath the surface they’re practically the same: both are calculations of joint probabilities for multiple events.

The football article points out that in a certain situation it’s provably the right strategy for an NFL team to go for a two-point conversion instead of a single extra point. Specifically, if you’re down 14 points near the end of the game, and you score a touchdown, you should go for 2.

You can go to the article for details, but here’s the idea. The decision only matters under the assumption that you’re going to get another touchdown and the other team isn’t going to score, so let’s assume that that’s going to happen. If you take the usual extra point each time, you’re going into overtime (assuming your kicker always makes the extra point, which is roughly true in pro football), and you’ve got a 50-50 shot at a win. On the other hand, if you go for the two-point conversion, either you make it and are guaranteed a win, or you miss it, in which case you go for 2 again the next time and have a chance to throw the game into overtime. You can check by a straightforward calculation that the second strategy gives you better than even odds of winning (for reasonable estimates of the likelihood of a two-point conversion).
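Here is that straightforward calculation spelled out as a short Python sketch. The two-point conversion rate of 0.48 is an assumed ballpark figure, not a number taken from the article:

```python
# Down 14, you score a TD (now down 8) and assume you'll score one more
# TD while holding the other team scoreless. Kicked extra points are
# treated as automatic, and overtime as a coin flip.

p2 = 0.48    # assumed probability of converting a two-point try
p_ot = 0.5   # chance of winning in overtime

# Strategy 1: kick both extra points -> tie game -> overtime.
win_kick = p_ot

# Strategy 2: go for 2 after the first TD.
#   Make it (down 6): the second TD plus a kick wins outright.
#   Miss it (down 8): go for 2 again after the second TD;
#   making that forces overtime, missing loses.
win_go_for_two = p2 * 1.0 + (1 - p2) * (p2 * p_ot)

print(f"kick twice: {win_kick:.3f}")         # 0.500
print(f"go for two: {win_go_for_two:.3f}")   # 0.48 + 0.52*0.48*0.5 = 0.605
```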

The Supreme Court article contains a slightly macabre app that lets you calculate the probabilities of various combinations of Supreme Court justices dying during the second Obama administration. Want to know the odds that Obama will get to appoint a replacement for one of the conservative justices? One of the liberals? One of each? You can play around to your heart’s content.

The math underlying both of these calculations is exactly the same. It’s pretty much just

P(A and B) = P(A) P(B).

That is, to get the probability that two independent events both occur, you multiply together the probabilities of each one.

Oddly, the two articles take different approaches to calculating the probabilities. The football article just calculates them directly by doing the multiplication, but the Supreme Court article estimates them by simulation. If you ask the app for the probability that both Scalia and Kagan will die, it runs 10,000 simulations of the future and counts up the results.

That’s actually not a very good thing to do, especially if you’re interested in low-probability combinations such as this one. If you ask the app that question repeatedly, the numbers bounce around: 0.32%, 0.26%, 0.27%, 0.41%.

In the guts of the program there must be probabilities for each of the individual events, so you could answer this question by simply multiplying them together. The result wouldn’t jump around like that and would be more precise than the simulation-based one (although not necessarily more accurate — as the article points out, the calculation is based on some assumptions that might not be correct).

You could reduce the scatter by raising the number of simulations, of course, but it’s odd to use simulations in this situation to begin with. Estimating probabilities via simulation is a great tool when it’s impossible or difficult to calculate the probabilities exactly, but in a situation like this it’s quicker, simpler, and more precise to just calculate them.
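To see the contrast, here is a sketch in Python with invented per-justice probabilities (the app’s internal numbers aren’t given in the article). The direct product is one stable number; the 10,000-trial simulation jumps around from run to run, just as described above:

```python
import random

# Hypothetical per-justice probabilities of dying during the term,
# invented purely for illustration.
p_a, p_b = 0.14, 0.02

# Direct calculation: independent events, so just multiply.
print(f"exact:     {p_a * p_b:.4%}")  # 0.2800%

# Simulation: 10,000 trials, as the app does. Each run gives a
# slightly different answer because of sampling noise.
for _ in range(3):
    trials = 10_000
    hits = sum(random.random() < p_a and random.random() < p_b
               for _ in range(trials))
    print(f"simulated: {hits / trials:.4%}")
```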


Kahneman on taxis

The BBC podcast More or Less recently ran an interview with Daniel Kahneman, the psychologist who won a Nobel Prize in economics.

He tells two stories to illustrate some point about people’s intuitive reasoning about probabilities. Here’s a rough, slightly abridged transcript of the relevant part:

I will tell you about a city in which two taxi companies operate. One of the companies operates green cars, and the other operates blue cars. 85% of the cars are green, and 15% are blue. There was a hit-and-run accident at night that clearly involved a taxi, and there was a witness who thought that the car was blue. They tested the accuracy of the witness, and they showed that under similar conditions, the witness was accurate 80% of the time. What is your probability that the cab in question was blue, as the witness said, when blue is the minority company?

Here is a slight variation. The two taxi companies have equal numbers of cabs, but 85% of the accidents are due to the green taxis, and 15% are due to the blue taxis. The rest of the story is the same. Now what is the probability that the cab in the accident was blue?

Let’s not bother doing a detailed calculation. Instead, let me ask a qualitative multiple-choice question. Which of the following is true?

  1. The probability that the cab is blue is greater in the first scenario than the second.
  2. The probability that the cab is blue is greater in the second scenario than the first.
  3. The two probabilities are equal.

This pair of stories is supposed to illustrate ways in which people’s intuition fails them. Supposedly, most people’s intuition strongly leads them to one of the incorrect answers above, and they find the correct one quite counterintuitive. Personally, I found the correct answer to be the intuitive one, but that’s probably because I’ve spent too much time thinking about this sort of thing.

I wanted to leave a bit of space before revealing the correct answer, but here it is:
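For anyone who wants to verify it, here’s a quick Bayes’-theorem sketch (my addition, not part of the original post). The key observation is that the prior probability that the accident cab is blue is 15 percent in both scenarios (by cab count in the first, by accident rate in the second), so the two posteriors are identical and the answer is choice 3:

```python
def posterior_blue(prior_blue: float, accuracy: float) -> float:
    """P(cab is blue | witness says blue), by Bayes' theorem."""
    says_blue_given_blue = accuracy           # correct identification
    says_blue_given_green = 1.0 - accuracy    # mistaken identification
    num = says_blue_given_blue * prior_blue
    den = num + says_blue_given_green * (1.0 - prior_blue)
    return num / den

# Scenario 1: 15% of cabs are blue.  Scenario 2: 15% of accidents
# involve blue cabs.  Either way, the relevant prior for "the cab in
# this accident" is 0.15, so the two posteriors are the same number.
print(posterior_blue(0.15, 0.80))  # 0.4138... in both scenarios
```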
