Stephanie Coontz has a piece in the Sunday Review section of today’s *New York Times* under the headline “When Numbers Mislead” about times when the average of some quantity can prove to be misleading. I’m always glad to see articles about quantitative reasoning show up in the mainstream press, and I think she gets most of the important things right. Way to go, *Times*!

Coontz starts with the observation that a small number of outliers can make the average (specifically the mean) value different from the typical value: if Warren Buffett moves to a small town, the average wealth goes way up, but the typical person’s situation hasn’t changed. This is familiar to people who regularly think about this sort of thing, but lots of people have never thought about it, and it’s worth repeating.
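The Buffett point is easy to see with a toy calculation (the numbers here are made up for illustration): one extreme outlier drags the mean far from the typical value, while the median barely notices.

```python
# A town of 99 typical residents plus one billionaire (hypothetical numbers).
town_wealth = [50_000] * 99 + [100_000_000_000]

mean = sum(town_wealth) / len(town_wealth)
median = sorted(town_wealth)[len(town_wealth) // 2]

print(f"mean:   {mean:,.0f}")    # roughly a billion dollars per resident
print(f"median: {median:,.0f}")  # still 50,000
```

The "average" resident is a billionaire by the mean, but the typical resident's wealth is unchanged.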

She then gives an example of a way that this can lead to incorrect policy decisions:

Outliers can also pull an average down, leading social scientists to overstate the risks of particular events.

Most children of divorced parents turn out to be as well adjusted as children of married parents, but the much smaller number who lead very troubled lives can lower the average outcome for the whole group, producing exaggerated estimates of the impact of divorce.

I actually think she’s got something wrong here. I don’t think that using the average unhappiness leads to “exaggerated” estimates. The total unhappiness caused by divorces is the number of divorces times the average unhappiness per divorce. That’s true whether the unhappiness is evenly spread out or is concentrated in a small number of people.

Knowing whether the distribution is smooth or lumpy may well be important — the optimal policy may be different in the two cases — but not because one is “exaggerated”.
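A quick sketch of the point, with made-up "unhappiness" scores: two groups can have exactly the same total and mean, with the harm spread evenly in one and concentrated in a few people in the other. The mean can't tell them apart; the median can.

```python
# Hypothetical unhappiness scores for two groups of 10 children each.
even = [2] * 10          # everyone mildly affected
lumpy = [0] * 9 + [20]   # one child severely affected, the rest fine

total_even, total_lumpy = sum(even), sum(lumpy)      # both 20
mean_even = total_even / len(even)                   # 2.0
mean_lumpy = total_lumpy / len(lumpy)                # 2.0 -- identical means

# The medians differ, revealing the lumpiness:
median_even = sorted(even)[len(even) // 2]           # 2
median_lumpy = sorted(lumpy)[len(lumpy) // 2]        # 0
```

Neither mean is "exaggerated": the total harm is the same in both cases. What differs is how it's distributed, which is a separate (and possibly policy-relevant) fact.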

Coontz cites a variety of other examples of times when it’s important to be careful when thinking about averages. For instance, there’s a lot of variation in how people respond to a major loss, so we should be too quick to pathologize people whose response is different from the “usual.”

When we assume that “normal” people need “time to heal,” or discourage individuals from making any decisions until a year or more after a loss, as some grief counselors do, we may be giving inappropriate advice. Such advice can cause people who feel ready to move on to wonder if they are hardhearted.

Another: Married people are happier than unmarried people, but that doesn’t mean that marriage causes happiness. It turns out that the married people tended to be happier before they got married.

The last one is a very important point, but it seems oddly out of place in this article. This is not an "averages-are-misleading" example at all. It's a classic "correlation-is-not-causation" example. These are both important points, but I got kind of confused by the way Coontz blended them together.

By the way, I’m unable to mention correlation and causation without referring to this:

Actually, that’s my main quibble with the article: Coontz talks about a bunch of different things (the mean of a skewed distribution is different from the typical value, sometimes variances are large so the mean isn’t typical, correlation is not causation), so the article feels a bit like a grab-bag of pitfalls in statistical reasoning rather than a coherent whole.

But that’s OK: there are lots of worse things you could put in the pages of the *Times* than a grab-bag of pitfalls in statistical reasoning.

Some comments:

“should be too quick to pathologize people”

Is there a “not” missing?

Correlation and causation: Most people who die are married.

A good article on a similar theme is Stephen Jay Gould’s essay “The median is not the message”, which also has a great title. Have you read it?

I think that I did read that, a long time ago, but I don’t remember much about it. In general, that’s the sort of thing that Gould does very well. I’ll hunt down a copy.

Oh, and yes, there’s a “not” missing.