Left-handed people don’t die young

The New Yorker has a rundown of the studies looking for links between left-handedness and various other traits. Contrary to beliefs of 100 years ago, left-handed people aren’t more likely to be criminals or schizophrenic, but we do do better, on average, in certain kinds of cognitive tests. So there.

A long time ago, I used to hear about how, as a left-hander, I should expect to die young and poor. As the New Yorker piece points out, that was debunked a long time ago. Studies showing that left-handers die younger than right-handers reached that conclusion due to a problem well known to astrophysicists: selection bias.

Some stars are giant stars, and some stars aren’t. Suppose that you tried to figure out what percentage of stars were giants. If you do a survey of the nearest stars, you get one answer, but if you do a survey of very distant stars, you get a different answer: in the latter case, you find a higher percentage of giants. You might conclude that something in our local environment stops giant stars from forming. But in fact there’s a different explanation: it’s easier to spot giant stars than small stars. When we’re looking nearby, we see all the stars, but when we’re looking far away, we miss some, and the ones we miss tend not to be giants.

The same thing happened with studies of left-handed people. When we look at young people, we find all the left-handers, but when we look at old people, we miss some, because in the past left-handed people used to be “converted” to right-handedness. (This happened to my uncle, for instance.) So when you look at a sample of old people, it looks as though a bunch of lefties have gone missing, and if you don’t take into account the fact that lefties used to be converted, you’ll conclude that lefties die young.
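If you like, you can watch the effect in a toy simulation. The numbers below are made up purely for illustration (10% of people born left-handed, with older cohorts more likely to have been converted as children); nobody in this toy population dies early, yet the apparent fraction of lefties falls steadily with age.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up illustrative numbers: 10% of people are born left-handed, and the
# probability of childhood "conversion" rises with the cohort's age.
for age in range(20, 90, 10):
    p_converted = (age - 20) / 100            # 0% for 20-year-olds, 60% for 80-year-olds
    born_left = rng.random(100_000) < 0.10
    still_left = born_left & (rng.random(100_000) >= p_converted)
    print(age, round(still_left.mean(), 3))   # the *apparent* lefty fraction drops with age
```

A naive reading of that output looks exactly like left-handers dying off, even though every cohort has the same mortality.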

 

At the end of every hard-earned day, the IPCC finds some reason to believe

Inspired by a new draft report from the Intergovernmental Panel on Climate Change, Slate has reposted a 2007 article by Daniel Engber on how the IPCC quantifies uncertainty in probabilistic language. My colleague Matt Trawick pointed this out to me, rightly observing that it’s just the sort of thing I care about. Talking about science (not to mention talking about most other subjects) means talking about things where your knowledge is incomplete or uncertain. The only way to do that coherently is in the language of probability theory, so everyone should think about what probabilities mean.

Engber raises some very important issues, about which people (including scientists) are often confused. I either don’t understand his final conclusion, or else I just disagree with it, but either way he gives a lot of good information and insight en route to that conclusion. It’s definitely worth reading and thinking about.

The impetus behind the article was the decision by the IPCC to attach specific probabilities to various statements (such as “The Earth is getting warmer” and “Human activity is the cause”), as opposed to more general phrases such as “very likely.” According to Engber,

From a policy perspective, this sounds like a great idea. But when Moss and Schneider first made their recommendations, many members of the climate-change panel were justifiably reluctant to go along. As scientists, they’d been trained to draw statistical conclusions from a repeated experiment, and use percentages to describe their certainty about the results.

But that kind of analysis doesn’t work for global warming. You can’t roll the Earth the way you can a pair of dice, testing it over and over again and tabulating up the results.

It saddens (but does not surprise) me to think of scientists thinking this way. You don’t need repeated experiments to express things in terms of probabilities. If you did, you couldn’t talk about the probability that Marco Rubio will be the next US President, the probability of any particular large asteroid striking the Earth, or even the probability that it will rain tomorrow. But you can.

Engber immediately follows this with something quite correct and important:

At best, climate scientists can look at how the Earth changes over time and build simplified computer models to be tested in the lab. Those models may be excellent facsimiles of the real thing, and they may provide important, believable evidence for climate change. But the stats that come out of them—the percentages, the confidence intervals—don’t apply directly to the real thing.

It is true that a model is different from the system being modeled, and that you can’t simply take the probability that something happens in the model to be the same as the probability that it will happen in real life. In reality, you have to do a more complicated sum:

Probability that X will happen
= (Probability that my model accurately reflects reality) × (Probability that X will happen, according to my model)
+ (Probability that my model does not accurately reflect reality) × (Probability that X will happen, given that the model is wrong)

Three of the four terms in that expression are things that you have to estimate subjectively (although two of them add to 1, so there are really only two things to be estimated). Your model (probably a great big computer simulation) tells you the other one (Probability that X will happen, according to my model).
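The bookkeeping itself is trivial; the hard part is coming up with the subjective inputs. Here’s a minimal sketch with made-up numbers (none of these values come from the IPCC or from any actual climate model):

```python
# Law of total probability, with made-up subjective inputs.
p_model_right = 0.9      # subjective: how much I trust the model
p_x_if_right = 0.95      # this one comes from the model itself (e.g., a big simulation)
p_x_if_wrong = 0.5       # subjective: my guess at what happens if the model is badly off

p_x = p_model_right * p_x_if_right + (1 - p_model_right) * p_x_if_wrong
print(p_x)               # 0.905 with these particular inputs
```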

The fact that you have to make subjective estimates in order to say interesting things about the real world is sad, but there’s nothing to be done about it.

People who don’t like the Bayesian approach to scientific reasoning use this subjectivity as a reason to disparage it, but that’s an error. Whether you use the word “probability” or not, your degree of belief in most interesting scientific questions is somewhere between complete certainty and complete disbelief. If someone asks you to characterize the degree to which you believe, say, that human activity is warming the planet, your answer will depend on your (subjective) degree of belief in a bunch of other things (how likely the scientists are to have done their calculations correctly, how likely it is that there’s a massive conspiracy to hide the truth, etc.). The fact that Bayesian methods incorporate this sort of subjectivity is not a flaw in the methods; it’s inevitable.

Some people seem to think that you shouldn’t use the precise-sounding mathematical machinery of probability theory in situations involving subjectivity, but that has never made any sense to me: you can’t avoid the subjectivity, so you’re choosing between precise-and-subjective and vague-and-subjective. Go with precise every time.

Engber has a nice discussion of the distinction between “statistical” and “subjective” notions of probability, including some history that I didn’t know. In particular, Poisson seems to have been the first to recognize this distinction, in the early 1800s, referring to the two as chance and raison de croire (reason to believe). Some people are queasy about using actual numbers for the second category, as the IPCC is doing, but they shouldn’t be.

Here’s Engber being very very right:

The process of mapping judgments to percentages has two immediate benefits. First, there’s no ambiguity of meaning; politicians and journalists aren’t left to make their own judgments about the state of the science on climate change. Second, a consistent use of terms makes it possible to see the uptick in scientific confidence from one report to the next; since 2001, we’ve gone from “likely” to “very likely,” and from 66 percent to 90 percent.

But I think he goes astray here:

But the new rhetoric of uncertainty has another effect—one that provides less clarity instead of more. By tagging subjective judgments with percent values, the climatologists erode the long-standing distinction between chance and raison de croire. As we read the report, we’re likely to assume that a number represents a degree of statistical certainty, rather than an expert’s confidence in his or her opinion. We’re misled by our traditional understanding of percentages and their scientific meaning.

To the charge that the IPCC is eroding the distinction, I say “Erode, baby, erode!”  Raisons de croire are probabilities, and we shouldn’t be shy about admitting it.

Engber’s final sentence:

However valid its conclusions, the report toys with our intuitions about science—that a number is more precise than a word, that a statistic is more accurate than a belief.

I don’t understand the first objection. A number is more precise than a word. I’m missing the part where this is a bad thing. The last clause is so muddled that I don’t know how to interpret it: a probability (which I think is what he means by “statistic”) is a way of characterizing a belief.

 

UR Observatory enters the 21st century

with a Facebook page and a Twitter feed.

We’ll be posting announcements of public observing nights in both places, so if you want to know when things are going on at the telescope, follow us on Twitter and/or sign up to get notifications on Facebook. (If I’m not mistaken, to do the latter you want to go to the Facebook page and check the “Get Notifications” line under the Like button.)

Tell your friends!

Pinker on “scientism”

Steven Pinker’s got an essay in the New Republic headlined Science Is Not Your Enemy: An impassioned plea to neglected novelists, embattled professors, and tenure-less historians. Pinker is a great science writer — I particularly admire his first general-audience book, The Language Instinct — but the more polemical he gets, the less I like his writing. Although I agree with some of what I think he’s trying to say in this essay, it strikes me as a largely unhelpful contribution to the endless science vs. humanities discussion.

For one thing, although Pinker claims to be arguing for a truce, he seems to go out of his way to say things that fire up the scientist camp and irritate the humanities camp. He also seems to be arguing against straw men a lot of the time. Massimo Pigliucci does a good job detailing some of these problems.

Pinker seems to be claiming that scientists are themselves the victims of straw-man arguments: when charges of “scientism” (a word used to bash overweening scientists) get leveled, he claims, the word often seems to refer to extreme positions that no scientist actually holds. Sometimes, that’s probably true. In fact, as Sean Carroll suggests, it might be best to retire the word altogether, since its meaning is unclear and it functions more as a scare term than a useful descriptor. But at least some vocal scientists do believe that science can and will extend beyond its traditional boundaries, in ways that other people disagree with. Those claims might be right or they might be wrong, but they are out there.

Take Sam Harris as an example. He wrote a book subtitled “How science can determine human values.”  From Harris’s web page:

In this explosive new book, Sam Harris tears down the wall between scientific facts and human values, arguing that most people are simply mistaken about the relationship between morality and the rest of human knowledge. Harris urges us to think about morality in terms of human and animal well-being, viewing the experiences of conscious creatures as peaks and valleys on a “moral landscape.” Because there are definite facts to be known about where we fall on this landscape, Harris foresees a time when science will no longer limit itself to merely describing what people do in the name of “morality”; in principle, science should be able to tell us what we ought to do to live the best lives possible.

Whether or not Harris is right about this (I think he’s wrong), he is making a claim that lots of people find preposterous. Whether or not you like the word “scientism,” you can’t deny that he’s claiming a mandate for science that goes far beyond what many people think correct. It’s simply not the case that the people who accuse Harris of “scientism” are attributing to him extreme positions that he doesn’t hold.

It’s not every day that I find myself agreeing with Ross Douthat, but I think that his view of Pinker’s essay has a lot that’s right:

If this is scientism then obviously no sensible person should have a problem with it. But the “boo-word” version of the phenomenon — the scientism that makes entirely unwarranted claims about what the scientific method can tell us, wraps “is” in the mantle of “ought” and vice versa, and reduces culture to biology at every opportunity — is much easier to pin down than Pinker suggests.

This is an impressively swift march from allowing, grudgingly, that scientific discoveries do not “dictate” values to asserting that they “militate” very strongly in favor of … why, of Steven Pinker’s very own moral worldview! You see, because we do not try witches, we must be utilitarians! Because we know the universe has no purpose, we must imbue it with the purposes of a (non-species-ist) liberal cosmopolitanism! Because of science, we know that modern civilization has no dialectic or destiny … so we must pursue its “unfulfilled promises” and accept its “moral imperatives” instead!

Like Sam Harris, who wrote an entire book claiming that “science” somehow vindicates his preferred form of philosophical utilitarianism (when what he really meant was that if you assume utilitarian goals, science can help you pursue them), Pinker seems to have trouble imagining any reasoning person disagreeing about either the moral necessity of “maximizing human flourishing” or the content of what “flourishing” actually means — even though recent history furnishes plenty of examples and a decent imagination can furnish many more.

One thing I did like about Pinker’s essay: he describes Rocks of Ages as Stephen Jay Gould’s “worst book.” I haven’t read all of Gould’s books, so I suppose I can’t confirm this with 100% certainty, but Gould’s idea of “non-overlapping magisteria,” laid out in this book, is one of his more stupid creations.

The Chronicle of Higher Ed. on UR’s first-year seminar program

The Chronicle of Higher Education ran a piece on the University of Richmond’s first-year seminar program. (Don’t know if it’s paywalled.) I’ve taught FYS several times, and I currently serve on the committee overseeing the program, so I was naturally interested in what they had to say. I certainly urge all of my faculty colleagues to read the piece, especially those of us on the committee, which will be performing a thorough review of the program in the fall.

The piece focuses on three seminars, taught by different faculty members on very different subjects. The reporter sat in on each of the seminars multiple times and gets quite specific about what went on in class. I’d be interested to know what others think, but overall I don’t think we come off very well from these descriptions. I have no way of knowing whether the article’s description of the seminars does them justice, and I shudder to think of how my course might have come out looking under the same treatment, so I don’t mean this statement as a criticism of the three courses or their instructors; it’s just the impression I got as a reader.

In addition to descriptions of the three courses, the article makes some general observations. This struck me as the most provocative section:

One of the chief strengths of seminars is that they serve as pedagogical laboratories for faculty members, says Jennifer R. Keup, director of the National Resource Center for the First-Year Experience and Students in Transition, at the University of South Carolina.

But the variability of seminars, she says, is a weakness.

An analysis produced by the center measured the extent to which teaching practices that are generally accepted as strong were present in different types of seminars. Results suggested that those with varying content had relatively low levels of academic challenge and clarity of teaching, in part because their general quality varied so much, says Ms. Keup.

“When content is left up to the instructor, you cede a lot of control,” she says, which “makes it harder to engage and police the degree to which they’re doing them well.”

There’s no surer way to enrage faculty members than to suggest that it would be better to take control of the content of their courses away from them. And in fact, I think that the variety of course topics is a great strength of our program, particularly in comparison with its predecessor, a “core course” taught in many sections, by faculty members from many disciplines, with a common syllabus. (I wrote down some thoughts on this years ago, when we were considering making this transition. In short, I thought — and continue to think — that the old core course, in which instructors taught subjects in which they had no expertise, was something akin to educational malpractice.)

I certainly don’t deny that the problem of nonuniform levels of rigor is real. When we first adopted the first-year seminar program, people worried a lot about whether the seminars would set uniformly rigorous standards. The committee I currently serve on is tasked, among other things, with figuring out ways to ensure that they do, but it’s a hard problem, and I don’t think anybody knows how best to do it.

I don’t know any details of the study referred to in the above quote, but I have doubts about whether there’s necessarily a causal link between having courses with varying content and having courses with low / inconsistent standards. Anecdotally (for what little that’s worth), back in the old days of our core course, there was a great deal of variation in the standards set by different core instructors, although there’s no way of knowing whether it was greater than the variability in the current seminars.

I do worry that, in the first-year seminar program as it currently exists, we’re not doing enough to set uniformly high expectations, but I hope that we can figure out ways to do that while maintaining the program’s strengths — such as faculty teaching subjects in which they have expertise, and students choosing courses in subjects that they find interesting.

All-male physics departments are not necessarily more biased than other departments

I learned via my former student Brent Follin about a study performed by the American Institute of Physics on the distribution of women on the faculty of US physics departments. Inside Higher Ed has a summary.

The specific question they were looking at is whether physics departments whose faculty are 100% male are discriminating against women. From Inside Higher Ed:

More than one-third of physics departments in the United States lack a single female faculty member. That figure has been cited by some as evidence of discrimination. With women making up just 13 percent of the faculty members (assistant through full professors) in physics, could there be another explanation?

The authors, being physicists, answer this with a simulation. They show that, given that the overall percentage of female physics faculty members is so low, the number of all-male departments is not surprising. To be specific, if you start with a pool of candidates that’s 13% female and distribute them at random among physics departments (many of which have a pretty small number of faculty members), you’d get a distribution pretty similar to the actual one. In fact, you’d actually get more all-male departments under the random-distribution scenario.
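Here’s a rough sketch of the kind of simulation I have in mind. The department sizes below are invented for illustration; the real study used the actual sizes of US physics departments.

```python
import numpy as np

rng = np.random.default_rng(0)

frac_female = 0.13                          # fraction of the overall faculty pool
dept_sizes = rng.integers(2, 20, size=760)  # hypothetical department sizes

n_trials = 200
fractions_all_male = []
for _ in range(n_trials):
    n_all_male = 0
    for size in dept_sizes:
        # Fill each faculty slot at random from a pool that is 13% female.
        women = rng.random(size) < frac_female
        if not women.any():
            n_all_male += 1
    fractions_all_male.append(n_all_male / len(dept_sizes))

print(np.mean(fractions_all_male))          # typical fraction of all-male departments
```

If that number comes out comparable to (or larger than) the observed one-third, then the count of all-male departments tells you nothing beyond what the 13% figure already told you.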

Bizarrely, the original study says

Our results suggest that there is no bias against hiring women in the system as a whole.

This sentence strikes me as nearly the exact opposite of an accurate description. Here are two things that I think they could say:

  1. Our results do not show evidence either for or against the hypothesis of bias against hiring women in the system as a whole, because the study was not designed to answer this question.
  2. Our results show no evidence that all-male departments are more biased against hiring women than other departments.

Maybe it’s obvious what I mean here, but let me blather on a bit anyway.

The results show that, given the low percentage of women in the overall pool of people hired, you’d expect about as many all-male departments as there actually are (or even more). It doesn’t say anything about why the pool has so few women in the first place. That could be due to bias, or it could be due to other reasons: women might self-select out of careers in academic physics before they get to the faculty level, for instance, or women might be intrinsically less qualified than men (the latter is the “Larry Summers hypothesis,” which I emphatically do not endorse but which is a logical possibility [UPDATE: Phillip Helbig points out correctly that this is a mischaracterization of Summers. See his comment below.]). Since the study, by design, has nothing to say about why the pool skews so heavily male, it’s ridiculous to suggest that it supports any conclusion about whether there is bias in the “system as a whole.”

Here’s what the study does suggest: whatever factors are causing the overall pool to be heavily male, those factors are more or less evenly distributed across departments. If, for instance, some departments were heavily biased while others were strongly egalitarian, you’d expect women to be clustered in some departments and absent from others. The study shows that that’s not the case. If anything, the gender distribution is less clustered than you’d get by chance.

Despite my complaints, that’s an interesting fact to know. If you believe (as I do) that we could and should be doing better at getting women into physics and related fields, it makes a difference whether the problem lies in a few “bad apple” departments or is more evenly distributed. This data set suggests that it’s evenly distributed.

B modes detected!

A paper just came out describing the detection of B modes in the cosmic microwave background polarization. This is a great example of how specialized and technical science can get: to someone like me, who works on this stuff, it’s a big deal, but it takes quite a bit of effort to explain what those words even mean to a non-specialist. Peter Coles has a description. Here’s mine.

Let’s start with the cosmic microwave background, which is the oldest light in the Universe. It consists of microwaves that have been traveling through space ever since the Universe was about 1/30,000 of its present age. (If it’s made of microwaves, why did I call it light? Because microwaves are light! They’re just light with wavelengths that are too long for us to see.)

Maps of the varying intensity of this radiation across the sky give us information about what the Universe was like when it was very young. People have been studying these maps in detail for over 20 years, and were trying hard to make them for decades before that. For many years, studies of the microwave background focused on making and analyzing maps of the variations in intensity, but in recent years a lot of effort has gone into trying to map the polarization of the radiation.

Microwave radiation (like all light) consists of waves of electric and magnetic fields. Those fields can be oriented in different ways — for instance, a light wave coming right at you could have its electric field wiggling up and down or left and right. We say that this wave can be either vertically or horizontally polarized. Radiation can be unpolarized, meaning that the direction of the wiggles shifts around randomly among all the possibilities, but people predicted a long time ago that the microwave background radiation should be weakly linearly polarized, so that, in some parts of the sky, the electric fields tend to be oriented in one direction, while in other areas they tend to be oriented another way.  Lots of effort these days is going into detecting this polarization and mapping how it varies across the sky.

These polarization maps provide additional information about conditions in the young Universe, going beyond what we’ve gotten out of the intensity maps. In fact, if we can tease out just the right details from a polarization map, we can learn about entirely new phenomena that might have taken place when the Universe was extremely young. In particular, some varieties of the theory known as cosmic inflation predict that, when the Universe was very young, it was filled with gravitational waves (ripples in space). These waves would leave a characteristic imprint in the microwave background polarization. If you saw that imprint, it’d be totally amazing.

The problem is that the really amazing things you’d like to see in the polarization maps are all very very faint — much fainter than the “ordinary” stuff we expect to see. The “ordinary” stuff is still pretty cool, by the way — it’s just not as cool as the other stuff. Fortunately, we have one more trick we can use to see the exotic stuff mixed in with the ordinary stuff. There are two kinds of patterns that can appear in a polarization map, called E modes and B modes (for no particularly good reason). The E modes have a sort of mirror-reflection symmetry, and the B modes don’t. A lot of the “ordinary” polarization information shows up only in the E modes, so if you can measure the B modes, you have a better chance of measuring the exotic stuff.
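For the curious, here’s roughly what “extracting the B modes” looks like in the simplest case, the flat-sky approximation, where E and B are just a rotation of the Fourier transforms of the Stokes Q and U maps by twice the angle of the wavevector. This is only a sketch: real analyses work on the curved sky and have to contend with noise, masks, and beam effects.

```python
import numpy as np

def eb_from_qu(Q, U, pixel_size_rad):
    """Flat-sky E/B decomposition of square Stokes Q/U maps."""
    n = Q.shape[0]
    ell = 2 * np.pi * np.fft.fftfreq(n, d=pixel_size_rad)
    lx, ly = np.meshgrid(ell, ell)            # wavevector components
    phi = np.arctan2(ly, lx)                  # angle of the Fourier wavevector
    Qk, Uk = np.fft.fft2(Q), np.fft.fft2(U)
    Ek = Qk * np.cos(2 * phi) + Uk * np.sin(2 * phi)
    Bk = -Qk * np.sin(2 * phi) + Uk * np.cos(2 * phi)
    return np.fft.ifft2(Ek).real, np.fft.ifft2(Bk).real
```

In this idealized setup, a map containing only the “ordinary” E-type pattern comes back with B equal to zero (up to numerical noise), which is the sense in which B modes isolate the more exotic signals.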

That’s why people have spent a long time trying to figure out how to (a) make maps of the polarization of the microwave background radiation and (b) extract the “B modes” from those maps. The new paper announces that that goal has been accomplished for the first time.

This detection does not show anything extraordinary, like gravitational waves crashing around in the early Universe. The B modes they found matched what you’d expect to find based on what we already knew. But it shows that we can do the sorts of things we’ll need to do if we want to search for the exotica.

The fact that this experiment found “only” what we expected may sound kind of uninteresting, but that’s just because we’ve gotten a bit jaded by our success up to this point. The signal that these people found is the result of photons, produced in the early Universe, following paths through space that are distorted by the gravitational pull of all the matter in their vicinity. The fact that we can map the Universe well enough to know what distortions to expect, and then go out and see those distortions, is amazing. It’s another bit of evidence that we really do understand the structure and behavior of the Universe over incredibly large scales of distance and time.

Incidentally, the lead author on this paper, Duncan Hanson, is a former collaborator of mine. When he was an undergraduate, he and I and Douglas Scott coauthored a paper. It’s great to see all the terrific work he’s done since then.

(Just to avoid any misunderstanding, as much as I would like to, I can’t claim Duncan for the University of Richmond. Although he and I worked together when he was an undergraduate, he wasn’t an undergraduate here. He was at the University of British Columbia.)

Predictions are hard, especially about the future

Although I’m geeky in a bunch of different ways, I’ve never been a big science fiction reader. Lately, though, I’ve been trying out some of the classics of the genre. I just finished Asimov’s Foundation trilogy, which I enjoyed quite a bit. I think it gets better as it goes along: things get more morally ambiguous and interesting in the last part.

It’s hard (at least for me) to read futuristic stories without evaluating them for accuracy or plausibility. That’s not necessarily a fair form of criticism: the author’s goal is to tell a good story, regardless of whether it’s accurate. But it’s an irresistible game to play.

In that spirit, this description of a new navigation system, just implemented in the starships of 25,000 years in the future, struck me as amusing:

Bail Channis sat at the control panel of the Lens and felt again the involuntary surge of near-worship at the contemplation of it. He was not a Foundation man and the interplay of forces at the twist of a knob or the breaking of a contact was not second nature to him. Not that the Lens ought quite to bore even a Foundation man. Within its unbelievably compact body were enough electronic circuits to pinpoint accurately a hundred million separate stars in exact relationship to each other. And as if that were not a feat in itself, it was further capable of translating any given portion of the Galactic Field along any of the three spatial axes or to rotate any portion of the Field about a center.

So this amazing device can manipulate a few gigabytes of data, applying translations and 3-D rotations to it? That would be a difficult job for your phone, but it’s utterly trivial for, say, your Xbox, let alone the high-end computers of today. The Foundation has nuclear power plants you can carry around in your pocket, but its computing expertise is stuck in the 20th century.
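For scale, here is more or less the Lens’s entire computational job, sketched in a few lines of numpy. A hundred million 3-D positions is a couple of gigabytes in double precision, and rotating and translating them takes on the order of a second on an ordinary modern machine (assuming it has enough memory; the specific angle and offset below are obviously just for illustration).

```python
import numpy as np

n_stars = 100_000_000                 # "a hundred million separate stars"
stars = np.random.rand(n_stars, 3)    # ~2.4 GB of coordinates

theta = np.radians(30)                # rotate 30 degrees about the z axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
offset = np.array([1.0, 2.0, 3.0])    # an arbitrary translation

transformed = stars @ R.T + offset    # the whole "Galactic Field," translated and rotated
```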

Asimov goes on to describe how a human operator can use this to figure out where he is in the Galaxy, by looking at the patterns of stars and matching them up by eye, in “less than half an hour.” Of course, this would, even today, be trivial to do in software in a fraction of a second.

(In fairness, I should mention the possibility that “million” is a misprint for “billion.” The number of stars in the Galaxy is of order 100 billion, as Asimov surely knew. If you make that switch, it becomes a more daunting task, although still perfectly manageable with good present-day computers.)

Again, I don’t mean this as a serious criticism of Asimov. I just thought it was amusing.

If you really want to criticize Asimov for mis-guessing about the future, you’ll find much more fertile ground in his depictions of how people behave. Asimov’s characters act an awful lot like men (pretty much always men) of the mid-20th century. They spend an astonishing amount of time reading newspapers and offering each other cigars, for instance.

One more thing. According to Wikipedia,

In 1965, the Foundation Trilogy beat several other science fiction and fantasy series (including The Lord of the Rings by J. R. R. Tolkien) to receive a special Hugo Award for “Best All-Time Series.” It is still the only series so honored. Asimov himself wrote that he assumed the one-time award had been created to honor The Lord of the Rings, and he was amazed when his work won.

I’ll reveal my geek allegiance: Asimov was right to be amazed. The Foundation books are good, but the idea of them winning in a head-to-head competition with Lord of the Rings is absurd.

Anyone speak Ukrainian?

About 17 years ago, I wrote an explanation of some stuff about black holes. To my astonishment, people still read it from time to time.

Recently I got an email from someone asking if he could translate it into Ukrainian. I said yes, and here’s the result.

Not speaking Ukrainian, I can’t vouch for the translation’s accuracy. Google Translate shows it to have a bunch of the right words in the right places, but it also has things like this:

 Beyond the horizon, the escape velocity less than the speed of light, so if we fire your rockets hard enough, we can give them, and enough energy to pita bread.

There are two different ways prinaymni viznachiti how something is bicycle.

The sun, for example, has a radius of about 700,000 km, and so, that nadmasivna black hole has a radius of only about chotiroh times Buy it, than sun.

Let A valid, that we otrimuyete in thy spaceship and send directly to the ego million solar masses black hole at the center of our galaxy. (In fact, there is some debate about the our galaxy provides a central black hole Admissible Ale, it does a lot of Denmark at the time.) Since far from the black hole, we just vimknit rockets and coast in. What’s going on?

Chi is not a black hole if viparuyetsya under me, woo something I achieve?

Anyway, if you happen to speak Ukrainian, check it out.

 

Before asking why, ask whether

For some reason, I can’t resist hate-listening to Frank Deford’s sports commentaries on NPR’s Morning Edition, despite the facts that (a) I’m not much of a sports fan and (b) he has an incredibly irritating, self-important style, intended primarily to convince the listener of his own erudition.

(Incidentally, Deford is going to share a stage with our own University of Richmond President, Ed Ayers, as a recipient of a National Endowment for the Humanities Medal from President Obama. Congratulations, President Ayers!)

From his latest:

More recently, we’ve tended to excuse the virtual cavalcade of criminal actions committed by players away from the gridiron. Why, 29 NFL players have been arrested just since the Super Bowl … We need to somehow clean the Aegean stables of the stink of violence.

(Speaking of pointless erudition, and leaving myself open to pot-kettle comparisons, let me note that I suspect that an editor, rather than Deford himself, is to blame for writing “Aegean” instead of “Augean.” Deford pronounces the word with a clear “Au” sound.)

Deford starts from the premise that NFL players are thugs and criminals, based on this evidence, but he never asks the obvious question: Is 29 a large number of arrests? There are, after all, a lot of NFL players.

The answer, it turns out, is no. NFL players get arrested about half as often as the US population as a whole. If you compare them to men of similar age, rather than to the population as a whole, the difference is even larger: NFL players are arrested about 1/5 as often as their peers. The BBC programme More Or Less has the details.

In short, we seem to be in the middle of a moral panic over NFL players’ thuggery at the moment, despite the fact that the players are disproportionately law-abiding. Odd.

Of course, this comparison isn’t perfect. For one thing, NFL players are all rich (once they’ve got their contracts — not necessarily before that, of course), and crime tends to be concentrated among the poor. If you compare arrest rates for NFL players to arrest rates for people with similar incomes, presumably you’d get a different answer.

Also, the data used by More Or Less concerns all arrests. Since the chief concern seems to be whether NFL players are disproportionately violent, it might be more relevant to look at just arrests for violent crimes. Let’s give it a try.

Combining FBI and census data, it looks like about 0.5% of young men (ages 20–35) are arrested for a violent crime each year. There are over 1,600 NFL players in total, so if NFL players are typical of their age cohort, you’d expect about 8 players per year to be arrested for a violent crime. According to the San Diego Union-Tribune’s NFL Arrests Database, the actual number is quite close to this. I can’t tell exactly, because I don’t have the legal expertise to know which arrests count as “violent crimes” under the FBI’s definition, but looking at the past year and counting borderline cases, I can’t make the number come out above 10.
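As a quick sanity check on whether 10 arrests is meaningfully more than the expected 8, you can treat the count as Poisson (that modeling choice is mine, not anything in the FBI or Union-Tribune data):

```python
from scipy.stats import poisson

expected = 0.005 * 1600    # ~8 violent-crime arrests per year if players are typical
observed = 10              # the most I can squeeze out of the arrests database

# Probability of seeing 10 or more arrests by chance when 8 are expected.
print(poisson.sf(observed - 1, expected))   # roughly 0.3; nowhere near significance
```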

So, based on this data set, there’s certainly no significant evidence that NFL players are more violent than their peers.

This is not, of course, to excuse anybody’s bad behavior. In particular, the largest category of arrests (both for NFL players and for young men as a whole) is drunk driving, which is not counted as a violent crime by the FBI but is still an appalling thing to do. But when considering what to do about problems like this, it’s better to rely on actual data than to decide, on the basis of confirmation bias, that a certain population is disproportionately violent.