Archive for September, 2013

Steven Pinker initiates a death spiral of stupidity

Saturday, September 14th, 2013

I didn’t think much of Steven Pinker’s New Republic essay on “scientism” (i.e., the tired old science-vs-humanities squabble), but it did initiate a stream of amusingly stupid responses.

First came Leon Wieseltier’s response, also in the New Republic, which was a stellar example of missing the point. Although he claims to be arguing against Pinker, he spends most of his time railing against a position that Pinker very explicitly disavows (namely that scientific understanding is the only kind of understanding worth having, or that the sciences can and should subsume the humanities). By far the most interesting question raised by Wieseltier’s piece is whether he can’t understand what he reads or is deliberately misrepresenting Pinker.

Wieseltier also wins stupidity points for repeatedly using the absurd word “scientizer.”

But by far the most amusingly silly response is Daniel Dennett’s defense of Pinker. Dennett thinks that Wieseltier’s arguments are too silly to deserve attention, so he devotes himself to sarcasm. I have to admit that I liked this line,

Pomposity can be amusing, but pomposity sitting like an oversized hat on top of fear is hilarious.

but (a) this sort of thing is clearly of precisely no value in convincing anyone of anything they don’t already believe, and (b) when it comes to pomposity (although perhaps not fear), Dennett’s at least a bit vulnerable to a pot-kettle problem.

There’s just one thing in this whole business that I think is actually a bit interesting. On the subject of science vs. religion, Wieseltier says

Pinker tiresomely rehearses the familiar triumphalism of science over religion: “the findings of science entail that the belief systems of all the world’s traditional religions and cultures … are factually mistaken.” So they are, there on the page; but most of the belief systems of all the world’s traditional religions and cultures have evolved in their factual understandings by means of intellectually responsible exegesis that takes the progress of science into account; and most of the belief systems of all the world’s traditional religions and cultures are not primarily traditions of fact but traditions of value; and the relationship of fact to value in those traditions is complicated enough to enable the values often to survive the facts, as they do also in Aeschylus and Plato and Ovid and Dante and Montaigne and Shakespeare.

Wieseltier has one valid and important point here. The more strident anti-religion types love to argue as if all religious people were knuckle-dragging fundamentalists who think the Earth is 6000 years old. This is simply not the case, and by pretending it is, such people are attacking a straw man. But Wieseltier himself shades the truth in the opposite direction when he says

Most of the belief systems of all the world’s traditional religions and cultures are not primarily traditions of fact but traditions of value

I’m no expert on the sociology of religion, but I’m confident that this statement is not true, at least not in any sense that’s useful for thinking about the relationship between science and religion in contemporary America.

As Wieseltier is no doubt aware, there are lots and lots of people in the world for whom statements of fact, such as “A particular man was born of a virgin, died, and later rose from the dead” are extremely important parts of their religious tradition. Following Stephen Jay Gould’s infamous idea of non-overlapping magisteria, Wieseltier simply defines religious tradition in a way that does not include such people.

There are indeed many religious people for whom questions of value and meaning, rather than questions of fact, are the only things that matter about their religion. I know quite a few of them, and I wouldn’t be surprised if the vast majority of the religious people in Wieseltier’s social circle are in this category. But there are many people — at a guess, and without any data, I’d say far more people, in the US at any rate — for whom this isn’t true. For every Christian who says that it’s not important whether the Resurrection actually happened (I know of at least one Episcopal priest who says this), I bet there are a whole bunch who say that anyone who thinks that way isn’t really a Christian.

I can understand the intellectual appeal of defining the problems of science and religion out of existence, but if you’re interested in understanding how the two cultures actually interact in present-day society, this solution won’t do.

 

Parallax

Friday, September 13th, 2013

I like to brag about my students, so let me point out that, when you search YouTube for “parallax,” the number one hit is a video made by UR student Eric Loucks as a part of my first-year seminar course, Space is Big.

Great job, Eric!


Either Scientific American or I don’t understand the word “theory”

Sunday, September 8th, 2013

Scientific American has an article about 7 Misused Science Words. Number 2 is “theory”:

Part of the problem is that the word “theory” means something very different in lay language than it does in science: A scientific theory is an explanation of some aspect of the natural world that has been substantiated through repeated experiments or testing. But to the average Jane or Joe, a theory is just an idea that lives in someone’s head, rather than an explanation rooted in experiment and testing.

Although of course I applaud the broader point they’re making — saying something is “just a theory,” as, e.g., anti-evolution types do, isn’t an argument against its validity — this doesn’t sound right to me. A theory may have been experimentally substantiated, but it need not have been.

Is string theory (which is notoriously untested by experiment) not a theory? Was general relativity not a theory during the several decades during which it had minimal experimental support?

The article supports this definition with a link to a post at something called Livescience, which says (in its entirety)

A scientific theory summarizes a hypothesis or group of hypotheses that have been supported with repeated testing. If enough evidence accumulates to support a hypothesis, it moves to the next step—known as a theory—in the scientific method and becomes accepted as a valid explanation of a phenomenon.

In my experience, this is not how scientists use the word. I know lots of physicists who come up with theories willy-nilly, and don’t feel the need to wait for experimental evidence before labeling them “theories.”

In the unlikely event that any creationists read this, let me reiterate: I am not saying that a theory necessarily lacks experimental support, so saying something is “just a theory” doesn’t constitute a logical argument against it. In particular, Darwinian evolution is a theory, which happens to be buttressed by phenomenal amounts of evidence.

Granted, this is pretty much just a quibble. I’m just easily irritated by cartoon descriptions of “the scientific method,” formed without paying much attention to what scientists actually do and then glibly repeated by scientists.

Politics makes you stupid, but only if you’re smart

Friday, September 6th, 2013

Mother Jones has a writeup of a very interesting psychology study on how political views affect people’s ability to draw mathematical conclusions.

A group of psychologists gave a bunch of people a math problem: a table of counts from a fictional skin-rash experiment, showing how many patients’ rashes got better or worse with and without a treatment, from which the subjects had to decide whether the treatment worked.

Lots of people get this wrong. (That’s not the surprising part.)
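The trap is easy to state in code. Here’s a sketch with illustrative counts (these are not necessarily the study’s actual numbers; the 2×2 structure is what matters): the tempting move is to compare the raw “got better” counts, but the correct move is to compare the improvement rates.

```python
# Illustrative 2x2 counts for a fictional skin-treatment experiment
# (assumed for illustration, not the study's actual figures).
treated_better, treated_worse = 223, 75   # patients who used the treatment
control_better, control_worse = 107, 21   # patients who did not

# Naive reading: 223 > 107, so the treatment "obviously" works.
naive_verdict = treated_better > control_better

# Correct reading: compare improvement *rates*, not raw counts.
treated_rate = treated_better / (treated_better + treated_worse)
control_rate = control_better / (control_better + control_worse)
correct_verdict = treated_rate > control_rate

print(f"naive verdict: {naive_verdict}")
print(f"treated rate: {treated_rate:.2f}, control rate: {control_rate:.2f}")
print(f"correct verdict: {correct_verdict}")
```

With these counts the naive comparison says the treatment works, while the rates (about 0.75 vs. 0.84) say the opposite — exactly the kind of reversal that makes the problem a good test of numeracy.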

Here’s where things get more interesting. They changed the problem into one about gun control: leaving the numbers exactly the same, they posed the problem as one about whether cities that enact gun control laws saw an increase or a decrease in crime. They also surveyed the participants to determine (a) their political views and (b) some sort of measure of their numeracy. In both the skin-rash and gun-control cases, they had two versions of the question, in which all that was switched was the right answer — that is, they interchanged the labels “Rash got better” and “Rash got worse,” leaving all the numbers unchanged.

The results:

[Graphs omitted: probability of a correct answer vs. numeracy, skin-rash version on top, gun-control version on the bottom.]

The top graph is for the skin-rash version of the study. The bottom graph is the gun-control version. In time-honored fashion, red is conservative and blue is liberal.

No big surprises in the skin-rash results (except possibly the suggestion that the curves rise a bit at very low numeracy, but I don’t know how significant that is). The action is all in the lower graph, which indicates the following:

  1. Both liberals and conservatives are more likely to get the answer right when it accords with their political preconceptions.
  2. For liberals, there’s a large difference only among the highly numerate — the innumerate do equally badly whichever political valence they see.

Fact 1 is interesting but not astonishing. Fact 2 is the really fascinating one.

The proposed explanation:

Our first instinct, in all versions of the study, is to leap instinctively to the wrong conclusion. If you just compare which number is bigger in the first column, for instance, you’ll be quickly led astray. But more numerate people, when they sense an apparently wrong answer that offends their political sensibilities, are both motivated and equipped to dig deeper, think harder, and even start performing some calculations—which in this case would have led to a more accurate response.

“If the wrong answer is contrary to their ideological positions, we hypothesize that that is going to create the incentive to scrutinize that information and figure out another way to understand it,” says Kahan. In other words, more numerate people perform better when identifying study results that support their views—but may have a big blind spot when it comes to identifying results that undermine those views.

What’s happening when highly numerate liberals and conservatives actually get it wrong? Either they’re intuiting an incorrect answer that is politically convenient and feels right to them, leading them to inquire no further—or else they’re stopping to calculate the correct answer, but then refusing to accept it and coming up with some elaborate reason why 1 + 1 doesn’t equal 2 in this particular instance. (Kahan suspects it’s mostly the former, rather than the latter.)

The Mother Jones article suggests that “both liberals and conservatives” show effect number 2 — highly numerate people being more biased than less numerate people — but from the graph the effect seems much smaller for the conservatives. It seems to me that the liberals are the ones whose behavior needs explaining.

With that caveat, this explanation may well be right, but I wonder if there might be other effects. Maybe highly numerate liberals differ from less numerate liberals in some other way, such as more strongly held political views. I’m not necessarily espousing that specific explanation; I’m just wondering. I’d be interested to hear other ideas.

Left-handed people don’t die young

Sunday, September 1st, 2013

The New Yorker has a rundown of the studies looking for links between left-handedness and various other traits. Contrary to beliefs of 100 years ago, left-handed people aren’t more likely to be criminals or schizophrenic, but we do do better, on average, in certain kinds of cognitive tests. So there.

A long time ago, I used to hear about how, as a left-hander, I should expect to die young and poor. As the New Yorker piece points out, that was debunked a long time ago. Studies showing that left-handers die younger than right-handers reached that conclusion due to a problem well known to astrophysicists: selection bias.

Some stars are giant stars, and some stars aren’t. Suppose that you tried to figure out what percentage of stars were giants. If you do a survey of the nearest stars, you get one answer, but if you do a survey of very distant stars, you get a different answer: in the latter case, you find a higher percentage of giants. You might conclude that something in our local environment stops giant stars from forming. But in fact there’s a different explanation: it’s easier to spot giant stars than small stars. When we’re looking nearby, we see all the stars, but when we’re looking far away, we miss some, and the ones we miss tend not to be giants.

The same thing happened with studies of left-handed people. When we look at young people, we find all the left-handers, but when we look at old people, we miss some, because in the past left-handed people used to be “converted” to right-handedness. (This happened to my uncle, for instance.) So when you look at a sample of old people, you think that a bunch of lefties have gone missing. If you didn’t take into account the fact that lefties used to be converted, you’d think that that meant that lefties die young.
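The mechanism is easy to demonstrate with a toy simulation (a sketch with made-up rates, not the actual study data): give everyone the same lifespan distribution regardless of handedness, but “convert” a fraction of left-handers that grows with how long ago they were born, and the recorded lefty rate among the old collapses even though nobody died early.

```python
import random

random.seed(0)

# Toy model: handedness has NO effect on lifespan here, but people born
# long ago (who are now old) were often "converted" to right-handedness
# and so get recorded as right-handed.
population = []
for _ in range(100_000):
    age = random.randint(0, 99)
    born_lefty = random.random() < 0.10          # true rate at every age
    converted = born_lefty and random.random() < age / 100
    population.append((age, born_lefty and not converted))

young = [lefty for age, lefty in population if age < 30]
old = [lefty for age, lefty in population if age >= 70]
young_rate = sum(young) / len(young)
old_rate = sum(old) / len(old)

print(f"recorded lefties among the young: {young_rate:.1%}")
print(f"recorded lefties among the old:   {old_rate:.1%}")
```

The old cohort shows far fewer recorded lefties than the young one, so a naive analyst would conclude that lefties die young — when in fact the “missing” lefties were converted, not dead.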