I thought that Canadians were supposed to be the reasonable ones

I haven’t been able to work up much umbrage over the proposed move by Republican Congressman Lamar Smith to change the way National Science Foundation grants are awarded. I think the proposed changes are a bad idea, but that they would have much less practical effect than some people claim.

Via Phil Plait, I see that the Canadian government seems to be actually doing what people hyperbolically claim the Smith bill would do:

The government of Canada believes there is a place for curiosity-driven, fundamental scientific research, but the National Research Council is not that place.

“Scientific discovery is not valuable unless it has commercial value,” John McDougall, president of the NRC, said in announcing the shift in the NRC’s research focus away from discovery science solely to research the government deems “commercially viable”.

I don’t know enough about Canadian science, or Canadian government workings in general, to be sure, but this sounds exactly like what Lamar Smith wants to do in the US. The difference is that it appears to be actually happening in Canada, whereas even if Smith’s bill were to pass, I don’t think it would have the effect he’s aiming for.

Perhaps the most shocking thing about this is that it’s being done with the approval of the head of the National Research Council, who actually said

“Scientific discovery is not valuable unless it has commercial value.”

It’s easy to imagine this sentence coming out of the mouth of a member of the US Congress, but not out of the mouth of the head of, say, the National Science Foundation.

Phil Plait on why this is wrongheaded:

This is monumentally backwards thinking. That is not the reason we do science. Economic benefits are results of doing research, but should not be the reason we do it. Basic scientific research is a vast endeavor, and some of it will pay off economically, and some won’t. In almost every case, you cannot know in advance which will do which.

In the 19th century, for example, James Clerk Maxwell was just interested in understanding electricity and magnetism. He didn’t do it for monetary benefit, to support a business, or to maximize a profit. Yet his research led to the foundation of our entire economy today. Computers, the Internet, communication, satellites, everything you plug in or that uses a battery, stem from the work he did simply because of his own curiosity. I strongly suspect that if he were to apply to the NRC for funding under this new regime, he’d be turned down flat. The kind of work Maxwell did then is very difficult to do without support these days, and we need governments to provide that help.


The New Yorker spread the “Lamar Smith wants to kill peer review” rumor

The New Yorker has a piece up about how Lamar Smith’s new bill will get rid of peer review at the National Science Foundation. As far as I can tell, the bill would do no such thing.

Currently, proposals are evaluated through a traditional peer-review process, in which scientists and experts with knowledge of the relevant fields evaluate the projects’ “intellectual merits” and “broader impacts.” Peer review is a central tenet of modern academic science, and, according to critics, the new bill threatens to supersede it with politics.

This paragraph would be fine if it were followed by a clear statement that the last sentence, while it may be true “according to critics,” is not, you know, actually true. There’s nothing in the text of the bill that can reasonably be described in this way.

If I were full of nostalgia for the glory days of the New Yorker, I’d say something at this point about William Shawn spinning in his grave, but I’m not, so I won’t.

(Just to shore up my anti-Republican bona fides, let me repeat some things from my earlier post. Despite the fact that this bill is being mischaracterized by its critics, it’s still a bad idea, and Lamar Smith and many of his fellow Congressional Republicans are indeed Enemies of Science.)

That Creationist science quiz

Phil Plait has a pretty good jeremiad on the infamous Creationist 4th-grade science quiz:

The thing that gets me is not the issue of legality here, nor necessarily who is promoting it. What really makes my heart sink is the reality that this is actually being taught to young children. Kids are natural scientists; they want to see and explore and categorize and ask “why?” until they understand everything. And we, as adults, as caretakers, have a solemn responsibility to nurture that impulse and to answer them in as honest a way as possible, encouraging them to seek more answers—and more questions—themselves. That’s how we learn.

But this? This isn’t learning. It’s indoctrination. It’s the exact opposite of inquisitiveness: it’s children being told what the creationists want the answer to be, despite the evidence. And it’s not just that these children are being told something that’s wrong; it’s that they are also told to simply accept it and deny the actual evidence they come across.

If you haven’t seen the quiz, you can go to Snopes, among other places, for details. Snopes is of course the cataloguer and debunker of urban legends. They have a piece about this because, when the quiz first started circulating, lots of people thought it was a hoax. It always seemed utterly plausible to me that it was real, unfortunately.

In addition to lamenting, Plait provides some links to useful resources about evolution and creationism, including one from Nature called 15 Evolutionary Gems, which I hadn’t seen before and which looks good.

On a related note, Eugenie Scott, longtime head of the National Center for Science Education, is retiring. Plait again:

Genie has been a tireless fighter against nonsense; for years as head of NCSE she kept up with and kept after people who try to wedge religious indoctrination into schools. Whether it was creationism or its mutant offspring Intelligent Design, she was always there, in the courtroom or on stage or writing about it. The NCSE is a shining example of what needs to be done in this never-ending affair.

That reminds me that I haven’t sent any money to the NCSE for a while. I’m going to do that. You should too.

Does Lamar Smith really want to get rid of peer review?

Most of the time, when someone gives a post a title that’s a question, they have an answer in mind that they’re trying to convince you of. This one is a sincere question, to which I don’t know the answer.

Science bloggers are erupting at the moment over a draft bill proposed by Lamar Smith about National Science Foundation funding. Here’s Phil Plait, for example:

To start, Rep. Lamar Smith (R-Texas), who is a global warming denier, by the way, is the head of the House Committee on Science, Space, and Technology. He has recently decided that the National Science Foundation—a globally respected agency of scientific research and investigation—should no longer use peer review to fund grants. Instead it should essentially get political permission for which research to fund.

(Incidentally, if you’re not reading Plait’s Bad Astronomy blog, you should be.)

This story, sadly, is extremely credible. Many Congressional Republicans, very definitely including Smith, are pushing an anti-science agenda and would love to take science funding decisions out of the hands of scientists. But when I went looking for the actual language in which Smith advocates removing peer review, I couldn’t find it.

The journal Science’s piece on this would seem like a good place to go, especially as it links to a draft of Smith’s proposed legislation. The legislation would require the NSF director to certify that all grants are

1) “… in the interests of the United States to advance the national health, prosperity, or welfare, and to secure the national defense by promoting the progress of science;

2) “… the finest quality, is groundbreaking, and answers questions or solves problems that are of utmost importance to society at large; and

3) “… not duplicative of other research projects being funded by the Foundation or other Federal science agencies.”

I can’t see anything in here to justify the claim that the legislation goes after peer review. If the legislation passed, the NSF director would presumably have to change the criteria given to the peer reviewers, not eliminate peer review or even alter it all that much.

That’s not to say that the legislation is a good idea — it definitely isn’t — but the response to it seems quite overblown. The first two criteria actually seem mostly harmless: I don’t think that the NSF director would have any trouble certifying that funded projects “advance the national … welfare … by promoting the progress of science” or that the various superlatives in item 2 apply. NSF could rephrase the current “intellectual merit” criterion to include both of those without changing much of anything about the review process. The “not duplicative” language in item 3 is arguably the worst part — some duplication is good — but I suspect that very few purely (or even mostly) duplicative grants get funded anyway, so as long as there’s a bit of leeway in interpreting the language I don’t think that this would make much difference either.

I’ve looked at a bunch of other pieces on the subject, and they generally repeat the “attacking peer review” claim without citing specific evidence. Hence the question in the title, to which I would sincerely like to know the answer.

I feel a bit bad about even raising the question, as it feels like I’m giving aid and comfort to the enemy. And I use the term “enemy” advisedly: there is a strong anti-science contingent in the US Congress, housed almost entirely in one political party, who do want to do very damaging things. But I’d still like to know.

I’ll give xkcd the last word:

[xkcd comic: “Beliefs”]

Conspiracy theorists may be crazy, but they’re not irrational

Scientific American has a piece about a study from a year or two ago about the belief systems of conspiracy theorists. It has some good information in it, but one aspect of it is misleading, in the same way as a piece I wrote about last year on the same subject.

For example, while it has been known for some time that people who believe in one conspiracy theory are also likely to believe in other conspiracy theories, we would expect contradictory conspiracy theories to be negatively correlated. Yet, this is not what psychologists Michael Wood, Karen Douglas and Robbie Sutton found in a recent study. Instead, the research team, based at the University of Kent in England, found that many participants believed in contradictory conspiracy theories. For example, the conspiracy-belief that Osama Bin Laden is still alive was positively correlated with the conspiracy-belief that he was already dead before the military raid took place. This makes little sense, logically: Bin Laden cannot be both dead and alive at the same time.

That’s a misleading description of what the study actually found. The actual finding was that people who believed that Bin Laden might still be alive were more likely to believe that he might have been dead before the raid.

Those beliefs are, in my opinion, stupid and wrong, but they’re perfectly consistent with each other. Even if statements A and B contradict each other, there’s no contradiction between the statements “The probability of A is significantly different from zero” and “The probability of B is significantly different from zero.” (Unless “significantly different from zero” means “greater than 50%,” but there’s no indication in the study that that’s how participants thought of it.)
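
To put made-up but illustrative numbers on it (these are mine, not the study’s): someone who distrusts the official account might hold

$$P(\text{still alive}) = 0.3, \qquad P(\text{dead before the raid}) = 0.3, \qquad P(\text{official story is right}) = 0.4 .$$

The first two propositions are mutually exclusive, but both probabilities are well above zero, and all three sum to one, so there’s nothing incoherent about holding them simultaneously.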

With this in mind, I don’t think that the positive correlation is the least bit surprising. Both beliefs presumably spring from the same source: a belief that the standard mainstream view is wrong. If that’s your starting point, then it’s only natural that your assessment of the probability of a bunch of different non-standard stories would go up.

As the authors of the original study put it,

The monological nature of conspiracy belief appears to be driven not by conspiracy theories directly supporting one another but by broader beliefs supporting conspiracy theories in general.


It’s called the Higgs boson. Get used to it.

Apparently some physicists are arguing that the Higgs boson shouldn’t be called the Higgs boson:

“I have always thought that the name was not a proper one,” said professor Carl Hagen, in an interview with BBC News.

“To single out one individual marginalizes the contribution of others involved in the work. Although I did not start this campaign to change the name, I welcome it.”

According to the BBC, key contributions to Higgs theory have been made by Francois Englert, Peter Higgs, Gerald Guralnik, Tom Kibble, Robert Brout and Carl Hagen, five of whom spoke at a press conference last summer to announce the discovery of what was thought to be the Higgs boson.

Only professor Higgs received a huge round of applause from the audience.


It’s true, I gather, that a bunch of people came up with the theoretical ideas leading to the prediction of the Higgs boson, so I suppose it is unfair that Higgs gets the particle named after him, but there’s really not much to be done about it. It’s been called the Higgs for an awfully long time, and I don’t see any way it’s going to change.

This reminds me of Stigler’s Law of Eponymy:

In its simplest and strongest form it says: “No scientific discovery is named after its original discoverer.” Stigler named the sociologist Robert K. Merton as the discoverer of “Stigler’s law”, consciously making “Stigler’s law” exemplify itself.

The Higgs boson actually isn’t a great example of Stigler’s Law, because nobody disputes that Higgs is one of the people behind the particle. A better example is Gresham’s Law in economics, which turns out to have been stated by none other than Copernicus 40 years before Gresham got to it.

There are a bunch of other examples. To cite just a couple,

Just look at Leonhard Euler and Carl Friedrich Gauss, indisputably two of the most important mathematicians of the 18th and early 19th centuries. And yet Euler’s number (better known as the constant e) was actually discovered by Jacob Bernoulli, Euler’s formula was more or less demonstrated by Roger Cotes three decades before Euler, Gauss’s Theorem was discovered by Joseph Louis Lagrange and first proved by Mikhail Vasilievich Ostrogradsky, and Gaussian distribution was introduced by Abraham de Moivre 61 years before Gauss popularized it. Euler and Gauss were unarguably great mathematicians, but going by everything named after them you’d think they were the only mathematicians from 1700 to 1850.

To tell the truth, I have an ulterior motive for posting this. On several past occasions I’ve tried to remember the name of Stigler’s Law and been unable to come up with it. Now I’ll always have a place I can go look it up.

Meeting at Ohio State

I just got back from a great workshop at Ohio State University, organized by my collaborator Paul Sutter, on Innovative Techniques in 21-centimeter Analysis. This was a very small (~25 people), tightly-focused meeting. I like these a lot more than the gigantic meetings I sometimes go to. The topic was an area in which I haven’t done any real work yet, but I hope to soon, so it was extremely helpful to get up to speed on the state of the art.

The meeting was about measuring the 21-centimeter radio waves from hydrogen at very high redshifts, in order to map out the distribution of matter at times much earlier than the present (but much later than the era of the cosmic microwave background radiation, which is the main thing I study). Cold hydrogen atoms like to emit and absorb radiation at the specific wavelength of 21 centimeters. This radiation, like all radiation, is shifted to longer wavelengths by the expansion of the universe. This redshift is greater at greater distances, so by observing this radiation at different wavelengths, you can map out the distribution of stuff in the universe in three dimensions.
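
Concretely, the relation at work (standard cosmology, not anything specific to this meeting) is

$$\lambda_\mathrm{obs} = (1+z) \times 21\ \mathrm{cm},$$

so a telescope observing at a wavelength of 2.1 meters is seeing hydrogen at redshift z = 9, while 1.5 meters corresponds to z ≈ 6. Each observing wavelength picks out hydrogen at a different distance, and hence at a different cosmic time.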

At least, that’s the idea. The measurements are incredibly hard, because the signal is incredibly faint: it’s 1000 or more times fainter than other, more local sources of radio waves, so separating the signal you want to see from all the other stuff is a big challenge.

One of the speakers was Jeff Zheng, University of Richmond class of 2011, who did some great work in my research group when he was an undergraduate. He’s now a second-year graduate student at MIT. Here’s what he talked about:

The Omniscope: developing scalable technology for precision cosmology

Jeff Zheng

I describe the design and current status of the Omniscope, a 21 cm interferometer architecture optimized for scalability to very large (10^4-10^6) numbers of antennas N. By exploiting a hierarchical antenna grid layout, the correlator cost scales as N log N rather than N^2, and massive baseline redundancy enables automatic calibration and identification of bad data and failed components.

I’m pretty sure he was the most inexperienced speaker there, but you’d never know it from his talk, which was excellent. It’s great to see one of our graduates out there doing such top-notch work.
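
To get a feel for why that N log N scaling matters at the antenna counts quoted in the abstract, here’s a toy comparison. The constant factors and the choice of base-2 logarithms are arbitrary on my part; only the scalings themselves come from the abstract.

    import math

    # Crude operation counts for the correlation step, ignoring all constant factors:
    # a brute-force correlator computes ~N^2 antenna-pair products, while a
    # hierarchical (FFT-style) correlator needs only ~N log N operations.
    for N in (1e4, 1e5, 1e6):
        brute_force = N**2
        hierarchical = N * math.log2(N)
        print(f"N = {N:.0e}: brute force is ~{brute_force / hierarchical:,.0f} times more expensive")

By this crude count, the savings are a factor of several hundred at ten thousand antennas and roughly fifty thousand at a million antennas, which is the difference between an affordable correlator and an absurd one.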

Matt Yglesias, Bayesian

Matt Yglesias uses the Reinhart-Rogoff economics fiasco as a springboard to talk about When to Take Empirical Evidence Seriously:

So when you look at a purported empirical finding you need to ask not only how strong is the evidence, but what’s a reasonable prior assessment before looking at the new empirical data. A correlation is never sufficient to establish a causal relationship, but it might be good evidence for the existence of one or else it might not. That depends, in part, on how strong your theory is.

Basically, he’s advocating the position, often attributed to Sir Arthur Eddington, that you should never believe an experiment until it’s been confirmed by a theory. That position is usually thought of as at least partly a joke, but, as I’ve mentioned before, it’s really just an expression of sound Bayesian reasoning. New evidence causes you to update your prior beliefs. Your new level of belief in any given proposition is determined by both your prior belief and the evidence. It’s not just OK to take your prior beliefs into account; it’s mandatory.
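
In symbols (my gloss, not anything Yglesias wrote): Bayes’ theorem says

$$P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)},$$

or, in odds form, posterior odds = prior odds × likelihood ratio. So if your prior probability that some surprising empirical claim is true is 1%, and the new data are ten times more likely if the claim is true than if it’s false, the posterior probability is only 10/(10 + 99) ≈ 9%. The evidence counts, but so does the prior.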

I’m glad to see that Yglesias understands this. He attributes his understanding in part to Nate Silver’s book, so I guess Silver is also sound on this subject. (I haven’t read Silver’s book, so I can’t confirm this, but it doesn’t surprise me.)


I don’t understand the economics of scientific publishing

Nature has a bunch of feature articles in the current issue on the future of scientific publishing. I was particularly interested in the one called Open access: The true cost of science publishing (possibly paywalled). The idea of “open access” — meaning that people should be able to access scientific publications for free, especially because in many cases their tax dollars funded the research — has been getting a lot of traction lately (and rightly so in my opinion).

If access is free, meaning that libraries no longer have to pay (often exorbitant) subscription costs, then the question naturally arises of who does pay the costs of publishing and disseminating research. One answer is that authors should pay publication costs, presumably out of the same sources of funding that supported the original research. To decide how well that’ll work, you need some sort of notion of what the costs are likely to be. One answer comes from a chart in the Nature article breaking down the estimated per-article costs under different publishing models.

The right column is the one that interests me: suppose that we have open access and publish only online.

The gray boxes at the top represent the cost in referees’ time. Since referees aren’t generally paid, nobody explicitly or directly pays that cost. Not counting that, they estimate a cost of about $2300 per published article, with the biggest chunk (about $1400) coming from “Article processing,” which the article defines as follows:

Administering peer review (assuming average rejection rate of 50%); editing; proofreading; typesetting; graphics; quality assurance.

Let me take these one at a time.

  • Administering peer review: This is certainly a legitimate expense: although referees are not paid, journals do have paid employees to deal with this. I wouldn’t have guessed that this cost was a significant fraction of $1400, but I have no actual data.
  • Editing: The journals I deal with do extremely minimal editing.
  • Proofreading: I have to admit, they generally do a good job with this.
  • Typesetting: The journal typically converts the LaTeX file I give them into some other format, but (a) it’s not clear to me that they have to do that — the final product isn’t noticeably better than the original LaTeX-formatted document (and is often considerably worse), and (b) surely this is largely automated.
  • Graphics: I provide the graphics. The journal doesn’t do anything to them. (In cases where a journal does help significantly with production of graphics, I certainly agree that that’s a service that should be paid for.)
  • Quality assurance: I have to punt on this. I don’t know what’s involved.

I have trouble seeing how the sum total of these services is worth $1400 per article. I tried to look in some of the sources cited in the article (specifically, these two) for a more detailed cost breakdown, but I didn’t find anything that helped much.
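
For what it’s worth, here’s the kind of back-of-envelope arithmetic I’d like to see spelled out. The staff cost and throughput numbers below are pure guesses on my part, not figures from the Nature article (only the 50% rejection rate is theirs); the point is just to show how the arithmetic might go.

    # All of these inputs are guesses for illustration, not data from the Nature article.
    staff_cost_per_year = 100_000   # guess: fully loaded annual cost of one editorial staffer, in dollars
    manuscripts_per_year = 500      # guess: submissions one staffer can shepherd through review per year
    rejection_rate = 0.5            # the article's assumed rejection rate

    published_per_year = manuscripts_per_year * (1 - rejection_rate)
    cost_per_published_article = staff_cost_per_year / published_per_year
    print(f"peer-review administration: about ${cost_per_published_article:,.0f} per published article")

On guesses like these, administering peer review comes to a few hundred dollars per published paper, which still leaves most of the $1400 unaccounted for. Either my guesses are badly off, or the other items on the list cost far more than I would have expected.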

Peter Coles, who’s been exercised about this for longer than I have, says

Having looked carefully into the costs of on-line digital publishing I have come to the conclusion that a properly-run, not-for-profit journal, created for and run by researchers purely for the open dissemination of the fruits of their research can be made sustainable with an article processing charge of less than £50 per paper, probably a lot less.

I have no idea if he’s right about this, but I do find the thousands-of-dollars-per-paper estimates to be implausible. I mean what I say in the title of this post, though: I just don’t understand the economics here.

Peter proposes to create a new low-cost, not-for-profit, open journal of astrophysics. I hope he does it, and I hope it succeeds (and as I’ve told him I’ll be glad to help out with refereeing, etc.).