I don’t know why there is something rather than nothing, and neither does Stephen Hawking

Over on his excellent blog, In the Dark, Peter Coles quotes Stephen Hawking saying,

Because there is a law such as gravity, the universe can and will create itself from nothing.

He then asks

Huh? I can’t make sense of it at all. Is it just me that finds it entirely devoid of either logic or meaning?

He has a poll where you can vote on whether the statement is meaningful.

I voted, and then I wrote a comment explaining my vote. Having written it, I figured I might as well throw it up here, so that two or three more people might see it:

I find the intended meaning of the statement tolerably clear: given that there are certain laws of nature, including gravity (among other things such as quantum mechanics), a vacuum state (“nothing”) can and will evolve into a state containing a universe like ours.

That strikes me as meaningful and quite possibly even true. As a piece of science communication to the general public, though, it’s counterproductive. In context, it’s clear that Hawking means to claim this as an answer to hoary old questions of the “why is there something rather than nothing” variety, and it doesn’t do that. If you’re the sort of person who’s inclined to be bothered by questions of that sort, you’ll be just as bothered after understanding this claim as you were before. You’ll just want to know why there was a vacuum state lying around obeying these particular laws of physics.

Similarly, this argument certainly doesn’t prove the non-existence of God, as Hawking seems to be claiming.

Scientists harm our brand when we make overly broad claims about what science can “prove.” Hawking should know better.

Scientists who try to explain things to the general public are on the side of the (secular) angels, but it drives me crazy when they make overly grandiose claims, either about the science itself or about its philosophical interpretation. Every time a scientist does this, it erodes the credibility of the entire profession.

Math journal publishes computer-generated fake paper

Someone calling herself Professor Marcie Rathke of the University of Southern North Dakota at Hoople submitted an article to the journal Advances in Pure Mathematics. The title and abstract:


Independent, Negative, Canonically Turing Arrows of Equations and Problems in Applied Formal PDE

Let ρ=A. Is it possible to extend isomorphisms? We show that D′ is stochastically orthogonal and trivially affine. In [10], the main result was the construction of p-Cardano, compactly Erdős, Weyl functions. This could shed important light on a conjecture of Conway-d’Alembert.

As it turns out, this paper was produced by a piece of software called Mathgen, which randomly generates nonsensical mathematics papers.

The response from the journal:


Dear Author,

Thank you for your contribution to the Advances in Pure Mathematics (APM). We are pleased to inform you that your manuscript:

ID : 5300285
TITLE : Independent, negative, canonically Turing arrows of equations and problems in applied formal PDE
AUTHORS :Marcie Rathke

has been accepted. Congratulations!

Anyway, the manuscript has some flaws are required to be revised :

(1) For the abstract, I consider that the author can’t introduce the main idea and work of this topic specifically. We can’t catch the main thought from this abstract. So I suggest that the author can reorganize the descriptions and give the keywords of this paper.
(2) In this paper, we may find that there are so many mathematical expressions and notations. But the author doesn’t give any introduction for them. I consider that for these new expressions and notations, the author can indicate the factual meanings of them.
(3) In part 2, the author gives the main results. On theorem 2.4, I consider that the author should give the corresponding proof.
(4) Also, for proposition 3.3 and 3.4, the author has better to show the specific proving processes.
(5) The format of this paper is not very standard. Please follow the format requirements of this journal strictly.

Please revised your paper and send it to us as soon as possible.

In case anyone’s wondering, the fact that the paper is nonsense is utterly self-evident to anyone who knows anything about mathematics. The software does an excellent job mimicking the structure and look of a mathematics article, but the content is gibberish.

The obvious comparison here is to the notorious Sokal hoax of the 1990s. The physicist Alan Sokal got an article accepted to the cultural studies journal Social Text, despite the fact that the article was a pastiche of nonsense Sokal concocted to look like trendy postmodern academic writing.

Sokal and his fans think that his hoax showed that the entire academic enterprise of “science studies” was vacuous garbage. I never thought that the hoax was very strong evidence for this hypothesis (which is not, of course, to say that the hypothesis is false). It showed without a doubt that the editors of Social Text were utter failures at their job, but not necessarily any more than that.

And pretty much the same is true of the current hoax. It doesn’t show that the entire field of mathematics is a sham; it shows that Advances in Pure Mathematics is a crappy journal in which peer review is a sham.

Incidentally, there’s one interesting difference between the two hoaxes. At the time of the Sokal hoax, Social Text did not have peer review as we generally understand that term. The editors were responsible for deciding what got published, but they did not in general consult outside experts. Advances in Pure Mathematics is (nominally) peer-reviewed.



Those kids today

are lazy and shiftless. Or so says the Academic Program Committee in the astronomy department of an unnamed university. In a letter to their graduate students, the committee says, among other things,


We have received some questions about how many hours a graduate student is expected to work.  There is no easy answer, as what matters is your productivity, particularly in the form of good scientific papers.  However, if you informally canvass the faculty (those people for whose jobs you came here to train), most will tell you that they worked 80-100 hours/week in graduate school.  No one told us to work those hours, but we enjoyed what we were doing enough to want to do so.

Other people, such as Julianne Dalcanton and Peter Coles, have said everything about this that I could say and more.

For the record, I got a Ph.D. from one of the highest-ranked physics departments in the US, and I certainly didn’t work 80-100 hours a week. I’m confident that most of my fellow students (including some who are now full professors at some of the best universities in the world) didn’t either. I don’t know who wrote the above letter, but I’d take an even-money bet that they didn’t work those hours either. I’m not saying they’re deliberately lying — maybe this is their honest recollection — but I doubt it’s the truth.


An amazing new HIV test

According to today’s New York Times,

The OraQuick test is imperfect. It is nearly 100 percent accurate when it indicates that someone is not infected and, in fact, is not. But it is only about 93 percent accurate when it says that someone is not infected and the person actually does have the virus, though the body is not yet producing the antibodies that the test detects.

You’ve got to hand it to the makers of this test: it can’t have been easy to devise a test that remains 93% accurate even in situations where it gives the wrong result. On the other hand, it’s only “nearly 100% accurate” in situations where it gives the right result, so there’s room for improvement.
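For the record, here’s my guess at what the Times meant, stated in the standard language of sensitivity and specificity. The numbers below are illustrative assumptions, not figures from the article:

```python
# My guess at the intended claim: high specificity, ~93% sensitivity.
# All numbers here are assumptions for illustration, not from the article.
sensitivity = 0.93   # P(test positive | infected)
specificity = 0.999  # P(test negative | not infected)
prevalence = 0.005   # assumed infection rate among test-takers

n = 100_000  # hypothetical test-takers
infected = n * prevalence
healthy = n - infected

false_negatives = infected * (1 - sensitivity)  # infections the test misses
true_negatives = healthy * specificity

# Negative predictive value: how much to trust a "not infected" result.
npv = true_negatives / (true_negatives + false_negatives)
print(f"missed infections per {n:,} tests: {false_negatives:.0f}")
print(f"P(really uninfected | negative result) = {npv:.4f}")
```

The point being that “93% sensitivity” and “93% accurate when it gives the wrong result” are very different statements, and only the first one is coherent.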


This American Life is narrowcasting at me

The most recent episode of the National Public Radio institution This American Life began with two segments that could have been designed just for me. (The third segment is a story about jars of urine, which is a less precise match to my interests. If you choose to listen to the whole thing, and you don’t like stories about jars of urine, don’t say you weren’t warned.)

The introductory segment is about Galileo, Kepler, and anagrams. Just last week, I was discussing this exact topic with my class, but there were some aspects of the story I didn’t know before hearing this radio piece.

On two occasions, Galileo “announced” discoveries he’d made with his telescope by sending out anagrams of sentences describing his findings. At one point, he sent Kepler, among others, the following message:

Smaismrmilmepoetaleumibunenugttauiras

If you successfully unscrambled it, you’d get

Altissimum planetam tergeminum observavi. 

(Don’t forget that in Latin U and V are the same letter. As Swiftus could tell you, those Romans were N-V-T-S nuts!)

If your Latin’s up to the task, you can see that Galileo was saying

I have observed the highest planet to be triplets.

The highest planet known at the time was Saturn. What Galileo had actually seen was the rings of Saturn, but with his telescope it just looked like the planet had extra blobs on each side.

(I don’t think Mr. Davey ever taught us tergeminum in Latin class, but it’s essentially “triple-twins.” If you look closely, you’ll spot Gemini (twins) in there.)

Why, you may wonder, did Galileo announce his result in this bizarre way? Apparently this wasn’t unusual at the time. In a time before things like scientific journals, it was a way of establishing priority for a discovery without actually revealing the discovery explicitly. If anyone else announced that Saturn had two blobs next to it, Galileo could unscramble the anagram and show that he’d seen it first.
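The modern version of this commit-now-reveal-later trick is to publish a cryptographic hash of the claim instead of an anagram. A minimal sketch (the sentence is Galileo’s; the rest is just illustration):

```python
import hashlib

claim = "Altissimum planetam tergeminum observavi"

# Publish the digest now; it reveals nothing about the claim itself.
commitment = hashlib.sha256(claim.encode()).hexdigest()
print("published:", commitment)

# Later, reveal the sentence; anyone can verify it matches the digest.
assert hashlib.sha256(claim.encode()).hexdigest() == commitment
```

Unlike an anagram, a hash can’t be “solved” into a wrong-but-plausible sentence the way Kepler’s attempts were.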

Kepler couldn’t resist trying his hand at this anagram. He unscrambled it to

Salve, umbistineum geminatum Martia proles.  

He interpreted this to mean “Mars has two moons.” The exact translation’s a bit tricky, since umbistineum doesn’t seem to be a legitimate Latin word. This American Life gives the translation “Hail, double-knob children of Mars,” which is similar to other translations I’ve come across.

Of course, Mars does have two moons, but neither Kepler nor Galileo had any way of knowing that. (Nobody did until 1877.)  Kepler thought that Mars ought to have two moons on the basis of an utterly invalid numerological argument, which presumably predisposed him to unscramble the anagram in this way.

On another occasion, Galileo came out with this anagram:

Haec immatura a me iam frustra leguntur oy.

The last two letters were just ones he couldn’t fit into the anagram; the rest is Latin, meaning something like “These immature ones have already been read in vain by me.” Anyway, his intended solution was

Cynthiae figuras aemulatur mater amorum,

which translates to

The mother of loves imitates the figures of Cynthia.

That may sound a bit obscure, but anyone who could read Latin at the time would have known that Cynthia was another name for the Moon. The mother of loves is Venus, so what this is saying is that Venus has phases like the Moon.

Although not as sexy as some of Galileo’s other discoveries, the phases of Venus were an incredibly important finding: the old geocentric model of the solar system couldn’t account for them, but they make perfect sense in the new Copernican model.

Once again, Kepler tried his hand at the anagram and unscrambled it to

Macula rufa in Jove est gyratur mathem …

It actually doesn’t work out right: it trails off in the middle of a word, and if you check the anagram, you find there are a few letters left over. But if you cheerfully ignore that, it says

There is a red spot in Jupiter, which rotates mathem[atically, I guess].

As you probably know, there is a red spot in Jupiter, which rotates around as the planet rotates, so this is once again a tolerable description of something that is actually true but was unknown at the time. (Jupiter’s Great Red Spot was first seen in 1665.)
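Checking these letter counts by hand is tedious, but it’s a few lines of code. Here I compare the full strings from above, identifying u with v and i with j as Latin does (the normalization choices are mine):

```python
from collections import Counter

def letters(s):
    """Multiset of letters, lowercased, with Latin u/v and i/j identified."""
    trans = str.maketrans("vj", "ui")
    return Counter(c for c in s.lower().translate(trans) if c.isalpha())

cover = "Haec immatura a me iam frustra leguntur oy"
galileo = "Cynthiae figuras aemulatur mater amorum"
kepler = "Macula rufa in Jove est gyratur mathem"

# Galileo's intended solution uses exactly the letters of the cover text.
print(letters(cover) == letters(galileo))

# Kepler's attempt leaves a few letters of the cover text unused.
print(dict(letters(cover) - letters(kepler)))
```

Galileo’s solution checks out exactly; Kepler’s attempt leaves three letters over, which is why it trails off mid-word.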

I knew about the first anagram, including Kepler’s incorrect solution. I’d heard that there was a second anagram, but I don’t think I’d ever heard about Kepler’s solution to that one. Anyway, I love the fact that pretty much the same implausible sequence of events (Kepler incorrectly unscrambles an anagram and figures out something that later turns out to be true) happened twice.

I mentioned at the beginning that the radio show had two pieces that could have been aimed just at me. Maybe I’ll say something about the second one later. Or you can just listen to it yourself.


Scientific fraud on the rise?

A recent article in the Proceedings of the National Academy of Sciences presents results of a study on fraud in science. The abstract:

A detailed review of all 2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%). Incomplete, uninformative or misleading retraction announcements have led to a previous underestimation of the role of fraud in the ongoing retraction epidemic. The percentage of scientific articles retracted because of fraud has increased ∼10-fold since 1975. Retractions exhibit distinctive temporal and geographic patterns that may reveal underlying causes.

The New York Times picked up the story, and my sort-of-cousin Robert pointed out a somewhat alarmist blog post at the Discover magazine web site, with the eye-grabbing title “The real end of science.”

The blog post highlights Figure 1(a) from the paper, which shows a sharp increase in the number of papers being retracted due to fraud:


Unless you know something about how many papers were indexed in the PubMed database, of course, you can’t tell anything from this graph about the absolute scale of the problem: is 400 articles a lot or not? The sharp increase looks surprising, but even that’s hard to interpret, because the number of articles published has risen sharply over time. To me, the figure right below this one is more informative:

This is the percentage of all published articles indexed by PubMed that were retracted due to fraud or suspected fraud. In the worst years, the number is about 0.01% — that is, one article in 10000 is retracted due to fraud. That number does show a steady growth over time, by about a factor of 4 or 5 since the 1970s.

So how bad are these numbers? I think it’s worthwhile to split the question in two:

  1. Is the present-day level of fraud alarmingly large?
  2. Is the increase over time worrying?

I think the answer to the first question is a giant “It depends.” Specifically, it depends on what fraction of fraudulent papers get caught and retracted. If most frauds are caught, so that the actual level of fraud is close to 0.01%, then I’d say there’s no problem at all: we could live with a low level of corruption like that just fine. If only one case in 1000 is caught, so that 0.01% detected fraud means 10% actual fraud, then the system is rotten to its core. I’m sure the truth is somewhere in between those two, but I don’t know where in between.

I think that the author of that end-of-science blog post is more concerned about question 2 (the rate of increase of fraud over time). From the post:

Science is a highly social and political enterprise, and injustice does occur. Merit and effort are not always rewarded, and on occasion machination truly pays. But overall the culture and enterprise muddle along, and are better in terms of yielding a better sense of reality as it is than its competitors. And yet all great things can end, and free-riders can destroy a system. If your rivals and competitors and cheat and getting ahead, what’s to stop you but your own conscience? People will flinch from violating norms initially, even if those actions are in their own self-interest, but eventually they will break. And once they break the norms have shifted, and once a few break, the rest will follow.

Does the increase in fraud documented in this paper mean that we’re getting close to a breakdown of the ethos of science? I’m not convinced. First, the increase looks a lot more dramatic in the (unnormalized) first plot than in the (normalized) second one. The blog post reproduces the first but not the second, even though the second is the relevant one for answering this question.

The normalized plot does show a significant increase, but it’s hard to tell whether that increase is because fraud is going up or because we’re getting better at detecting it. From the PNAS article:

A comprehensive search of the PubMed database in May 2012 identified 2,047 retracted articles, with the earliest retracted article published in 1973 and retracted in 1977. Hence, retraction is a relatively recent development in the biomedical scientific literature, although retractable offenses are not necessarily new.

In the old days, people seem not to have retracted papers at all, for any reason. If the culture has shifted towards expecting retraction when retraction is warranted, then the numbers would go up. That’s not the whole story, because the ratio of fraud-retractions to error-retractions changed over that period, but it could easily be part of it.

It’s also plausible that we’re detecting fraud more efficiently than we used to. A lot of the information about fraud in this article comes from the US Government’s Office of Research Integrity, which was created in 1992. Look at the portion of that graph before 1992, and you won’t see strong evidence of an increase. Maybe detections of fraud are going up because we’re trying harder to look for it.

Scientific fraud certainly occurs. Given the incentive structure in science, and the relatively weak policing mechanisms, it wouldn’t be surprising to find a whole lot of it. In fact, though, it’s not clear to me that the evidence supports the conclusion of either widespread or rapidly-increasing fraud.