Archive for the ‘Uncategorized’ Category

Multitalented students

Thursday, December 11th, 2014

At a liberal-arts college like the University of Richmond, students are encouraged (and even to some extent required) to pursue interests beyond their primary field of study. This does come with costs — our students do less advanced coursework in their discipline than a student who graduates from, say, a European university — but on balance I really like it.

In the past week, I’ve seen a couple examples of students in my department doing exciting, non-physics-related things.

On Sunday, I went to the Christmas service of lessons and carols at the University Chapel. Two student choirs, the Women's Chorale and the mixed-gender Schola Cantorum, performed a beautiful and highly varied set of Christmas vocal music. About half a dozen students I've had in advanced physics classes (Isaac Rohrer, Joe Kelly, Grace Dawson, Kelsey Janik, Ed Chandler, and I'm pretty sure some more I'm forgetting) were among the performers.

Just this morning, there was a piece on our local public radio station about an interdisciplinary art / archaeology / ecology project in which students consider and modify the environment of a university parking lot in a variety of ways. One of the students interviewed in the piece, David Ricculli, is a physics major who worked in my lab. Another, Kelsey Janik, is not a physics major but has taken several of our advanced courses, as well as my first-year seminar, and is also one of the singers mentioned above.

In past years, I’ve seen our physics students display art work in on-campus exhibitions, perform in a huge variety of theatrical and musical events, and present talks on topics like monetary policy in ancient Rome. I really enjoy seeing the range of things they can do.

Training the next generation of nitpickers

Monday, October 20th, 2014

One of my students just submitted his first bug report to Wolfram. I’m so proud.

This is arguably not a bug, but it's certainly unexpected, nonstandard, and undesirable behavior. Wolfram Alpha is calculating the norm of a vector as the square root of the sum of the squares of the vector's components (i.e., the usual Pythagorean relation). But when the vector has complex components, that's not the right thing to do: you have to take the square root of the sum of the squared absolute values. Otherwise, you get absurd results like these, and your "norm" isn't even a norm.
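Here's a quick numpy illustration of the difference (my own sketch, nothing to do with Wolfram's internals):

```python
import numpy as np

v = np.array([1.0, 1.0j])  # a nonzero vector with a complex component

# Naive "Pythagorean" norm: square root of the sum of the squared components.
naive = np.sqrt(np.sum(v**2))            # sqrt(1 + (1j)**2) = sqrt(0) = 0

# Correct Euclidean norm: square root of the sum of the squared absolute values.
correct = np.sqrt(np.sum(np.abs(v)**2))  # sqrt(1 + 1) = sqrt(2)

print(naive)              # 0j -- a nonzero vector with "norm" zero, so it's not a norm at all
print(correct)            # 1.4142...
print(np.linalg.norm(v))  # 1.4142... (numpy uses the absolute squares)
```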

 

Lamar Smith actually is going after peer review

Tuesday, October 14th, 2014

Last year, scientists and science writers got worked up over a bill proposed by Representative Lamar Smith (Republican of Texas) that, it was claimed, constituted an attack on peer review of grants at the National Science Foundation. I thought that characterization was silly: the proposed law, while certainly not a good idea, would have had little or no effect on the peer review process. I still think that diagnosis was correct.

To repeat something else I said at the time: even if that bill was mostly harmless, that doesn't refute the claim that Smith is an enemy of science (he certainly is), and it doesn't rule out the possibility that he wants to go after peer review in other ways.

Since I was (sort of, I guess) defending Smith before, I feel like I should point out that he has been meddling in the NSF review process lately in ways that bother me considerably more than that proposed legislation. Science and io9 have pieces that are worth reading on the subject.

Smith has made a list of grants that he doesn't like and has had staffers examine the process by which these grants were reviewed. There doesn't seem to be any question that Smith chose to go after these grants because he didn't like their titles and brief descriptions. In other words, as Representative Eddie Bernice Johnson (Democrat of Texas) put it in a letter to Smith,

 The plain truth is that there are no credible allegations of waste, fraud, or abuse associated with these 20 awards. The only issue with them appears to be that you, personally, think that the grants sound wasteful based on your understanding of their titles and purpose. Seeking to substitute your judgment for the determinations of NSF’s merit review process is the antithesis of the successful principles our nation has relied on to make our research investment decisions. The path you are going down risks becoming a textbook example of political judgment trumping expert judgment.

Smith argues that Congress has the duty to oversee how NSF is spending its money, which is undoubtedly true. But it makes no sense to do that by picking individual awards based on their titles and having people with no expertise try to evaluate their merits. And in the process, actual harm can be done, particularly if the anonymity of the peer review process is compromised, as Johnson claims it has been in her letter. (I have not examined Johnson's allegation in detail.)

In case it’s not obvious, let me make clear that anonymity does matter. As an untenured assistant professor, I participated on an NSF review panel that gave a negative recommendation to a proposal from one of the biggest names in my field (someone who could surely torpedo my career). Among the proposals recommended for funding by that panel were some stronger proposals by young, relatively unknown researchers. I hope that I would have made the same recommendation if I had not been anonymous, but I’m not at all sure that I would have.

Smith's actual agenda seems to be that certain categories of proposals (largely in the social sciences) should be eliminated from NSF funding. If he wants to propose that straightforwardly and try to pass a law to that effect, he has the right to do so. But Johnson is exactly right that interfering with the peer review process is not the way to pursue that goal. In my experience, NSF peer review works remarkably well. Having individual members of Congress examine individual proposals is certainly not going to improve that system.

In one way, Smith’s actions fit into a long tradition of politicians railing against wasteful-sounding research grants. William Proxmire had his “Golden Fleece” awards way back in the 1970s. Then there was this tweet from John McCain:

That was actually about an earmark, not a peer-reviewed grant, so it raises quite different issues about the funding process, but as an example of a thoughtless critique of science, it fits right in. (Astronomy is a significant industry in Hawaii, and astronomy jobs are in fact jobs.) What Smith's doing is different from these, because he's using the investigative machinery of Congress rather than just bloviating.

 

Atheists who believe in E.T.

Thursday, October 2nd, 2014

According to a press release from Vanderbilt, atheists are more likely than members of various religions to believe in the existence of extraterrestrial life:

Belief in extraterrestrials varies by religion

  • 55 percent of Atheists
  • 44 percent of Muslims
  • 37 percent of Jews
  • 36 percent of Hindus
  • 32 percent of Christians

I heard about this via a blog hosted at the Institute of Physics. The writer expresses surprise at the finding:

Apparently, the people most likely to believe in extraterrestrial life are…atheists. More than half (55%) of the atheists in the poll professed a belief in extraterrestrials, compared with 44% of Muslims, 37% of Jews, 36% of Hindus and just 32% of Christians.
Without information about how many people were polled, or how they were selected, it’s hard to know how seriously to take these results. The press release also didn’t say how the question was phrased, which is likewise pretty important. After all, believing that we are unlikely to be alone in a vast universe is very different from believing that little green men gave you a ride in their spaceship last Tuesday. But even so, it seems odd that atheists – a group defined by their lack of belief in a being (or beings) for which there is no good scientific evidence – are so willing to believe in the existence of extraterrestrials. Because, of course, there’s no good evidence for them, either.

I agree that the lack of information about polling methodology is annoying. (The press release refers to a book that’s not out yet, and I can’t find any other publications by this author that contain the results.) But the last part of this quote is just silly. There certainly is evidence for the existence of extraterrestrial life, and it’s not at all unreasonable for a rationalist (assuming, for the moment, the author’s implicit equation of atheism with rationalism) to believe in it.

In particular, we know that there are a huge number of planets like Earth out there. There’s considerable evidence that that number is unbelievably large (i.e., 10 to some large power), and it might even be infinite. Furthermore, we know that in the one instance of an Earthlike planet that we’ve studied in detail, life arose almost as soon as it could have. Those facts constitute strong evidence in favor of the idea that extraterrestrial life exists.

Of course that’s not a proof (in the sense of pure mathematics or logic) that life exists, but presumably “belief in” something requires only (probabilistic) evidence, not literal mathematical proof. (If mathematical certainty were required for belief, then the list of things a rational person should believe in would be quite short.)
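To make that concrete, here's a toy calculation. Every number in it is invented purely for illustration; the point is just that when the number of Earthlike planets is astronomically large, "life arose nowhere else" requires an absurdly tiny probability of life arising on any given planet.

```python
import math

# Toy numbers, purely for illustration.
p = 1e-9   # hypothetical probability that life arises on any given Earthlike planet
N = 1e20   # hypothetical number of Earthlike planets

# Probability that life arose on none of them, assuming independence: (1 - p)**N.
# Computed via logs to avoid numerical trouble.
p_no_life_elsewhere = math.exp(N * math.log1p(-p))

print(p_no_life_elsewhere)  # underflows to 0.0 -- life elsewhere is overwhelmingly likely
```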

I don’t think it’s the least bit surprising that atheists are more likely than theists to believe in extraterrestrial life. That’s exactly what I would have predicted. After all, some major religious traditions are based on the idea that God created the Universe specifically for us humans. A natural consequence of that idea is that we humans are the only living beings out there. On the other hand, someone who doesn’t believe in such a tradition is far more likely to believe that life is a random occurrence that happens with some probability whenever conditions are right for it. A natural consequence of this belief is that life exists elsewhere.


To dust we return

Monday, September 22nd, 2014

In case you haven't heard, the people behind the Planck satellite have released their analysis of the region of the sky observed by BICEP earlier this year. They find higher levels of dust than those assumed in BICEP's foreground models. In fact, the amount of dust is large enough to completely explain BICEP's detection.

This doesn’t rule out the possibility that there is some cosmological signal in the BICEP data, but it does mean there’s no strong evidence for such a signal.

I should disclose that I haven’t read the Planck paper yet; I’ve just skimmed the key sections. But at a quick glance the analysis they’ve done certainly looks sensible, and for a variety of reasons I’d be surprised if they got this wrong. Of course, I already thought there was significant reason to doubt the original interpretation of the BICEP results.

I don’t have much more to say, so here are some links: Peter Coles, Sean Carroll, BBC.

Actually, I will make one quick meta observation. Some people are once again castigating the BICEP team for going public with this result prematurely. I think that that criticism is largely misguided. They may well be subject to fair criticism for getting the analysis wrong, of course, but that’s different from saying that they shouldn’t have made it public. I’m fine with people seeing the process by which science gets done, which includes everyone scrutinizing everyone else’s work.

 

GPA puzzles

Friday, September 5th, 2014

A colleague pointed me to an article by Valen Johnson called "An alternative to traditional GPA for evaluating student performance," because the article takes a Bayesian approach, and he knew I liked that sort of thing.

Johnson addresses the problem that a student’s grade point average (GPA), the standard measure of academic quality in US educational institutions, doesn’t necessarily give fair or useful results. Some instructors, and even some entire disciplines, on average grade higher than others, so some students are unfairly penalized / rewarded in their GPAs based on what they choose to study.

To illustrate the problem, Johnson uses an example taken from an earlier paper by Larkey and Caulkins. I'd never seen this before, and I thought it was cute, so I'm passing it on.

Imagine that four students take nine courses, receiving the following grades:

In this scenario, every individual course indicates that the ranking of the students is I, II, III, IV (from best to worst). That is, in every course in which students I and II overlap, I beats II, and similarly for all other pairs. But the students’ GPAs put them in precisely the opposite order.

This is a made-up example, of course, but it illustrates the idea that in the presence of systematic differences in grading standards, you can get anomalous results.

This example tickles my love of math puzzles. If you’d asked me whether it was possible to construct a scenario like this, I think I would have said no.

There are obvious follow-up questions, for those who like this sort of thing. Could you get similar results with fewer courses? If you had a different number of students, how many courses would you need to get this outcome?

I know the answer for the case of two students. If you allow for courses with only one student in them, then it’s easy to get this sort of inversion: have the students get a C+ and a C respectively in one course, and then give student II an A in some other course. If you don’t allow one-student courses, then it’s impossible. But as soon as you go up to three students, I don’t think the answer is obvious at all.
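For anyone who wants to play with this, here's a short script (my own sketch, not taken from Johnson or from Larkey and Caulkins) that checks both properties: that every course ranks overlapping students in the order I, II, III, IV, and that the GPAs come out in the reverse order. The two-student scenario is the one described above; the four-student, nine-course scenario is just one grade assignment I cooked up that has the same perverse property, not the table from the original paper.

```python
from itertools import combinations

# Grade points on the usual 4.0 scale.
POINTS = {'A': 4.0, 'A-': 3.7, 'B+': 3.3, 'B': 3.0, 'B-': 2.7, 'C+': 2.3, 'C': 2.0}

def gpas(courses):
    """GPA of each student; each course is a list of (student, grade) pairs."""
    grades = {}
    for course in courses:
        for student, grade in course:
            grades.setdefault(student, []).append(POINTS[grade])
    return {s: round(sum(g) / len(g), 3) for s, g in sorted(grades.items())}

def ranking_holds(courses):
    """True if, in every course, the lower-numbered student gets the strictly higher grade."""
    for course in courses:
        for (s1, g1), (s2, g2) in combinations(course, 2):
            better, worse = (g1, g2) if s1 < s2 else (g2, g1)
            if POINTS[better] <= POINTS[worse]:
                return False
    return True

# The two-student trick described above: one shared course plus a one-student course.
two_students = [
    [(1, 'C+'), (2, 'C')],   # student I beats student II in their only shared course
    [(2, 'A')],              # the one-student course
]

# A four-student, nine-course scenario with the same property (my own construction).
four_students = [
    [(1, 'C+'), (2, 'C')], [(1, 'C+'), (2, 'C')], [(1, 'C+'), (3, 'C')],   # hard courses
    [(2, 'B+'), (3, 'B')], [(2, 'B+'), (4, 'B')], [(3, 'B+'), (4, 'B')],   # medium courses
    [(2, 'A'), (4, 'A-')], [(3, 'A'), (4, 'A-')], [(1, 'A'), (4, 'A-')],   # easy courses
]

for scenario in (two_students, four_students):
    print(ranking_holds(scenario), gpas(scenario))
# True {1: 2.3, 2: 3.0}
# True {1: 2.725, 2: 2.92, 3: 3.075, 4: 3.42}
```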

As I said, I was mostly interested in the puzzle itself, but in case you're curious, here are a few words about the problem Johnson is addressing. I don't have much to say about it, because I haven't studied the paper in enough detail.

Some people have proposed that a student’s transcript should include statistical information about the grade distribution in each of the student’s courses, so that anyone reading the transcript will have some idea of what the grade is worth. For what it’s worth, that strikes me as a sensible thing to do, although getting the details right may be tricky.

That only solves the problem if the person evaluating the student (prospective employer, graduate program, or the like) is going to take the time to look at the transcript in detail. Often, people just look at a summary statistic like GPA. Johnson proposes a way of calculating a quantity that could be considered an average measure of student achievement, taking into account the variation in instructors' grading habits. Other people have done this before, of course. Johnson's approach is different in that it's justified by Bayesian probability calculations from a well-specified underlying model, as opposed to more-or-less ad hoc calculations.

I'm philosophically sympathetic to this approach, although some of the details of Johnson's calculations seem a bit odd to me. I'd have to study it much more carefully than I intend to before saying for sure what I think of it.

 

Vaccines are still good for you

Friday, August 29th, 2014

People seem to have been talking about some new reports that claim (yet again) a connection between vaccines and autism. The latest versions go further, alleging a cover-up by the CDC. The most important thing to know about this is that the overwhelming scientific consensus remains that vaccines are not linked to autism. They do, on the other hand, prevent vast amounts of suffering due to preventable diseases. The anti-vaccine folks do enormous harm.

(Although I have a few other things to say, the main point of this piece is to link to an excellent post by Allen Downey. The link is below, but it's mixed in with a bunch of other stuff, so I thought I'd highlight it up here.)

The usual pro-science people (e.g., Phil Plait) have jumped on this most recent story, stating correctly that the new report is bogus. They tend to link to two articles explaining why, but I'd rather steer you toward a piece by my old friend Allen Downey. Unlike the other articles, Allen's post explains one specific way in which the new study is wrong.

The error Allen describes is a common one. People often claim that a result is “statistically significant” if it has a “p-value” below 5%. This means that there is only a 5% chance of a false positive — that is, if there is no real effect, you’d be fooled into thinking there was an effect 5% of the time. Now suppose that you do 20 tests. The odds are very high in that case that at least one of them will be “significant” at the 5% level. People often draw attention to these positive results while sweeping under the rug the other tests that didn’t show anything. As far as I can tell, Allen’s got the goods on these guys, demonstrating convincingly that that’s what they did.
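To put a number on "very high": if the 20 tests were independent (a simplification, but good enough for illustration), the chance that at least one of them comes up "significant" by luck alone is already about 64 percent.

```python
# Chance that at least one of 20 independent tests is "significant" at the 5% level
# when there is no real effect anywhere.
alpha = 0.05
n_tests = 20
p_at_least_one_false_positive = 1 - (1 - alpha) ** n_tests
print(round(p_at_least_one_false_positive, 2))  # 0.64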

The other pieces I’ve read debunking the recent study have tended to focus on the people involved, pointing out (correctly, as far as I know) that they’ve made bogus arguments in the past, that they have no training in statistics or epidemiology, etc. Some people say that you shouldn’t pay any attention to considerations like that: all that matters is the content of the argument, and ad hominem considerations are irrelevant. That’s actually not true. Life is short. If you hear an argument from someone who’s always been wrong before, you might quite rationally decide that it’s not worth your time to figure out why it’s wrong. Combine that with a strong prior belief (tons of other evidence have shown no vaccine-autism link), and perfectly sound Bayesian reasoning (or as I like to call it, “reasoning”) tells you to discount the new claims. So before I saw Allen’s piece, I was pretty convinced that the new results were wrong.
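Here's the sort of back-of-the-envelope update I have in mind. Every number below is made up purely for illustration; the point is only that a strong prior plus weak, unreliable evidence leaves the posterior pretty much where it started.

```python
# Toy Bayesian update with made-up numbers.
prior = 0.01               # prior probability that a vaccine-autism link exists (illustrative)
p_report_if_link = 0.9     # chance a study like this would report a link if one existed
p_report_if_no_link = 0.5  # chance it would report a "link" anyway (sloppy methods, many tests)

posterior = (p_report_if_link * prior) / (
    p_report_if_link * prior + p_report_if_no_link * (1 - prior))
print(round(posterior, 3))  # 0.018 -- barely moved
```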

But despite all that, it's clearly much better if someone is willing to do the public service of figuring out why it's wrong and explaining it clearly. This is pretty much the reason that I bothered to figure out in detail that evolution doesn't violate the laws of thermodynamics: there was no doubt about the conclusion, but because the bogus argument continues to get raised, it's good to be able to point people towards an explanation of exactly why it's wrong.

So thanks, Allen!


Joggins Fossil Institute does the right thing

Friday, August 22nd, 2014

As I wrote a few days ago, I sent a note to the people who run Joggins Fossil Cliffs in Nova Scotia complaining that they distribute pseudoscientific crystal-healing nonsense along with some items for sale in their gift shop. I got a very prompt reply saying

Thank you for the feedback on your experience at the Joggins Fossil Cliffs.

Excellent to hear that you and your wife enjoyed your time here.

We have removed the documentation that you referenced.

Good for them!

As I mentioned before, this place is worth a visit if you’re in the area. There are cool fossils to see, and with this one exception (now apparently fixed), they did a very good job of explaining things.


Someone doesn’t understand probabilities

Thursday, August 21st, 2014

I know: as headlines go, this one is not exactly Man Bites Dog. Let me be a bit more specific. Either the New York Times or trial lawyers don’t understand probability. (This, incidentally, is a good example of the inclusive “or”.)

The Times has an interactive feature illustrating the process by which lawyers decide whether to allow someone to be seated on a jury. For those who don't know, in most if not all US courts, lawyers are allowed to have potential jurors stricken from jury pools, either for cause, if there's evidence that a juror is biased, or using a limited number of "peremptory challenges" to remove people that the lawyer merely suspects will be unfavorable to his or her side. The Times piece asks you a series of questions and indicates how your answers affect the lawyers' opinion about you in a hypothetical lawsuit by an investor suing her money manager for mismanaging her investments.

The first two questions are about your job and age. As a white-collar worker, I’m told that I’d be more likely to side with the defendant, but the fact that I’m over 30 makes me more likely to favor the plaintiff. A slider at the top of the screen indicates the net effect:

So far so good. Question 3 then asks about my income. Here are the two possible outcomes:

So if I'm high-income, there's no effect, but if I'm low-income, I'm more likely to side with the plaintiff. This is logically impossible. If one answer shifts the probability in one direction, the other answer must shift it in the other direction (by some nonzero amount).

Before the lawyers found out the answer, they knew that I was either low-income or high-income. (A waggish mathematician might observe that the possibility that my income is exactly $50,000 is not included in the two possibilities. This is why no one likes a waggish mathematician.) The lawyers' assessment of me before asking the question must be a weighted average of the two subsequent probabilities, with weights given by their prior beliefs about what my income would turn out to be. For instance, if they thought initially that there was a 70% chance that I'd be in the high-income category, then the initial probability should have been 0.7 times the high-income probability plus 0.3 times the low-income probability.

That means that if one answer to the income question shifts the probability toward the plaintiffs, then the other answer must shift the probability in the other direction.
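To spell out the arithmetic with that 70-30 split (the two conditional probabilities below are made up for illustration):

```python
# Made-up numbers: the lawyers' assessed probability that I favor the plaintiff.
p_if_high_income = 0.40  # hypothetical assessment if I turn out to be high-income
p_if_low_income = 0.55   # hypothetical assessment if I turn out to be low-income
p_high_income = 0.70     # prior belief that I'm in the high-income category

# Law of total probability: the pre-question assessment is a weighted average.
p_before_asking = p_high_income * p_if_high_income + (1 - p_high_income) * p_if_low_income
print(round(p_before_asking, 3))  # 0.445 -- strictly between the two conditional values
```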

So either the lawyers the reporter talked to are irrational or the reporter has misunderstood them. For what it’s worth, my money is on the first option. Lots of people don’t understand probabilities, but it seems likely to me that the Times reporters would have asked these questions straightforwardly and accurately reported the answers they heard from the lawyers they talked to.

If that’s true, it seems like it should present a money-making opportunity for people with expertise in probability. Lawyers who hired such people as consultants would presumably do a better job at jury selection and win more cases.

Curmudgeonliness

Wednesday, August 20th, 2014

Update: Got a very nice and very prompt note back from the people who run the place. Apparently they’ve removed this material.

My wife and I just got back from a very nice vacation in Nova Scotia, which is very beautiful (and much cooler than Richmond in August). Among other things (such as rafting on the tidal bore in the Bay of Fundy, which I highly recommend), we visited the Joggins Fossil Cliffs, a UNESCO World Heritage Site where, as the name suggests, you can see tons of fossils. The site includes both a museum and a stretch of beach you can walk along and spot fossils in their natural habitat, so to speak. There are guides to show you things and help you figure out what you’re seeing. On the whole, it’s quite interesting and educational. If you’re nearby, it’s definitely worth a visit.

The site is run by a nonprofit educational organization. As usual, they get part of their revenue from a gift shop. Among the things you can buy in the gift shop are pretty polished stones.

So far so good. Now for the curmudgeonliness. The polished stones are accompanied by this pamphlet.

As I’m sure I don’t need to tell anyone who’s reading this, the last sentence of each description is complete nonsense. Stones and crystals do not have any effect on the human psyche.

I understand that the organization needs to raise money, but is it too much to ask that they refrain from actively promoting pseudoscience in doing so? The gift shop does not stock Creationist books that claim the Earth is 6000 years old, presumably because to do so would undermine their educational mission. This may be somewhat different in degree but not at all different in kind.

This sort of thing might seem harmless, but it’s not. People really believe in things like this. If they didn’t, there wouldn’t be Web sites like healingcrystals.com (I’d rather not link to it) that will sell you crystals to cure hundreds of different ailments. Look at this screen shot, for instance.

This is a link to 728 items you can buy that purport to help you if you have cancer but in fact do nothing. People with cancer (among other ailments) are being fleeced and given false hope by this sort of nonsense. For a science educator to give any sort of seal of approval to this is not OK.

As my colleague Matt Trawick pointed out, the last item on the list is particularly interesting, in a Catch-22 sort of way. Suppose that you buy some sodalite and it does in fact cause you “to become logical and rational.” Would you then go ask for your money back?

By the way, I’ve sent a note to the organization that runs the Fossil Cliffs outlining my concern. I’ll post something if I hear anything back. (See Update at the top.)