Want to come work here?

I’m pleased to report that the University of Richmond physics department has an opening for a tenure-track faculty member. The research specialty is wide open. Here’s the text of the job ad, which will appear in Physics Today soon.

If you’re looking for a faculty job at a place where both teaching and research are valued, UR is a great place to work. Please pass the word about this position on to anyone you think would be interested.


Faculty Position in Physics

The Department of Physics at the University of Richmond invites applications for a tenure-track faculty position as an assistant professor in physics to begin in August 2013 (for exceptional candidates, an appointment at a more senior level may be considered). Applications are encouraged from candidates in all sub-fields of physics, both theory and experiment, but applications from candidates whose scholarship complements existing research areas in the department (biophysics, cosmology, low- and medium-energy nuclear physics, and surface physics) may receive particular attention. The successful candidate is expected to have demonstrated a keen interest and ability in undergraduate teaching and to maintain a vigorous research program that engages undergraduates and leads to substantive research outcomes. Candidates must possess a doctoral degree in physics prior to appointment.

Candidates should apply online at the University of Richmond Online Employment website (https://www.urjobs.org) using the Faculty (Instructional/Research) link. Applicants are asked to submit a cover letter, a current curriculum vitae with a list of publications, a statement of their teaching interests and philosophy, evidence of teaching effectiveness (if available), a description of current and planned research programs, and the names of three references, who will receive an automated email asking them to submit their letters of reference to the same website. Review of applications will commence on November 1, 2012, and continue until the position is filled.

The University of Richmond is a highly selective private university with approximately 3,000 undergraduates, located on a beautiful campus six miles west of the heart of Richmond and in close proximity to the ocean, the mountains, and Washington, D.C. The University of Richmond is committed to developing a diverse workforce and student body and to being an inclusive campus community. We strongly encourage applications from candidates who will contribute to these goals. For more information, please see the department’s website at http://physics.richmond.edu or contact Prof. C.W. Beausang, Chair, Department of Physics (email: cbeausan@richmond.edu).

Impact factors

My last post reminded me of another post by Peter Coles that I meant to link to. This one’s about journal impact factors. For those who don’t know, the impact factor is a statistic meant to assess the quality or importance of a scholarly journal. It’s essentially the average number of citations garnered by each article published in that journal (the standard version counts citations in a given year to articles published in the preceding two years).

It’s not clear whether impact factors are even a good way of evaluating the quality of journals. The most convincing argument against them is that citation counts are dominated by a very small number of articles, so the mean is not a very robust measure of “typical” quality (the toy simulation below makes this concrete). But even if the impact factor were a good measure of journal quality, it’s clearly not a good measure of the quality of any given article. Who cares how many citations the other articles published along with mine got? What matters is how my article did. Or as Peter put it,

The idea is that if you publish a paper in a journal with a large [journal impact factor] then it’s in among a number of papers that are highly cited and therefore presumably high quality. Using a form of Proof by Association, your paper must therefore be excellent too, hanging around with tall people being a tried-and-tested way of becoming tall.
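
Peter’s tallness joke aside, the non-robustness of the mean is easy to see in a toy simulation. Here’s a minimal sketch in Python; the log-normal distribution and all its parameters are invented for illustration, but heavy-tailed citation counts are the norm for real journals.

```python
import numpy as np

rng = np.random.default_rng(42)

# A hypothetical journal: 200 articles, with citation counts drawn from a
# heavy-tailed (log-normal) distribution. All parameters are made up.
citations = rng.lognormal(mean=1.0, sigma=1.2, size=200).astype(int)

print("mean citations (the 'impact factor'):", citations.mean())
print("median citations (a typical article):", np.median(citations))

# What share of all citations goes to the most-cited 10% of articles?
top_decile = np.sort(citations)[-20:]
print("citation share of the top 10%:", top_decile.sum() / citations.sum())
```

In runs like this, the mean lands well above the median, and the top tenth of the articles collect something like half of all the citations. A summary statistic driven that hard by a handful of outliers tells you little about a typical paper.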

But people often do use impact factors in precisely the way Peter describes. I did it myself when I came up for tenure: I included information about the impact factors of the various journals I had published in, in order to convince my evaluators that my work was important. (I also included information about how often my own work had been cited, which is clearly more relevant.)

Peter’s post is based on another blog post by Stephen Curry, which ends with a rousing peroration:

  • If you include journal impact factors in the list of publications in your cv, you are statistically illiterate.
  • If you are judging grant or promotion applications and find yourself scanning the applicant’s publications, checking off the impact factors, you are statistically illiterate.
  • If you publish a journal that trumpets its impact factor in adverts or emails, you are statistically illiterate. (If you trumpet that impact factor to three decimal places, there is little hope for you.)
  • If you see someone else using impact factors and make no attempt at correction, you connive at statistical illiteracy.

I referred to impact factors in my tenure portfolio despite knowing that the information was of dubious relevance, because I thought that it would impress some of my evaluators (and even that they might think I was hiding something if I didn’t mention them). Under the circumstances, I plead innocent to statistical illiteracy but nolo contendere to a small degree of cynicism.

To play devil’s advocate, here is the best argument I can think of for using impact factors to judge individual articles: If an article was published quite recently, it’s too soon to count citations for that article. In that case, the journal impact factor provides a way of predicting the impact of that article.

The problem with this is that the impact factor is an incredibly noisy predictor, since there’s a huge variation in citation rates for articles even within a single journal (let alone across journals and disciplines). If you’re on a tenure and promotion committee, and you’re holding the future of someone’s career in your hands, it would be outrageously irresponsible to base your decision on such weak evidence. If you as an evaluator don’t have better ways of judging the quality of a piece of work, you’d damn well better find a way.
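
To put a number on “incredibly noisy,” here’s another toy sketch (same caveats: Python, invented parameters): two hypothetical journals whose impact factors differ by a factor of two, each with a heavy-tailed spread of citation counts of the sort real journals show.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical journals with log-normal citation distributions and the
# same within-journal spread, tuned so that A's impact factor is twice B's.
sigma = 1.2  # within-journal spread (log scale); invented for illustration
journal_a = rng.lognormal(np.log(8.0) - sigma**2 / 2, sigma, 100_000)
journal_b = rng.lognormal(np.log(4.0) - sigma**2 / 2, sigma, 100_000)

print("impact factor A:", journal_a.mean())  # ~8
print("impact factor B:", journal_b.mean())  # ~4

# Pick one article at random from each journal: how often does the one
# from the lower-impact-factor journal end up with more citations?
print("P(random B article out-cites random A article):",
      (journal_b > journal_a).mean())
```

In this setup, the article from the journal with half the impact factor wins about a third of the time. A predictor that misorders individual papers that often is awfully thin evidence on which to decide someone’s career.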

Kuhn

Peter Coles has a nice post on Thomas Kuhn’s place in the philosophy of science. Many people seem to regard Kuhn’s book The Structure of Scientific Revolutions as, well, revolutionary in its effect on how we think about the nature of scientific progress and its relation to objective truth.

I confess that I never really got the point of Structure of Scientific Revolutions, so I’m glad to see that Peter’s on the same page. He goes further than I’d ever thought of going, placing Kuhn on a continuum leading from Hume and Popper through to the clownish yet pernicious writings of Feyerabend. I’m not sure I’d go so far as to hang Feyerabend around Kuhn’s neck, but maybe Peter’s right.

Anyway, in addition to putting down Kuhn et al.’s vision of what science is, Peter advances his own view, which is 100% right, in my opinion. I tried to say much the same thing in my own way a few years ago, but Peter’s version is probably better.

I don’t have anything to say about the Mars landing

that’s anywhere near as good as this.

The news these days is filled with polarization, with hate, with fear, with ignorance. But while these feelings are a part of us, and always will be, they neither dominate nor define us. Not if we don’t let them. When we reach, when we explore, when we’re curious – that’s when we’re at our best. We can learn about the world around us, the Universe around us. It doesn’t divide us, or separate us, or create artificial and wholly made-up barriers between us. As we saw on Twitter, in New York’s Times Square, where hundreds of people watched the landing live, and all over the world: science and exploration bind us together. Science makes the world a better place, and it makes us better people.

This definitely increases the awesome.