The Chronicle of Higher Ed. on UR’s first-year seminar program

The Chronicle of Higher Education ran a piece on the University of Richmond’s first-year seminar program. (Don’t know if it’s paywalled.) I’ve taught FYS several times, and I currently serve on the committee overseeing the program, so I was naturally interested in what the piece had to say. I certainly urge all of my faculty colleagues to read it, especially those of us on the committee, which will be performing a thorough review of the program in the fall.

The piece focuses on three seminars, taught by different faculty members on very different subjects. The reporter sat in on each of the seminars multiple times and got quite specific about what went on in class. I’d be interested to know what others think, but overall I don’t think we come off very well in these descriptions. I have no way of knowing whether the article’s description of the seminars does them justice, and I shudder to think of how my own course might have come out looking under the same treatment, so I don’t mean this as a criticism of the three courses or their instructors; it’s just the impression I got as a reader.

In addition to descriptions of the three courses, the article makes some general observations. This struck me as the most provocative section:

One of the chief strengths of seminars is that they serve as pedagogical laboratories for faculty members, says Jennifer R. Keup, director of the National Resource Center for the First-Year Experience and Students in Transition, at the University of South Carolina.

But the variability of seminars, she says, is a weakness.

An analysis produced by the center measured the extent to which teaching practices that are generally accepted as strong were present in different types of seminars. Results suggested that those with varying content had relatively low levels of academic challenge and clarity of teaching, in part because their general quality varied so much, says Ms. Keup.

“When content is left up to the instructor, you cede a lot of control,” she says, which “makes it harder to engage and police the degree to which they’re doing them well.”

There’s no surer way to enrage faculty members than to suggest that it would be better to take control of the content of their courses away from them. And in fact, I think that the variety of course topics is a great strength of our program, particularly in comparison with its predecessor, a “core course” taught in many sections, by faculty members from many disciplines, with a common syllabus. (I wrote down some thoughts on this years ago, when we were considering making this transition. In short, I thought — and continue to think — that the old core course, in which instructors taught subjects in which they had no expertise, was something akin to educational malpractice.)

I certainly don’t deny that the problem of nonuniform levels of rigor is real. When we first adopted the first-year seminar program, people worried a lot about whether the seminars would set uniformly rigorous standards. The committee I currently serve on is tasked, among other things, with figuring out ways to ensure that they do, but it’s a hard problem, and I don’t think anybody knows how best to do it.

I don’t know any details of the study referred to in the above quote, but I have doubts about whether there’s necessarily a causal link between having courses with varying content and having courses with low / inconsistent standards. Anecdotally (for what little that’s worth), back in the old days of our core course, there was a great deal of variation in the standards set by different core instructors, although there’s no way of knowing whether it was greater than the variability in the current seminars.

I do worry that, in the first-year seminar program as it currently exists, we’re not doing enough to set uniformly high expectations, but I hope that we can figure out ways to do that while maintaining the program’s strengths — such as faculty teaching subjects in which they have expertise, and students choosing courses in subjects that they find interesting.

All-male physics departments are not necessarily more biased than other departments

I learned via my former student Brent Follin about a study performed by the American Institute of Physics on the distribution of women on the faculty of US physics departments. Inside Higher Ed has a summary.

The specific question they were looking at was whether physics departments whose faculty are 100% male are discriminating against women. From Inside Higher Ed:

More than one-third of physics departments in the United States lack a single female faculty member. That figure has been cited by some as evidence of discrimination. With women making up just 13 percent of the faculty members (assistant through full professors) in physics, could there be another explanation?

The authors, being physicists, answer this with a simulation. They show that, given that the overall percentage of women among physics faculty members is so low, the number of all-male departments is not surprising. To be specific, if you start with a pool of candidates that’s 13% female and distribute them at random among physics departments (many of which have a pretty small number of faculty members), you get a distribution pretty similar to the actual one. In fact, you’d get somewhat more all-male departments under the random-distribution scenario than we actually observe.
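To make the logic concrete, here’s a minimal sketch of that kind of simulation in Python. It’s my own illustration, not the AIP’s code, and the department sizes below are made-up stand-ins; the actual study used the real size distribution of US physics departments. Comparing the simulated number of all-male departments to the observed number (more than a third of departments, per Inside Higher Ed) is the heart of the argument.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical department sizes; the real study used the actual size
    # distribution of US physics departments.
    dept_sizes = rng.integers(3, 40, size=750)
    frac_women = 0.13      # overall fraction of women among physics faculty
    n_trials = 10_000

    all_male_counts = []
    for _ in range(n_trials):
        # Assign genders to faculty slots at random, independent of department
        # (a good approximation to scattering a fixed 13%-female pool at random).
        women_per_dept = rng.binomial(dept_sizes, frac_women)
        all_male_counts.append(np.sum(women_per_dept == 0))

    all_male_counts = np.array(all_male_counts)
    print(f"All-male departments under random assignment: "
          f"{all_male_counts.mean():.0f} +/- {all_male_counts.std():.0f} "
          f"out of {len(dept_sizes)}")

If the observed count falls comfortably inside (or below) this simulated distribution, then randomness plus a skewed applicant pool is enough to explain the all-male departments, with no need to invoke department-to-department differences in bias.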

Bizarrely, the original study says

Our results suggest that there is no bias against hiring women in the system as a whole.

This sentence strikes me as nearly the exact opposite of an accurate description. Here are two things that I think they could say:

  1. Our results do not show evidence either for or against the hypothesis of bias against hiring women in the system as a whole, because the study was not designed to answer this question.
  2. Our results show no evidence that all-male departments are more biased against hiring women than other departments.

Maybe it’s obvious what I mean here, but let me blather on a bit anyway.

The results show that, given the low percentage of women in the overall pool of people hired, you’d expect about as many all-male departments as there actually are (or even more). They say nothing about why the pool has so few women in the first place. That could be due to bias, or it could be due to other reasons: women might self-select out of careers in academic physics before they get to the faculty level, for instance, or women might be intrinsically less qualified than men (the latter is the “Larry Summers hypothesis,” which I emphatically do not endorse but which is a logical possibility [UPDATE: Phillip Helbig points out correctly that this is a mischaracterization of Summers. See his comment below.]). Since the study, by design, has nothing to say about why the pool skews so heavily male, it’s ridiculous to suggest that it supports any conclusion about whether there is bias in the “system as a whole.”

Here’s what the study does suggest: whatever factors are causing the overall pool to be heavily male, those factors are more or less evenly distributed across departments. If, for instance, some departments were heavily biased while others were strongly egalitarian, you’d expect women to be clustered in some departments and absent from others. The study shows that that’s not the case. If anything, the gender distribution is less clustered than you’d get by chance.

Despite my complaints, that’s an interesting fact to know. If you believe (as I do) that we could and should be doing better at getting women into physics and related fields, it makes a difference whether the problem lies in a few “bad apple” departments or is more evenly distributed. This data set suggests that it’s evenly distributed.

B modes detected!

A paper just came out describing the detection of B modes in the cosmic microwave background polarization. This is a great example of how specialized and technical science can get: to someone like me, who works on this stuff, it’s a big deal, but it takes quite a bit of effort to explain what those words even mean to a non-specialist. Peter Coles has a description. Here’s mine.

Let’s start with the cosmic microwave background, which is the oldest light in the Universe. It consists of microwaves that have been traveling through space ever since the Universe was about 1/30,000 of its present age. (If it’s made of microwaves, why did I call it light? Because microwaves are light! They’re just light with wavelengths that are too long for us to see.)

Maps of the varying intensity of this radiation across the sky give us information about what the Universe was like when it was very young. People have been studying these maps in detail for over 20 years, and were trying hard to make them for decades before that. For many years, studies of the microwave background focused on making and analyzing maps of the variations in intensity, but in recent years a lot of effort has gone into trying to map the polarization of the radiation.

Microwave radiation (like all light) consists of waves of electric and magnetic fields. Those fields can be oriented in different ways — for instance, a light wave coming right at you could have its electric field wiggling up and down or left and right. We say that this wave can be either vertically or horizontally polarized. Radiation can be unpolarized, meaning that the direction of the wiggles shifts around randomly among all the possibilities, but people predicted a long time ago that the microwave background radiation should be weakly linearly polarized, so that, in some parts of the sky, the electric fields tend to be oriented in one direction, while in other areas they tend to be oriented another way.  Lots of effort these days is going into detecting this polarization and mapping how it varies across the sky.

These polarization maps provide additional information about conditions in the young Universe, going beyond what we’ve gotten out of the intensity maps. In fact, if we can tease out just the right details from a polarization map, we can learn about entirely new phenomena that might have taken place when the Universe was extremely young. In particular, some varieties of the theory known as cosmic inflation predict that, when the Universe was very young, it was filled with gravitational waves (ripples in space). These waves would leave a characteristic imprint in the microwave background polarization. If you saw that imprint, it’d be totally amazing.

The problem is that the really amazing things you’d like to see in the polarization maps are all very very faint — much fainter than the “ordinary” stuff we expect to see. The “ordinary” stuff is still pretty cool, by the way — it’s just not as cool as the other stuff. Fortunately, we have one more trick we can use to see the exotic stuff mixed in with the ordinary stuff. There are two kinds of patterns that can appear in a polarization map, called E modes and B modes (for no particularly good reason). The E modes have a sort of mirror-reflection symmetry, and the B modes don’t. A lot of the “ordinary” polarization information shows up only in the E modes, so if you can measure the B modes, you have a better chance of measuring the exotic stuff.
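For the technically inclined, here’s roughly what that symmetry statement looks like in equations. This is one standard (flat-sky) convention for defining E and B modes, textbook material rather than anything specific to the new paper. Writing the polarization in terms of the Stokes parameters Q and U and taking Fourier transforms,

    \[
      E(\boldsymbol{\ell}) = \tilde{Q}(\boldsymbol{\ell})\cos 2\phi_{\ell}
                           + \tilde{U}(\boldsymbol{\ell})\sin 2\phi_{\ell},
      \qquad
      B(\boldsymbol{\ell}) = -\tilde{Q}(\boldsymbol{\ell})\sin 2\phi_{\ell}
                           + \tilde{U}(\boldsymbol{\ell})\cos 2\phi_{\ell},
    \]

where phi_ell is the angle of the Fourier wave vector. Under a mirror reflection, E stays the same while B flips sign; that’s the asymmetry I was describing.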

That’s why people have spent a long time trying to figure out how to (a) make maps of the polarization of the microwave background radiation and (b) extract the “B modes” from those maps. The new paper announces that that goal has been accomplished for the first time.

This detection does not show anything extraordinary, like gravitational waves crashing around in the early Universe. The B modes they found matched what you’d expect to find based on what we already knew. But it shows that we can do the sorts of things we’ll need to do if we want to search for the exotica.

The fact that this experiment found “only” what we expected may sound kind of uninteresting, but that’s just because we’ve gotten a bit jaded by our success up to this point. The signal that these people found is the result of photons, produced in the early Universe, following paths through space that are distorted by the gravitational pull of all the matter in their vicinity. The fact that we can map the Universe well enough to know what distortions to expect, and then go out and see those distortions, is amazing. It’s another bit of evidence that we really do understand the structure and behavior of the Universe over incredibly large scales of distance and time.

Incidentally, the lead author on this paper, Duncan Hanson, is a former collaborator of mine. When he was an undergraduate, he and I and Douglas Scott coauthored a paper. It’s great to see all the terrific work he’s done since then.

(Just to avoid any misunderstanding, as much as I would like to, I can’t claim Duncan for the University of Richmond. Although he and I worked together when he was an undergraduate, he wasn’t an undergraduate here. He was at the University of British Columbia.)

Predictions are hard, especially about the future

Although I’m geeky in a bunch of different ways, I’ve never been a big science fiction reader. Lately, though, I’ve been trying out some of the classics of the genre. I just finished Asimov’s Foundation trilogy, which I enjoyed quite a bit. I think it gets better as it goes along: things get more morally ambiguous and interesting in the last part.

It’s hard (at least for me) to read futuristic stories without evaluating them for accuracy or plausibility. That’s not necessarily a fair form of criticism: the author’s goal is to tell a good story, regardless of whether it’s accurate. But it’s an irresistible game to play.

In that spirit, this description of a new navigation system, just implemented in the starships of 25,000 years in the future, struck me as amusing:

Bail Channis sat at the control panel of the Lens and felt again the involuntary surge of near-worship at the contemplation of it. He was not a Foundation man and the interplay of forces at the twist of a knob or the breaking of a contact was not second nature to him. Not that the Lens ought quite to bore even a Foundation man. Within its unbelievably compact body were enough electronic circuits to pinpoint accurately a hundred million separate stars in exact relationship to each other. And as if that were not a feat in itself, it was further capable of translating any given portion of the Galactic Field along any of the three spatial axes or to rotate any portion of the Field about a center.

So this amazing device can manipulate a few gigabytes of data, applying translations and 3D rotations to it? That’d be a difficult job for your phone, but utterly trivial for, say, your Xbox, let alone the high-end computers of today. The Foundation has nuclear power plants you can carry around in your pocket, but its computing expertise is stuck in the 20th century.
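For scale: a hundred million stars with three coordinates each, stored as double-precision numbers, come to about 2.4 GB, and the Lens’s core operation is one matrix multiply plus one addition. Here’s a hypothetical sketch in present-day Python/NumPy; the rotation angle and the offset are arbitrary illustrations.

    import numpy as np

    n_stars = 100_000_000                # "a hundred million separate stars"
    stars = np.random.rand(n_stars, 3)   # x, y, z positions; about 2.4 GB as float64
                                         # (shrink n_stars if you're short on memory)

    # Rotate the whole Galactic Field about the z-axis, then translate it.
    theta = np.radians(30.0)
    rotation = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                         [np.sin(theta),  np.cos(theta), 0.0],
                         [0.0,            0.0,           1.0]])
    offset = np.array([1.0, -2.0, 0.5])

    # One matrix multiply and one add over the whole array; a second or two
    # on ordinary current hardware.
    transformed = stars @ rotation.T + offset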

Asimov goes on to describe how a human operator can use this to figure out where he is in the Galaxy, by looking at the patterns of stars and matching them up by eye, in “less than half an hour.” Of course, this would, even today, be trivial to do in software in a fraction of a second.

(In fairness, I should mention the possibility that “million” is a misprint for “billion.” The number of stars in the Galaxy is of order 100 billion, as Asimov surely knew. If you make that switch, it becomes a more daunting task, although still perfectly manageable with good present-day computers.)

Again, I don’t mean this as a serious criticism of Asimov. I just thought it was amusing.

If you really want to criticize Asimov for mis-guessing about the future, you’ll find much more fertile ground in his depictions of how people behave. Asimov’s characters act an awful lot like men (pretty much always men) of the mid-20th century. They spend an astonishing amount of time reading newspapers and offering each other cigars, for instance.

One more thing. According to Wikipedia,

In 1965, the Foundation Trilogy beat several other science fiction and fantasy series (including The Lord of the Rings by J. R. R. Tolkien) to receive a special Hugo Award for “Best All-Time Series.” It is still the only series so honored. Asimov himself wrote that he assumed the one-time award had been created to honor The Lord of the Rings, and he was amazed when his work won.

I’ll reveal my geek allegiance: Asimov was right to be amazed. The Foundation books are good, but the idea of them winning in a head-to-head competition with Lord of the Rings is absurd.

Anyone speak Ukrainian?

About 17 years ago, I wrote an explanation of some stuff about black holes. To my astonishment, people still read it from time to time.

Recently I got an email from someone asking if he could translate it into Ukrainian. I said yes, and here’s the result.

Not speaking Ukrainian, I can’t vouch for the translation’s accuracy. Google Translate shows it to have a bunch of the right words in the right places, but it also has things like this:

 Beyond the horizon, the escape velocity less than the speed of light, so if we fire your rockets hard enough, we can give them, and enough energy to pita bread.

There are two different ways prinaymni viznachiti how something is bicycle.

The sun, for example, has a radius of about 700,000 km, and so, that nadmasivna black hole has a radius of only about chotiroh times Buy it, than sun.

Let A valid, that we otrimuyete in thy spaceship and send directly to the ego million solar masses black hole at the center of our galaxy. (In fact, there is some debate about the our galaxy provides a central black hole Admissible Ale, it does a lot of Denmark at the time.) Since far from the black hole, we just vimknit rockets and coast in. What’s going on?

Chi is not a black hole if viparuyetsya under me, woo something I achieve?

Anyway, if you happen to speak Ukrainian, check it out.

 

Before asking why, ask whether

For some reason, I can’t resist hate-listening to Frank Deford’s sports commentaries on NPR’s Morning Edition, despite the fact that (a) I’m not much of a sports fan and (b) he has an incredibly irritating, self-important style, intended primarily to convince the listener of his own erudition.

(Incidentally, Deford is going to share a stage with our own University of Richmond President, Ed Ayers, as a recipient of a National Humanities Medal from President Obama. Congratulations, President Ayers!)

From his latest:

More recently, we’ve tended to excuse the virtual cavalcade of criminal actions committed by players away from the gridiron. Why, 29 NFL players have been arrested just since the Super Bowl … We need to somehow clean the Aegean stables of the stink of violence.

(Speaking of pointless erudition, and leaving myself open to pot-kettle comparisons, let me note that I suspect that an editor, rather than Deford himself, is to blame for writing “Aegean” instead of “Augean.” Deford pronounces the word with a clear “Au” sound.)

Deford starts from the premise that NFL players are thugs and criminals, based on this evidence, but he never asks the obvious question: Is 29 a large number of arrests? There are, after all, a lot of NFL players.

The answer, it turns out, is no. NFL players get arrested about half as often as the US population as a whole. If you compare them to men of similar age, rather than to the population as a whole, the difference is even bigger: NFL players are arrested about one-fifth as often as their peers. The BBC programme More Or Less has the details.

In short, we seem to be in the middle of a moral panic over NFL players’ thuggery at the moment, despite the fact that the players are disproportionately law-abiding. Odd.

Of course, this comparison isn’t perfect. For one thing, NFL players are all rich (once they’ve got their contracts — not necessarily before that, of course), and crime tends to be concentrated among the poor. If you compare arrest rates for NFL players to arrest rates for people with similar incomes, presumably you’d get a different answer.

Also, the data used by More Or Less concerns all arrests. Since the chief concern seems to be whether NFL players are disproportionately violent, it might be more relevant to look at just arrests for violent crimes. Let’s give it a try.

Combining FBI and census data, it looks like about 0.5% of young men (ages 20-35) are arrested for a violent crime each year. There are over 1,600 NFL players in total, so if NFL players are typical of their age cohort, you’d expect about 8 players per year to be arrested for a violent crime. According to the San Diego Union-Tribune’s NFL Arrests Database, the actual number is quite close to this. I can’t pin it down exactly, because I don’t have the legal expertise to know which arrests count as “violent crimes” under the FBI’s definition, but looking at the past year and counting borderline cases, I can’t make the number come out above 10.
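Here’s that arithmetic, plus a quick check (my own addition, not something from More Or Less or the Union-Tribune) of whether a count of around 10 would be a meaningful excess over the expected 8. It wouldn’t: a count that size or larger turns up roughly a quarter of the time by chance alone.

    from scipy.stats import poisson

    arrest_rate = 0.005    # roughly 0.5% of men aged 20-35 arrested for a violent crime per year
    n_players = 1600       # roughly the number of players on NFL rosters
    expected = arrest_rate * n_players
    print(f"Expected violent-crime arrests per year: {expected:.0f}")   # about 8

    # If NFL players were arrested at exactly their cohort's rate, how often
    # would we see 10 or more arrests in a year just from Poisson fluctuations?
    observed = 10
    print(f"P(at least {observed} arrests) = {poisson.sf(observed - 1, expected):.2f}")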

So, based on this data set, there’s certainly no significant evidence that NFL players are more violent than their peers.

This is not, of course, to excuse anybody’s bad behavior. In particular, the largest category of arrests (both for NFL players and for young men as a whole) is drunk driving, which the FBI does not count as a violent crime but which is still an appalling thing to do. But when considering what to do about problems like this, it’s better to work from actual data than to decide, on the basis of confirmation bias, that a certain population is disproportionately violent.

Why there aren’t more science majors

Matt Yglesias points out a National Bureau of Economic Research working paper entitled “A Major in Science? Initial Beliefs and Final Outcomes for College Major and Dropout.” The abstract:

Taking advantage of unique longitudinal data, we provide the first characterization of what college students believe at the time of entrance about their final major, relate these beliefs to actual major outcomes, and, provide an understanding of why students hold the initial beliefs about majors that they do. The data collection and analysis are based directly on a conceptual model in which a student’s final major is best viewed as the end result of a learning process. We find that students enter school quite optimistic/interested about obtaining a science degree, but that relatively few students end up graduating with a science degree. The substantial overoptimism about completing a degree in science can be attributed largely to students beginning school with misperceptions about their ability to perform well academically in science.

Yglesias’s gloss:

This is important to keep in mind when you hear people talk about the desirability of increasing the number of students with STEM degrees. To make it happen, you probably either need better-prepared 18-year-olds or else you have to make the courses easier. But it’s not that kids ignorantly major in English totally unaware that a degree in chemistry would be more valuable.

This strikes me as oversimplified in one important way. Assume for the sake of argument that the article is right about students’ reasons for switching away from science majors, and also that having more science majors would be desirable. Changing the science curriculum to retain those students is certainly a possible solution, but that doesn’t necessarily mean “mak[ing] the courses easier”; it might just mean making the courses better. Students may come out of an introductory science course filled with doubt about their ability to succeed in future courses, not because the course was too hard, but because it didn’t teach them the skills they needed.

This is not a mere hypothetical possibility: unfortunately, it’s exactly what happens in lots of courses. Around 25 years ago, when serious academic research into physics education got underway, study after study showed that students were coming out of introductory physics courses without having learned the things that their teachers thought they were teaching them. (Scores on post-instruction tests of basic conceptual understanding of Newton’s laws came out pretty much identical to the scores on the same tests given before instruction.)

There are a bunch of strategies for improving the way we teach. Some instructors and institutions have implemented them; many haven’t. There’s lots of room to make things better.

In short, Yglesias seems to be suggesting that we can’t solve the problem without dumbing down the science curriculum, but that may be a false choice.

One related point: Students come out of introductory classes with widely varying levels of confidence about their ability to succeed in future science courses. That’s not surprising, of course. What I’ve found surprising over the years is how weakly correlated students’ confidence is with their actual ability. I urge the strong students in my introductory courses to go on to further study of physics, and I routinely hear them say, “I don’t think I’m any good at it.”  Again, the solution is not to make the course easier, but for me to do a better job at letting students know that they are good at it (if they are, of course).

In my (anecdotal but extensive) experience, as well as that of my colleagues, this phenomenon is highly gender-linked. In a typical section of introductory physics, the three best students are women, and the three students who think they’re best are men. The student who gets an A in my class and says “I don’t think I’m any good at this” is almost always female.

Although I find the overall results of this study quite plausible, it is worth pointing out a couple of caveats. The sample size is only about 600 students. Once you start slicing that, first into students who initially expressed an interest in science, and then into the subset who got discouraged and switched, you’re quickly looking at very small numbers. Moreover, the study was performed at a single college, namely Berea College. You should probably use caution when extrapolating the results to other colleges, especially since Berea is quite different (in some excellent ways, such as focusing on students of limited economic means and educating them for free) from almost everywhere else.