Quantitative thinking and environmentalism

I like the BBC podcast More or Less, which analyzes news stories from a quantitative perspective.  It tries to teach the skill that one might call “data literacy,” that is, the ability to examine statements about data and statistics critically and logically.  I’ve unsuccessfully argued in the past that data literacy should be an explicit goal of our education system.  Until we reach that goal, we could do a lot worse than make this podcast required listening.

The most recent installment began with an interview with David MacKay, the author of a book that quantitatively compares different approaches to reducing carbon emissions. Afterwards, Rebecca Willis of the Sustainable Development Commission offered a rebuttal of sorts:

David MacKay’s position on nuclear power, I think, exposes what for me is one of the weaknesses of his book. His approach is to boil it all down to a giant equation … It’s not about giant equations. It’s not about which mix of electricity generation we need.  It’s essentially about how we can lead happy lives, while using less than a quarter of the carbon that we do at the moment.

This sort of talk infuriates me.  The last sentence is certainly true, but the way to get there is to figure out what works and what doesn’t, and yes, that means equations.  People who want to achieve Willis’s goal should be embracing the mindset of people like MacKay who are trying to figure these things out.
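To make this concrete, here’s the flavor of calculation at issue: a back-of-envelope estimate of how much power per person a country could get from onshore wind.  To be clear, this is my own sketch, not something taken from MacKay’s book, and every number in it is a round, illustrative assumption (the power density, land area, population, and coverage fraction alike):

```python
# Back-of-envelope estimate: wind power per person for a UK-sized country.
# Every number here is a round, illustrative assumption, not a measured value.

wind_power_density = 2.0   # W per m^2 of wind-farm area (a commonly quoted rough figure)
land_area = 2.4e11         # m^2, roughly the land area of the UK
population = 6.0e7         # people, roughly the population of the UK
fraction_covered = 0.10    # suppose 10% of the land were devoted to wind farms

watts_per_person = wind_power_density * land_area * fraction_covered / population
kwh_per_day = watts_per_person * 24 / 1000   # convert watts to kWh per day

print(f"About {kwh_per_day:.0f} kWh per day per person")   # roughly 19 kWh/day/person
```

Crude as it is, an estimate like this immediately raises the right next question (how does that compare with what each of us actually consumes?), which is exactly the sort of question that talk about happy lives can’t answer.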

I haven’t read MacKay’s book, and I have no idea whether his calculations are right or not.  But Willis doesn’t say anything (in the above quote or elsewhere in the interview) suggesting that the calculations are wrong — it seems to be the whole idea of calculations that bothers her.

I don’t mean to pick on Willis, but it seems to me that this attitude is common among environmentalists.  I think the problem is that, if you view bad environmental behavior as a personal moral failing, then thinking about it in merely quantitative terms seems inadequate.

The same thing comes up in discussions about the purchase of carbon offsets.  Is it OK for me to fly on a plane, if I purchase offsets to account for the associated CO2?  It seems to me that the answer to this is technical: If carbon offsets actually work (that is, if they result in the promised amount of CO2 being removed from the atmosphere, when it otherwise wouldn’t have been), then the answer is clearly yes.  Of course, it’s hard to answer that technical question!  But it seems to me that many people object to offsets, not on the grounds that they don’t work, but on the grounds that even framing the question in this way is wrong: If you view carbon emission as a sin, then offsets are morally unsavory “indulgences” you can buy to atone for the sin.  I think that this Manichaean mindset is unhelpful: what’s good in this case is what works, and calculation is the way we figure out what works.
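In fact, the logic of the offset question fits in a few lines.  Here’s a minimal sketch with made-up numbers; the one hard empirical input, the effectiveness fraction, is exactly the thing we don’t know:

```python
# The offset question in miniature (all numbers hypothetical).
flight_emissions = 1.0   # tonnes of CO2 emitted by the flight
offsets_bought = 1.0     # tonnes of CO2 the offset promises to remove
effectiveness = 1.0      # fraction actually removed: the hard empirical question

net_emissions = flight_emissions - effectiveness * offsets_bought
print(net_emissions)     # 0.0 if offsets fully work; the whole debate is over 'effectiveness'
```

If the effectiveness really is close to 1, flying plus offsetting is carbon-neutral and the moral objection evaporates; if it’s close to 0, offsets are indeed a sham.  Either way, it’s a number, not a sin, that settles the matter.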

Strunk and White

I’ll admit to being a bit of a grammar geek (in addition to several other kinds of geek), so I was interested in Geoffrey Pullum’s takedown of Strunk and White in the Chronicle of Higher Education.  I have positive impressions of Strunk and White from school, but I haven’t actually looked at it much in recent years.

Some of the review’s criticisms are kind of silly, I think:

Notice what I am objecting to is not the style advice in Elements, which might best be described the way The Hitchhiker’s Guide to the Galaxy describes Earth: mostly harmless. Some of the recommendations are vapid, like “Be clear” (how could one disagree?). Some are tautologous, like “Do not explain too much.” (Explaining too much means explaining more than you should, so of course you shouldn’t.) Many are useless, like “Omit needless words.” (The students who know which words are needless don’t need the instruction.) Even so, it doesn’t hurt to lay such well-meant maxims before novice writers.

I want to defend “Omit needless words.”  What it means is that you should constantly ask yourself, as you’re writing (or, better yet, as you’re editing), whether you’re including unnecessary words.  It’s not just a matter of knowing which words are unnecessary; the more important point is that this is something you should pay attention to.  This is actually one of the most useful of all S&W’s maxims, and one of the hardest to follow.

The really scathing part of the review is the stuff about grammar and usage (as opposed to style).  Pullum makes a convincing case that S&W are utterly incoherent on a lot of these points:

What concerns me is that the bias against the passive is being retailed by a pair of authors so grammatically clueless that they don’t know what is a passive construction and what isn’t. Of the four pairs of examples offered to show readers what to avoid and how to correct it, a staggering three out of the four are mistaken diagnoses. “At dawn the crowing of a rooster could be heard” is correctly identified as a passive clause, but the other three are all errors:

  • “There were a great number of dead leaves lying on the ground” has no sign of the passive in it anywhere.
  • “It was not long before she was very sorry that she had said what she had” also contains nothing that is even reminiscent of the passive construction.
  • “The reason that he left college was that his health became impaired” is presumably fingered as passive because of “impaired,” but that’s a mistake. It’s an adjective here. “Become” doesn’t allow a following passive clause. (Notice, for example, that “A new edition became issued by the publishers” is not grammatical.)

Pullum’s completely right on all this. He also has great examples of sentences from S&W that manage to simultaneously violate four or five of the authors’ own rules and yet sound just fine.  (Yes, I know that’s a split infinitive.  No, I don’t care.)

The stricture against the passive voice is a constant problem for scientists, because we’re also often taught to avoid writing in the first person when describing our procedures.  So should I say “The decay rate was measured” or “I measured the decay rate”?  Either way, someone will be mad at me. Once you realize this, it’s actually kind of liberating: since someone will be mad either way, just do as you please.

By the way, my favorite usage guide is Bryan Garner’s Dictionary of Modern American Usage, which I first learned about from a lengthy review by David Foster Wallace in Harper’s in 2001 (not linkable, as far as I can tell).

Thoughts on core

The University of Richmond is currently considering various options for changing the required courses for first-year students.  For over a decade now, all first-year students have been required to take a two-semester core course, which is taught in many sections by many faculty members, but which has a common syllabus in which everyone studies, talks about, and writes about the same Important Texts.  Proposals have been made to change core, possibly replacing one or both semesters of it with first-year seminars on a wide variety of different topics, so that both instructors and students could choose to study topics of particular interest to them.

I’ve had a number of discussions on these options with my colleagues, including several members of the committee charged with developing them.  In at least one case, I think I utterly failed to make a colleague understand my point of view, and since he’s a very smart guy, I conclude that I explained myself poorly.  Here’s my attempt to do better.

I agree with the general proposition that it’s a good idea to require first-year students to take rigorous, writing-intensive courses in which they engage with big, important ideas.   These are all goals of core as presently constituted.  But there’s another aspect of core, namely that all students should simultaneously engage with the same big, important ideas — that is, that all of the 30+ core sections should read the same books.  I’m completely unconvinced of the merit of this, and I think that there are big disadvantages associated with it.  Something like the freshman-seminar model, in which students in different seminars study different things, seems to me better in virtually every way.

I’ll get into detailed arguments below.  First, though, I want to make one thing very clear: I don’t think that core as it exists is a bad thing, just that it’s not the best thing we could be doing with the resources at hand.  If we were talking about replacing core with nothing, I think I’d be opposed to that replacement.  But I think we can and should replace core with something better, and I think that a set of first-year seminars would be such a thing.

To me, the main disadvantage of core is that few faculty members have expertise in all or most of the various areas studied.

I don’t think that most proponents of core dispute this fact, but many deny its importance.  The argument goes that the point of core is to “engage with” the texts, and to use them to get practice thinking about big, difficult ideas.  The instructor is not supposed to be an expert but rather a facilitator in this process of intellectual maturing, so expertise doesn’t matter.

I don’t buy this.  Presumably the texts in core were chosen because the ideas in them are actually important and worth understanding.  Those ideas are in many cases also difficult to understand.  Someone who’s spent a lot of time studying Nietzsche, or Plato, or Darwin, or many of the other core texts, will be better able to facilitate students’ understanding of these difficult ideas than someone who hasn’t.  In each case, there are fruitful modes of thought for engaging with the ideas, and other modes of thought that are not fruitful.  A first-year student engaging with these texts needs a guide who can steer them in the fruitful directions.  That is, they need an instructor who’s got expertise.

If I put in a huge amount of effort over a long time, maybe I could reach the point where I could do a barely adequate job guiding a student through Nietzsche.  But with the same amount of effort (or less) I could do a great job guiding the student through Galileo.   Which is a better use of faculty resources?  Which gives the student a better experience?

(Incidentally, people have told me that I’m selling myself short in the above statement.  I honestly don’t think I am.  I have many flaws, but a low opinion of my own intellect is not one of them.  I think I’m a pretty smart guy.  I just don’t think that being smart is enough to make up for a lifetime of not studying something.)

A related issue is that core as presently taught is largely housed in the humanities, with relatively little participation from other parts of the university.  This year, roughly half of the core instructors are from departments having to do with languages and literatures; if you combine those with history, philosophy, and the arts, you get about 3/4 of the instructors coming from the humanities broadly construed.  There’s one instructor from mathematics, one from leadership, and none from the natural sciences or business.  Maybe that’s OK, but I think it’d be better if the first-year core courses were spread out more broadly.  I don’t know for sure that that’d happen with a first-year-seminar model, but the odds have got to be better.

Now let me discuss a few of the arguments I’ve heard in favor of core:

1. The intellectual climate of the student body as a whole is enhanced, because all students campuswide can discuss the same body of work.

In principle, I guess that’s possible.  I’d like to see some evidence that it actually makes a significant difference. Do first-year students actually discuss the core texts with others who are not in their core class?  Does that add to the intellectual climate more than the alternative, which is students having a bunch of different intellectually rigorous experiences that they can discuss with their friends?  Personally, I doubt it.  I think that exciting, rigorous, demanding course work has the potential to improve the intellectual climate, but I’m not convinced that there’s significant value added in that course work being uniform across campus.

If you have actual data to suggest otherwise, please show it to me.  (If you have anecdotes, on the other hand, please don’t.  I’ll make a deal with you: I won’t mention my anecdata if you don’t mention yours.)

2.  The particular ideas in these particular texts are so important that all students must read them in order to be considered educated.

In fairness, I’ve never encountered this argument firsthand; I’ve just heard it by hearsay.  So maybe nobody really believes this.

Anyway, the problem with this argument is that there are literally hundreds of texts as important as the ones on the core syllabus.  If a student can’t consider herself well-educated without reading, say, The House of Mirth, then surely she can’t consider herself well-educated without reading Adam Smith, Galileo, Dante, Dickens, Mary Wollstonecraft, Lao-Tzu, Hume, Einstein, and so forth.  (By the way, I don’t mean to pick on The House of Mirth — it’s one of my favorite novels.)  The scholarly world is full of big, important ideas.  We can have a system in which students and faculty choose from a broad range of these ideas, while still guaranteeing that everyone grapples with big, important ones.

3. The fact that all core sections study the same texts acts as a sort of “quality control”: in a seminar system, it’d be harder to assure that all students were getting an equally rigorous experience.

We can, if we choose, impose uniformity of expectations on seminars.  We can mandate a certain amount of writing, and we can have a faculty committee vet the syllabi to decide whether the topics and readings are hard and significant enough.  If we do that, I don’t see that the quality-control issues are any worse than with core as it exists now.  Once again, if there’s any non-anecdotal evidence about the degree of uniformity of core expectations, I’d be interested to hear about it.  (It’s taking all my self-control to abide by my earlier promise to keep my own anecdotal evidence to myself, by the way.)

The issue of quality control comes up in other places as well, of course.  Take the general-education requirements, for example.  In order for a course to be designated as meeting one of the field of study requirements, its course description must be approved by some faculty body, and then after that we trust our faculty colleagues to behave professionally and do what they’ve promised to do.  I don’t see why the quality-control issues are significantly different for a first-year seminar program.

To summarize, I’m strongly in favor of a demanding, writing-intensive first-year experience in which students engage with big, difficult ideas.  But I think we should choose a model for that experience that lets students pick from a wide variety of big ideas, rather than one that requires everyone to study the same thing.

Warning: Geeky humor ahead

Here are two things I thought were pretty funny:

1. A former student sent me this spoof article showing that the value of pi has changed over time.  Although it’s mostly making fun of the recent (and scientifically legitimate, by the way) studies of whether fundamental physical constants vary over time, I think I detect a hint of mockery of the old Creationist claim that the speed of light has changed over time.  (The claim used to be that the speed went to infinity, conveniently enough, about 6000 years ago.)

2. From Andrew Jaffe: This erratum was published in the Astrophysical Journal:

As a result of an error at the Publisher, the term "frequentist" was erroneously changed to "most frequent" throughout the article. IOP Publishing sincerely regrets this error.

You probably have to be someone like me who uses the word “frequentist” in casual conversation to find this funny.