Pinholes and eclipses

My friend Julie Laskaris showed me a picture of sunlight passing through leaves during a partial solar eclipse, something like this:

(Image from here.)

I remember seeing the same phenomenon during an eclipse I saw in Berkeley in 1991.

The crescent shapes occur because each little gap in the leaves acts like a pinhole camera, projecting an upside-down image of the partially-blocked Sun. If you’re going to see the eclipse in a couple of weeks, look out for this effect. You can even produce it for yourself by holding up your hands so that your fingers make small gaps that the sunlight can pass through.
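
For a quick sense of scale, here’s a back-of-the-envelope sketch in Python (the 0.53° angular diameter of the Sun is the only physical input; the 3-meter canopy height is just an assumed example):

```python
import math

sun_angle = math.radians(0.53)  # the Sun's angular diameter, in radians
gap_height = 3.0                # assumed distance from the leaf gap to the ground, meters

# For a gap much smaller than the image, the pinhole image size is
# just distance times angular size.
image_size = gap_height * sun_angle
print(f"{image_size * 100:.1f} cm")  # ~2.8 cm crescent from a 3 m canopy
```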

I was looking around for explanations of this phenomenon, and I came across this NASA page. One thing I was very interested to learn from this page is that Aristotle noted this phenomenon:

In the fourth century BC, Aristotle was puzzled. “Why is it that when the sun passes through quadrilaterals, as for instance in wickerwork, it does not produce a figure rectangular in shape but circular?” he wrote. “Why is it that an eclipse of the sun, if one looks at it through a sieve or through leaves, such as a plane-tree or other broadleaved tree, or if one joins the fingers of one hand over the fingers of the other, the rays are crescent-shaped where they reach the earth?”

I’ve read a lot of Aristotle’s writing on astronomical topics, and I’ve even inflicted him on my students from time to time, but I’d never encountered this. It turns out that it’s in the book Problems, which I’d never heard of and which some people seem to think might not be Aristotle at all.

Julie is a classicist, so it was quite fitting that her showing me this picture led me to learn something about Aristotle.

Math puzzles

My brother sent me a picture of a book that one of my nephews was planning to bring with him on a long plane flight:

[Photo: the cover of one of Martin Gardner’s mathematical puzzle collections.]

I was thrilled to see this, because Martin Gardner’s books are a huge reason that I am the way I am today. I suspect that lots of math and science nerds would say the same. I love the idea of yet another generation being exposed to them.

In honor of Martin Gardner, here’s a math puzzle I just saw. It was posted on the wall of the UR math department. One of the faculty there posts weekly puzzles and gives a cookie to anyone who solves them. I don’t know if this one is still up there, but if it is, and you hurry, a cookie could be yours.

A mole is on a small island at the exact center of a circular lake. A fox is at the edge of the lake. The fox wants to catch and eat the mole; the mole wants to avoid this. The mole can swim at a steady speed. The fox can’t swim but can run around the edge of the lake, four times as fast as the mole can swim. If the mole makes it to the edge of the lake, she can very quickly burrow into the ground. Can the mole escape? (That is, can she reach a point on the edge of the lake before the fox gets to that same point?)

You can assume that the fox and mole can see each other at all times, and that they can change their speed and direction instantaneously.

This is a great Gardner-esque puzzle, because you don’t need any advanced mathematics to solve it. All you need is distance = rate x time and the rule for the circumference of a circle.

If you want something harder, you can try to figure out the minimum speed that the fox must have in order to be able to catch the mole. (That is, what would you have to change the number four to, in order to change the answer?) I think I’ve worked that one out, although I could have made a mistake.
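
If you want to experiment with strategies before earning your cookie, here’s a toy simulation sketch (entirely my own construction, not part of the puzzle): discrete time steps, a fox that greedily runs around the rim toward the mole’s current angle, and a plug-in mole strategy. The straight_away strategy below is a naive placeholder; the fun is in writing one that works.

```python
import math

R, mole_speed, fox_speed, dt = 1.0, 1.0, 4.0, 1e-4

def simulate(mole_strategy, t_max=10.0):
    """Run until the mole lands or time runs out. Returns the angular gap
    (radians) between the fox and the landing point, or None on a stalemate."""
    x, y = 0.0, 0.0  # mole starts at the center of the lake
    fox = 0.0        # fox's position on the rim, as an angle
    t = 0.0
    while math.hypot(x, y) < R:
        if t > t_max:
            return None  # the mole never reached the shore
        dx, dy = mole_strategy(x, y, fox)
        norm = math.hypot(dx, dy) or 1.0
        x += mole_speed * dt * dx / norm
        y += mole_speed * dt * dy / norm
        # The fox runs along the rim toward the mole's current angle.
        diff = (math.atan2(y, x) - fox + math.pi) % (2 * math.pi) - math.pi
        step = fox_speed * dt / R
        fox += max(-step, min(step, diff))
        t += dt
    landing = math.atan2(y, x)
    return abs((landing - fox + math.pi) % (2 * math.pi) - math.pi)

def straight_away(x, y, fox):
    # Naive placeholder strategy: always swim directly away from the fox.
    return (x - R * math.cos(fox), y - R * math.sin(fox))

print(simulate(straight_away))  # gap in radians, or None if she never lands
```

(The radius and mole speed are set to 1 just to fix the units; only the speed ratio matters.)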

The Neyman-Scott paradox

I’m mostly posting this to let you know about Peter Coles’s nice exposition of something called the Neyman-Scott paradox. If you like thinking about probability and statistics, and in particular about the difference between Bayesian and frequentist ways of looking at things (and who doesn’t like that?), you should read it.

You should read the comments too, which have some actual defenders of the frequentist point of view. Personally, I’m terrible at characterizing frequentist arguments, because I don’t understand how those people think. To be honest, I think that Peter is a bit unfair to the frequentists, for reasons that you’ll see if you read my comment on his post. Briefly, he seems to suggest that “the frequentist approach” to this problem is not what actual frequentists would do.

The Neyman-Scott paradox is a somewhat artificial problem, although Peter argues that it’s not as artificial as some people seem to think. But the essential features of it are contained in a very common situation, familiar to anyone who’s studied statistics, namely estimating the variance of a set of random numbers.

Suppose that you have a set of measurements x1, …, xm. They’re all drawn from the same probability distribution, which has an unknown mean and variance. Your job is to estimate the mean and variance.

The standard procedure for doing this is worked out in statistics textbooks all over the place. You estimate the mean simply by averaging together all the measurements, and then you estimate the variance as

estimated variance = [ (x1 − mean)² + (x2 − mean)² + … + (xm − mean)² ] / (m − 1)

That is, you add up the squared deviations from the (estimated) mean, and divide by m − 1.

If nobody had taught you otherwise, you might be inclined to divide by m instead of m − 1. After all, the variance is supposed to be the mean of the squared deviations. But dividing by m leads to a biased estimate: on average, it’s a bit too small. Dividing by m − 1 gives an unbiased estimate.

In my experience, if a scientist knows one fact about statistics, it’s this: divide by m − 1 to get the variance.
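
If you’ve never seen the bias in action, it’s easy to check with a quick simulation. Here’s a minimal sketch (assuming NumPy): draw many small samples from a distribution with known variance and average the two estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
m, trials, true_var = 5, 200_000, 4.0

# Many samples of size m from a distribution with variance 4.
samples = rng.normal(loc=10.0, scale=np.sqrt(true_var), size=(trials, m))
dev2 = (samples - samples.mean(axis=1, keepdims=True)) ** 2

print((dev2.sum(axis=1) / m).mean())        # ~3.2 = (m-1)/m * 4: biased low
print((dev2.sum(axis=1) / (m - 1)).mean())  # ~4.0: unbiased
```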

Suppose that you know that the numbers are normally distributed (a.k.a. Gaussian). Then you can find the maximum-likelihood estimator of the mean and variance. Maximum-likelihood estimators are often (but not always) used in traditional (“frequentist”) statistics, so this seems like it might be a sensible thing to do. But in this case, the maximum-likelihood estimator turns out to be that bad, biased one, with m instead of m – 1 in the denominator.

The Neyman-Scott paradox just takes that observation and presents it in very strong terms. First, they set m = 2, so that the difference between the two estimators will be most dramatic. Then they imagine many repetitions of the experiment (that is, many pairs of data points, with different means but the same variance). Often, when you repeat an experiment many times, you expect the errors to get smaller, but in this case, because the error in question is bias rather than noise (that is, because it shifts the answer consistently one way), repeating the experiment doesn’t help. So you end up in a situation where you might have expected the maximum-likelihood estimate to be good, but it’s terrible.
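
Here’s the same observation in code, a minimal sketch of the Neyman-Scott setup (my illustration, not Peter’s): many pairs, each with its own mean but a common variance. However many pairs you add, the maximum-likelihood variance estimate settles on half the true value.

```python
import numpy as np

rng = np.random.default_rng(1)
pairs, true_var = 100_000, 4.0

means = rng.uniform(-100, 100, size=pairs)  # each pair has a different mean
data = rng.normal(means[:, None], np.sqrt(true_var), size=(pairs, 2))

# Maximum likelihood: squared deviations from each pair's own mean, divided by m = 2.
dev2 = (data - data.mean(axis=1, keepdims=True)) ** 2
print((dev2.sum(axis=1) / 2).mean())  # ~2.0: half the true variance of 4
```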

Bayesian methods give a thoroughly sensible answer. Peter shows that in detail for the Neyman-Scott case. You can work it out for other cases if you really want to. Of course the Bayesian calculation has to give sensible results, because the set of techniques known as “Bayesian methods” really consist of nothing more than consistently applying the rules of probability theory.

As I’ve suggested, it’s unfair to say that frequentist methods are bad because the maximum-likelihood estimator is bad. Frequentists know that the maximum-likelihood estimator doesn’t always do what they want, and they don’t use it in cases like this. In this case, a frequentist would choose a different estimator. The problem is in the word “choose”: the decision of what estimator to use can seem mysterious and arbitrary, at least to me. Sometimes there’s a clear best choice, and sometimes there isn’t. Bayesian methods, on the other hand, don’t require a choice of estimator. You use the information at your disposal to work out the probability distribution for whatever you’re interested in, and that’s the answer.

(Yes, “the information at your disposal” includes the dreaded prior. Frequentists point that out as if it were a crushing argument against the Bayesian approach, but it’s actually a feature, not a bug.)

Ranking the election forecasters

In Slate, Jordan Ellenberg asks How will we know if Nate Silver was right?

It’s a good question. If you have a bunch of models that make probabilistic predictions, is there any way to tell which one was right? Every model will predict some probability for the outcome that actually occurs. As long as that probability is nonzero, how can you say the model was wrong?

Essentially every question that a scientist asks is of this form. Because measurements always have some uncertainty, you can virtually never say that the probability of any given outcome is exactly zero, so how can you ever rule anything out?

The answer, of course, is statistics. You don’t rule things out with absolute certainty, but you rule them out with high confidence if they fit the data badly. And “fit the data badly” essentially means “have a low probability of occurring.”

So Ellenberg proposes that all the modelers publish detailed probabilities for all possible outcomes (specifically, all possible combinations of victories by the candidates in each state). Once we know the outcome, the one who assigned the highest probability to it is the best.

In statistics terminology, what he’s proposing is simply ranking the models by likelihood. That is indeed a standard thing to do, and if I had to come up with something, it’s what I’d suggest too. In this case, though, it’s probably not going to give a definitive answer, simply because all the forecasters will probably have comparable probabilities for the one outcome that will occur.

All of those probabilities will be low, because there are lots of possible outcomes, and any given one is unlikely. That doesn’t matter. What matters is whether they’re all similar. If 538 predicts a probability of 0.8%, and Princeton Election Consortium predicts 0.0000005%, then I agree that 538 wins. But what if the two predictions are 0.8% and 0.5%? The larger number still wins, but how strong is that evidence?

The way to answer that question is to use a technique called reasoning (or as some old-fashioned people insist on calling it, Bayesian reasoning). Bayes’s theorem gives a way of turning those likelihoods into posterior probabilities, which are the probabilities that any given model is correct, given the evidence. The answer depends on the prior probabilities — how likely you thought each model was before the data came in. If, as I suspect, the likelihoods come out comparable to each other, then the final outcome depends strongly on the prior probabilities. That is, the new information won’t change your mind all that much.
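
To make that concrete, here’s the update as a sketch with made-up numbers: with equal priors, likelihoods of 0.8% and 0.5% only move you to roughly 62/38 posterior odds, which is far from decisive.

```python
# Hypothetical likelihoods: the probability each model assigned
# to the one outcome that actually occurred.
likelihoods = {"538": 0.008, "PEC": 0.005}
priors = {"538": 0.5, "PEC": 0.5}  # suppose we had no prior preference

# Bayes's theorem: posterior is proportional to prior times likelihood.
posterior = {m: priors[m] * likelihoods[m] for m in likelihoods}
total = sum(posterior.values())
posterior = {m: p / total for m, p in posterior.items()}

print(posterior)  # {'538': ~0.615, 'PEC': ~0.385}: weak evidence either way
```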

If things turn out that way, then Ellenberg’s proposal won’t answer the question, but that’s because there won’t be any good way to answer the question. The Bayesian analysis is the correct one, and if it says that the posterior distribution depends strongly on the prior, then that means that the available data don’t tell you who’s better, and there’s nothing you can do about it.

 

Gerrymandering without gerrymanderers

The Washington Post has a nice graphic explaining how gerrymandering works.

[Washington Post graphic illustrating how gerrymandering works.]

If the people drawing the district lines prefer one party, they can draw the lines as on the right: they pack as many opponents as possible into a few districts, so that those opponents win those districts in huge landslides. Then they spread out their people to win the other districts by slight majorities.

The solution people generally propose is to put the district-drawing power into the hands of non-partisan people or groups. While I think this is certainly a good idea, it’s worth mentioning that you can get “unfair” results even without deliberate gerrymandering. In particular, if members of the political parties happen to cluster in different ways, then even a nonpartisan system of drawing districts can lead to one party being overrepresented.

I don’t propose to dig into the details of this as it affects US politics. Briefly, the US House of Representatives is more Republican than it “should” be: the fraction of representatives who are Republican is larger than the nationwide fraction of votes for Republicans in House races. Similar statements are true for various states’ US House delegations and for state legislatures, sometimes favoring the Democrats. No doubt you can find a lot more about this if you dig around a bit. Instead, I just want to illustrate with a made-up example how you can get gerrymandering-like results even if no one is deliberately gerrymandering.

To forestall any misunderstanding, let me be 100% clear: I am not saying that there is no deliberate gerrymandering in US politics. I am saying that it need not be the whole explanation, and that even if we implemented a nonpartisan redistricting system, some disparity could remain.

I should also add that nothing I’m going to say is original to me. On the contrary, people who think about this stuff have known about it forever. But a lot of my politically-aware friends, who know all about deliberate gerrymandering, haven’t thought about the ways that “automatic gerrymandering” can happen.

Imagine a country called Squareland, whose population is distributed evenly throughout a square area. Half the population are Whigs, and half are Mugwumps. As it happens, the two political parties are unevenly distributed, with more Whigs in some areas and more Mugwumps in others:
[Map: Squareland, shaded by local Whig share; bluer regions have more Whigs.]
The bluer a region is, the more Whigs live there. But remember, each region has the same total number of people, and the total number of Whigs and Mugwumps nationwide are the same.

You ask a nonpartisan group to divide this region up into 400 Congressional districts. They’re not trying to help one party or another, so they decide to go for simple, compact districts:

[Map: Squareland divided into 400 equal square districts.]

Each district has an election and comes out either Whig or Mugwump, depending on who has a majority in the local population. In this particular example, you get 197 Mugwumps and 203 Whigs. Pretty good.

Now suppose that the people are distributed differently:

[Map: a second population distribution, with the Whig majorities concentrated in compact, intensely blue regions.]

Note that the blue regions are much more compact. There’s lots of dull red area, which is majority-Mugwump, but no bright red extreme Mugwump majorities. The blue regions, on the other hand, are extreme. The result is that there’s a lot more red area than blue, even though there are equal numbers of red folks and blue folks.

Use the same districts:

[Map: the second distribution with the same 400 districts overlaid.]

This time, the Mugwumps win 245 seats, and the Whigs get only 155. Nobody deliberately drew district lines to disenfranchise the Whigs, but it happened anyway. And the reason it happened is very similar to what you’d get with deliberate gerrymandering: the Whigs got concentrated into a small number of districts with big majorities.
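
If you want to see the effect without drawing any maps, here’s a toy sketch in code (the numbers are invented, not taken from the figures above): 400 equal districts and an exact 50/50 nationwide vote in both scenarios, but in the second one the Whigs are packed into 40 overwhelming strongholds.

```python
import numpy as np

rng = np.random.default_rng(2)
districts = 400

# Scenario 1: Whig share varies gently around 50% from district to district.
even = 0.5 + rng.normal(0, 0.05, size=districts)
even -= even.mean() - 0.5      # force an exact 50/50 nationwide vote

# Scenario 2: same 50/50 vote, but Whigs packed into 40 extreme districts.
packed = np.full(districts, 0.45)
packed[:40] = 0.95             # check: (40*0.95 + 360*0.45)/400 = 0.5

print((even > 0.5).sum())      # ~200 Whig seats
print((packed > 0.5).sum())    # 40 Whig seats, from the very same vote totals
```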

Once again, I’m not saying that this is what’s happened in the US House of Representatives. On the contrary, the evidence of deliberate gerrymandering is very strong, and given the incentives, it would be quite surprising if it did not occur. But if members of one party tend to “clump” more strongly than members of the other party, then this sort of effect can certainly occur, and could form a part of the discrepancy we see.

 

UR is a place “Where Great Research Meets Great Teaching”

According to the Wall Street Journal (paywall, unfortunately).

[Screenshot: the Wall Street Journal list.]

Most of the various college rankings strike me as somewhat silly, but I’ll make an exception for this one, because it says something nice about my home institution, which coincides very well with my own impression. Lots of places claim to be good at both teaching and research, but my experience at UR is that we really mean it. The faculty are excited both about their scholarship and about working closely with undergraduates, and the university’s reward structure conveys that both are valued.

I always say that student-faculty research is the best thing about this place.  I’m glad the WSJ agrees.

In case you’re wondering, the data used to generate this list consisted of two pieces: the number of research papers per faculty member, and a student survey asking about faculty accessibility and opportunities for collaborative learning. A school had to do well on both to make the list, although I don’t think they gave the exact recipe for combining them.

Eclipse!

In case you don’t know, there’s a total eclipse coming to North America next year, on August 21 to be precise. You can find lots of information about it by Googling, naturally.

You’ll only be able to see a total eclipse along a certain path through the US. I wanted to know where on the path I should go. In particular, where is the weather most likely to be clear on that day?

I asked my brother, who’s a climate scientist, where to go to find this out. He pointed me to the cloud fraction data from NASA’s MODIS TERRA/AQUA sensor, which contains whole-earth cloud cover maps. Here’s the answer:

[Map: average cloud cover across the US around August 21, with the center of the eclipse path drawn in red.]

The red curve is the middle of the path of totality of the eclipse. The eclipse will be total over a band of maybe 30-50 miles on either side. The colors show average cloud cover. Black is good and white is bad.

(In case you’re dying to know the details, here they are. I grabbed the data for the past 10 years for the time around August 21. The most sensible data product to use seems to be the 8-day averages. August 21 is right on the edge of one of those 8-day windows, so I took the two eight-day windows on either side, and averaged together those 20 maps.)
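
For what it’s worth, the averaging step is only a few lines of code. Here’s a rough sketch under loose assumptions: load_map is a hypothetical stand-in for reading one 8-day cloud-fraction map (real code would use a reader like netCDF4 or pyhdf, and the window labels below are made up):

```python
import numpy as np

rng = np.random.default_rng(3)

def load_map(year, window):
    # Hypothetical stand-in: real code would read a MODIS 8-day
    # cloud-fraction product here and return it as a 2-D array.
    return rng.uniform(0.0, 1.0, size=(180, 360))

years = range(2006, 2016)                  # ten years of data
windows = ("before_aug21", "after_aug21")  # the two 8-day windows straddling Aug 21

maps = [load_map(y, w) for y in years for w in windows]
mean_cloud = np.mean(maps, axis=0)         # pixel-by-pixel average of all 20 maps
```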

Here’s another way to look at it: a graph of average cloud cover along the path of totality, across the US. The red dashed lines show state boundaries. The path clips little corners of Kansas, Illinois, and North Carolina, which I didn’t bother to mark.

 

[Graph: average cloud cover along the path of totality, with state boundaries marked by red dashed lines.]

After making my map, I found this one, which reaches similar conclusions.

 

[Map: another cloud-cover analysis of the eclipse path.]

Hotel rooms along the path of totality are already hard to find. Make your plans right away!

We still don’t know if there have been alien civilizations

Pedants (a group in which I have occasionally been included) often complain that nobody uses the phrase “beg the question” correctly anymore. It’s supposed to refer to the logical fallacy of circular reasoning — that is, of assuming the very conclusion for which you are arguing. Because the phrase is so often used to mean other things, you can’t use it in this traditional sense anymore, at least not if you want to be understood.

I’ve never found this to be a big problem, because the traditional meaning isn’t something I want to talk about very often. Until today.

The article headlined Yes, There Have Been Aliens in today’s New York Times is the purest example of question-begging I’ve seen in a long time. The central claim is that “we now have enough information to conclude that they [alien civilizations] almost certainly existed at some point in cosmic history.”

The authors use a stripped-down version of the Drake equation, which is the classic way to talk about the number of alien civilizations out there. The Drake equation gives the expected number of alien civilizations in our Galaxy in terms of a bunch of probabilities and related numbers, such as the fraction of all stars that have planets and the fraction of planets on which life evolves. Of course, we don’t know some of these numbers, particularly that last one, so we can’t draw robust conclusions.

The authors estimate that “unless the probability for evolving a civilization on a habitable-zone planet is less than one in 10 billion trillion, then we are not the first” such civilization. Based on this number, they conclude that “the degree of pessimism required to doubt the existence, at some point in time, of an advanced extraterrestrial civilization borders on the irrational.”
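
To unpack where a number like that comes from, here’s the back-of-the-envelope arithmetic (my reconstruction, not the authors’ calculation; the planet count is a round-number assumption):

```python
habitable_planets = 2e22  # rough assumed count for the observable universe
p_threshold = 1e-22       # "one in 10 billion trillion", from the quoted claim

# At the threshold probability, the expected number of civilizations
# over cosmic history is of order one.
print(habitable_planets * p_threshold)  # 2.0
```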

Nonsense. It’s not the least bit irrational to believe that this probability is so low. We have precisely no evidence as to the value of the probability in question. Any conclusion you draw from this value is based solely on your prior (evidence-free) estimate of the probability.

I mean the phrase “evidence-free” in a precise Bayesian sense: All nonzero values of that probability are equally consistent with the world we observe around us, so no observation causes us to prefer any value over another.

They’d revoke my Bayesian card if I didn’t point out that there’s no problem with the fact that your conclusions depend on your prior probabilities. All probabilities do (with the possible exception of statements about pure mathematics and logic). But it’s absurd to say that it’s “irrational” to believe that the probability is below a certain value, when your assessment of that probability is determined entirely by your prior beliefs, with no contribution from actual evidence.

This sort of argument is occasionally known as “proof by Goldberger’s method”:

The proof is by the method of reductio ad absurdum. Suppose the result is false. Why, that’s absurd! QED.

 

Electability update

As I mentioned before, a fair amount of conversation about US presidential politics, especially at this time in the election cycle, is speculation about the “electability” of various candidates. If your views are aligned with one party or the other, so that you care more about which party wins than which individual wins, it’s natural to throw your support to the candidate you think is most electable. The problem is that you may not be very good at assessing electability.

I suggested that electability should be thought of as a conditional probability: given that candidate X secures his/her party’s nomination, how likely is the candidate to win the general election? The odds offered by the betting markets give assessments of the probabilities of nomination and of victory in the general election. By Bayes’s theorem, the ratio of the two is the electability.

Here’s an updated version of the table from my last post, giving the candidates’ probabilities:

Party        Candidate   Nomination Probability (%)   Election Probability (%)   Electability (%)
Democrat     Clinton     70.5                         44                         63
Democrat     Sanders     28.5                         19.5                       68
Republican   Bush        8.5                          3.5                        41
Republican   Cruz        13.5                         5.4                        40
Republican   Rubio       32.5                         15                         46
Republican   Trump       47.5                         29.5                       62

As before, these are numbers from PredictIt, which is a betting market where you can go wager real money.
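
The electability column is just the ratio of the last two columns. Here’s the computation as a quick sketch, using the PredictIt numbers above (in percent); it reproduces the table’s electability figures give or take rounding:

```python
# (nomination probability, election probability), in percent, from the table above
candidates = {
    "Clinton": (70.5, 44.0),
    "Sanders": (28.5, 19.5),
    "Bush":    (8.5, 3.5),
    "Cruz":    (13.5, 5.4),
    "Rubio":   (32.5, 15.0),
    "Trump":   (47.5, 29.5),
}

for name, (p_nom, p_win) in candidates.items():
    # Winning the general election implies having been nominated, so
    # P(win | nominated) = P(win) / P(nominated).
    print(f"{name}: {100 * p_win / p_nom:.0f}%")
```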

If you use numbers from PredictWise, they look quite different:

Party        Candidate   Nomination Probability (%)   Election Probability (%)   Electability (%)
Democrat     Clinton     84                           53                         63
Democrat     Sanders     16                           8                          50
Republican   Bush        7                            3                          43
Republican   Cruz        8                            2                          25
Republican   Rubio       32                           13                         41
Republican   Trump       51                           18                         35

PredictWise aggregates information from various sources, including multiple betting markets as well as polling data. I don’t know which one is better. I do know that if you think PredictIt is wrong about any of these numbers, then you can go there and place a bet. Since PredictWise is an aggregate, there’s no correspondingly obvious way to make money off of it. If you do think the PredictWise numbers are way off, then it’s probably worth looking around at the various betting markets to see if there are bets you should be making: since PredictWise got its values in large part from these markets, there may be.

To me, the most interesting numbers are Trump’s. Many of my lefty friends are salivating over the prospect of his getting the nomination, because they think he’s unelectable. PredictIt disagrees, but PredictWise agrees. I don’t know what to make of that, but it remains true that, if you’re confident Trump is unelectable, you have a chance to make some money over on PredictIt.

My old friend John Stalker, who is an extremely smart guy, made a comment on my previous post that’s worth reading. He raises one technical issue and one broader issue.

The technical point is that whether you can make money off of these bets depends on the bid-ask spread (that is, the difference in prices to buy or sell contracts). That’s quite right.  I would add that you should also consider the opportunity cost: if you make these bets, you’re tying up your money until August (for bets on the nomination) or November (for bets on the general election). In deciding whether a bet is worthwhile, you should compare it to whatever investment you would otherwise have made with that money.

John’s broader claim is that “electability” as that term is generally understood in this context means something different from the conditional probabilities I’m calculating:

I suspect that by the term “electability” most people mean the candidate’s chances of success in the general election assuming voters’ current perceptions of them remain unchanged, rather than their chances in a world where those views have changed enough for them to have won the primary.

You should read the rest yourself.

I think that I disagree, at least for the purposes that I’m primarily interested in. As I mentioned, I’m thinking about my friends who hope that Trump gets the nomination because it’ll sweep a Democrat into the White House. I think that they mean (or at least, they should mean) precisely the conditional probability I’ve calculated. I think that they’re claiming that a world in which Trump gets the nomination (with whatever other events or changes go along with that) is a world in which the Democrat wins the Presidency. That’s what my conditional probabilities are about.

But as I said, John’s an extremely smart guy, so maybe he’s right and I’m wrong.