Electability update

As I mentioned before, a fair amount of conversation about US presidential politics, especially at this time in the election cycle, is speculation about the “electability” of various candidates. If your views are aligned with one party or the other, so that you care more about which party wins than which individual wins, it’s natural to throw your support to the candidate you think is most electable. The problem is that you may not be very good at assessing electability.

I suggested that electability should be thought of as a conditional probability: given that candidate X secures his/her party’s nomination, how likely is the candidate to win the general election? The odds offered by the betting markets give assessments of the probabilities of nomination and of victory in the general election. By Bayes’s theorem, the ratio of the two is the electability.
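To make the reasoning explicit (the notation here is mine): write N for “candidate X wins the nomination” and W for “candidate X wins the general election.” A candidate can only win in November after being nominated, so P(N | W) = 1, and Bayes’s theorem gives

\[
P(W \mid N) \;=\; \frac{P(N \mid W)\,P(W)}{P(N)} \;=\; \frac{P(W)}{P(N)}.
\]

That is, electability is the general-election probability divided by the nomination probability, which is how the tables below are computed.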

Here’s an updated version of the table from my last post, giving the candidates’ probabilities:

Party      | Candidate | Nomination Probability (%) | Election Probability (%) | Electability (%)
Democrat   | Clinton   | 70.5                       | 44                       | 63
Democrat   | Sanders   | 28.5                       | 19.5                     | 68
Republican | Bush      | 8.5                        | 3.5                      | 41
Republican | Cruz      | 13.5                       | 5.4                      | 40
Republican | Rubio     | 32.5                       | 15                       | 46
Republican | Trump     | 47.5                       | 29.5                     | 62

As before, these are numbers from PredictIt, which is a betting market where you can go wager real money.
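As a quick sanity check, here’s a short Python sketch that recomputes the electability column from the PredictIt numbers in the table above (the small mismatch for Clinton, closer to 62 than 63, is presumably just rounding in the quoted prices):

```python
# Recompute the electability column as P(wins general) / P(wins nomination),
# using the PredictIt probabilities quoted above (all in percent).
predictit = {
    "Clinton": (70.5, 44.0),
    "Sanders": (28.5, 19.5),
    "Bush":    (8.5, 3.5),
    "Cruz":    (13.5, 5.4),
    "Rubio":   (32.5, 15.0),
    "Trump":   (47.5, 29.5),
}

for candidate, (nomination, election) in predictit.items():
    electability = 100 * election / nomination
    print(f"{candidate:8s} {electability:5.1f}%")
```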

If you use numbers from PredictWise, they look quite different:

Party      | Candidate | Nomination Probability (%) | Election Probability (%) | Electability (%)
Democrat   | Clinton   | 84                         | 53                       | 63
Democrat   | Sanders   | 16                         | 8                        | 50
Republican | Bush      | 7                          | 3                        | 43
Republican | Cruz      | 8                          | 2                        | 25
Republican | Rubio     | 32                         | 13                       | 41
Republican | Trump     | 51                         | 18                       | 35

PredictWise aggregates information from various sources, including multiple betting markets as well as polling data. I don’t know which one is better. I do know that if you think PredictIt is wrong about any of these numbers, then you can go there and place a bet. Since PredictWise is an aggregate, there’s no correspondingly obvious way to make money off of it. If you do think the PredictWise numbers are way off, then it’s probably worth looking around at the various betting markets to see if there are bets you should be making: since PredictWise got its values in large part from these markets, there may be.

To me, the most interesting numbers are Trump’s. Many of my lefty friends are salivating over the prospect of his getting the nomination, because they think he’s unelectable. PredictIt disagrees, but PredictWise agrees. I don’t know what to make of that, but it remains true that, if you’re confident Trump is unelectable, you have a chance to make some money over on PredictIt.

My old friend John Stalker, who is an extremely smart guy, made a comment on my previous post that’s worth reading. He raises one technical issue and one broader issue.

The technical point is that whether you can make money off of these bets depends on the bid-ask spread (that is, the difference in prices to buy or sell contracts). That’s quite right.  I would add that you should also consider the opportunity cost: if you make these bets, you’re tying up your money until August (for bets on the nomination) or November (for bets on the general election). In deciding whether a bet is worthwhile, you should compare it to whatever investment you would otherwise have made with that money.
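To put rough numbers on both points, here’s a back-of-the-envelope sketch; every figure in it (the contract prices, the probability, the months, the alternative return) is made up for illustration rather than taken from PredictIt:

```python
# Back-of-the-envelope check on whether a prediction-market bet beats an
# ordinary investment.  Every number here is hypothetical.
ask_price = 0.42       # dollars to buy one contract that pays $1 if it settles "Yes"
bid_price = 0.38       # dollars you'd get selling the same contract back immediately
my_probability = 0.55  # your own estimate that the contract pays off

months_tied_up = 9                # roughly now until the general election
alternative_annual_return = 0.05  # assumed return on whatever else you'd do with the money

expected_return = (my_probability * 1.00 - ask_price) / ask_price
opportunity_cost = alternative_annual_return * months_tied_up / 12

print(f"bid-ask spread:              ${ask_price - bid_price:.2f} per contract")
print(f"expected return on the bet:  {expected_return:.1%}")
print(f"opportunity cost of capital: {opportunity_cost:.1%}")
# The bet only makes sense if the expected return comfortably beats the
# opportunity cost, after allowing for the spread and for the risk.
```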

John’s broader claim is that “electability” as that term is generally understood in this context means something different from the conditional probabilities I’m calculating:

I suspect that by the term “electability” most people mean the candidate’s chances of success in the general election assuming voters’ current perceptions of them remain unchanged, rather than their chances in a world where those views have changed enough for them to have won the primary.

You should read the rest yourself.

I think that I disagree, at least for the purposes that I’m primarily interested in. As I mentioned, I’m thinking about my friends who hope that Trump gets the nomination because it’ll sweep a Democrat into the White House. I think that they mean (or at least, they should mean) precisely the conditional probability I’ve calculated. I think that they’re claiming that a world in which Trump gets the nomination (with whatever other events or changes go along with that) is a world in which the Democrat wins the Presidency. That’s what my conditional probabilities are about.

But as I said, John’s an extremely smart guy, so maybe he’s right and I’m wrong.

Horgan on Bayes

John Horgan has a piece at Scientific American’s site entitled “Bayes’s Theorem: What’s the Big Deal?” The article’s conceit is that, after hearing people touting Bayesian reasoning to him for many years, he finally decided to learn what it was all about and explain it to his readers.

His explanation is not bad at first. He gets a lot of it from this piece by Eliezer Yudkowsky, which is very good but very long. (It does have jokes sprinkled through it, so keep reading!) Both Yudkowsky and Horgan emphasize that Bayes’s theorem is actually rather obvious. Horgan:

This example [of the probability of false positives in medical tests] suggests that the Bayesians are right: the world would indeed be a better place if more people—or at least more health-care consumers and providers–adopted Bayesian reasoning.

On the other hand, Bayes’ theorem is just a codification of common sense. As Yudkowsky writes toward the end of his tutorial: “By this point, Bayes’ theorem may seem blatantly obvious or even tautological, rather than exciting and new. If so, this introduction has entirely succeeded in its purpose.”

That’s right! Bayesian reasoning is simply the (unique) correct way to reason quantitatively about probabilities, in situations where the experimental evidence doesn’t let you draw conclusions with mathematical certainty (i.e., pretty much all situations).
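For reference, here is the theorem itself, with B standing for a belief (matching Horgan’s P(B) in the excerpt below) and E, a label I’m adding, for the evidence:

\[
P(B \mid E) \;=\; \frac{P(E \mid B)\,P(B)}{P(E)}.
\]

The prior P(B) is what you believed before seeing the evidence; the left-hand side is what you should believe afterward.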

Unfortunately, Horgan eventually goes off the rails:

The potential for Bayes abuse begins with P(B), your initial estimate of the probability of your belief, often called the “prior.” In the cancer-test example above, we were given a nice, precise prior of one percent, or .01, for the prevalence of cancer. In the real world, experts disagree over how to diagnose and count cancers. Your prior will often consist of a range of probabilities rather than a single number.

In many cases, estimating the prior is just guesswork, allowing subjective factors to creep into your calculations. You might be guessing the probability of something that–unlike cancer—does not even exist, such as strings, multiverses, inflation or God. You might then cite dubious evidence to support your dubious belief. In this way, Bayes’ theorem can promote pseudoscience and superstition as well as reason.

The problem he’s talking about is, to use a cliche, not a bug but a feature. When the evidence doesn’t prove, with mathematical certainty, whether a statement is true or false (i.e., pretty much always), your conclusions must depend on your subjective assessment of the prior probability. To expect the evidence to do more than that is to expect the impossible.

In the example Horgan is using, suppose that a cancer test is given with known rates of false positives and false negatives. The patient tests positive. In order to interpret that result and decide how likely the patient is to have cancer, you need a prior probability. If you don’t have one based on data from prior studies, you have to use a subjective one.
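Here is that calculation spelled out. The 1% prior is the one Horgan’s example uses; the test’s sensitivity and false-positive rate below are values I’ve assumed for illustration, not numbers taken from his article:

```python
# Posterior probability of cancer given a positive test, via Bayes's theorem.
# The 1% prior comes from the example; the test characteristics are assumed.
prior = 0.01           # P(cancer)
sensitivity = 0.90     # P(positive | cancer) -- assumed for illustration
false_positive = 0.05  # P(positive | no cancer) -- assumed for illustration

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(cancer | positive test) = {posterior:.1%}")
# With these numbers the answer is about 15%: despite the positive result,
# the patient is still probably cancer-free, because the prior was so low.
```

Change the prior and the answer changes with it; that dependence is the “feature” mentioned above.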

The doctor and patient in such a situation will, inevitably, decide what to do next based on some combination of the test result and their subjective prior probabilities. The only choice they have is whether to do it unconsciously or consciously.

The second paragraph quoted above is simply nonsense. If you apply Bayesian reasoning to any of those things that may or may not exist, you will reach conclusions that combine your prior belief with the evidence. I have no idea in what sense doing this “promote[s] pseudoscience.” More importantly, I have no idea what alternative Horgan would have us choose.

Here’s the worst part of the piece:

Embedded in Bayes’ theorem is a moral message: If you aren’t scrupulous in seeking alternative explanations for your evidence, the evidence will just confirm what you already believe. Scientists often fail to heed this dictum, which helps explain why so many scientific claims turn out to be erroneous. Bayesians claim that their methods can help scientists overcome confirmation bias and produce more reliable results, but I have my doubts.

Horgan doesn’t cite any examples of erroneous claims that can be blamed on Bayesian reasoning. In fact, this statement seems to me to be nearly the exact opposite of the truth.

There’s been a lot of angst in the past few years about non-replicable scientific findings. One of the main contributors to this problem, as far as I can tell, is that scientists are not using Bayesian reasoning: they are interpreting p-values as if they told us whether various hypotheses are true or not, without folding in any prior information.
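To illustrate, with made-up but not unreasonable numbers: suppose only 10% of the hypotheses a field tests are actually true, the studies have 80% power, and anything with p < 0.05 gets reported as a finding. A quick Bayesian look at those assumed inputs shows why a lot of findings would fail to replicate:

```python
# How often is a "statistically significant" result actually true?
# All three inputs are assumed, illustrative numbers.
prior_true = 0.10  # fraction of tested hypotheses that are really true (assumed)
power = 0.80       # P(p < 0.05 | hypothesis true) (assumed)
alpha = 0.05       # P(p < 0.05 | hypothesis false)

p_significant = power * prior_true + alpha * (1 - prior_true)
p_true_given_significant = power * prior_true / p_significant

print(f"P(hypothesis true | p < 0.05) = {p_true_given_significant:.0%}")
# About 64% with these inputs: roughly a third of "significant" findings are
# false even with no p-hacking at all, simply because the prior was ignored.
```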