As I keep mentioning ad nauseam, I think probability’s really important in understanding all sorts of things about science. Here’s a really basic question that’s easy to ask but maybe not so easy to answer: What do probabilities mean anyway?
Not surprisingly, this is a question that philosophers have taken a lot of interest in. Here’s a nice article reviewing a number of interpretations of probability. Ultimately, I think the most important distinction to draw is between objective and subjective interpretations: Are probabilities statements about the world, or are they statements about our knowledge and beliefs about the world?
The poster child for the objective interpretation is frequentism, and the main buzzword associated with the subjective interpretation is Bayesianism. If you’re a frequentist, you think that probabilities are descriptions of the frequency with which a certain outcome will occur in many repeated trials of an experiment. If you want to talk about things that don’t have many repeated trials (e.g., a weatherman who wants to say something about the chance of rain tomorrow, or a bookie who wants to set odds for the next Super Bowl), then you have to be willing to imagine many hypothetical repeated trials.
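To make the frequentist picture concrete, here is a minimal simulation sketch: we take an illustrative probability of 0.3 (my choice, not a number from the post) and watch the empirical frequency of the outcome settle toward it as the number of repeated trials grows.

```python
import random

# A frequentist reading of "P(heads) = 0.3": in many repeated flips of
# the same coin, the fraction of heads settles near 0.3. The value 0.3
# and the trial counts below are illustrative, not from the post.

def empirical_frequency(p: float, n_trials: int, seed: int = 0) -> float:
    """Fraction of successes in n_trials independent Bernoulli(p) trials."""
    rng = random.Random(seed)
    successes = sum(rng.random() < p for _ in range(n_trials))
    return successes / n_trials

for n in (100, 10_000, 1_000_000):
    print(n, empirical_frequency(0.3, n))
```

The catch, of course, is that for a one-off event like next year’s World Series there is no actual loop to run; the frequentist has to imagine one.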
The frequentist point of view strikes me as so utterly bizarre that I can’t understand why anyone takes it seriously. Suppose that I say that the Red Sox have a 20% chance of winning the World Series this year. Can anyone really believe that I’m making a statement about a large (in principle infinite) number of imaginary worlds, in some of which the Red Sox win and in others of which they lose? And that if Daisuke Matsuzaka breaks his arm tomorrow, something happens to a bunch of those hypothetical worlds, changing the relative numbers of winning and losing worlds? These claims sound utterly crazy to me, but as far as I can tell, frequentists really believe that that’s what probabilities are all about.
It seems completely obvious to me that, when I say that the Red Sox have a 20% chance of winning the World Series, I’m describing my state of mind (my beliefs, knowledge, etc.), not something about a whole bunch of hypothetical worlds. That makes me a hardcore Bayesian.
So here’s what I’m wondering. Lots of smart people seem to believe in a frequentist idea of probability. Yet to me the whole idea seems not just wrong, but self-evidently absurd. This leads me to think maybe I’m missing something. (Then again, John Baez agrees with me on this, which reassures me.) So here’s a question for anyone who might have read this far: Do you know of anything I can read that would help me understand why frequentism is an idea that even deserves a serious hearing?
9 thoughts on “What is probability?”
Andrew Jaffe argued at the PhyStat’05 conference that cosmologists tend to be Bayesian, particle physicists tend to be frequentists, and astronomers tend to be somewhere in between. But personally I wouldn’t play up the Bayes-frequentist polemic too much, and I think that is reflected in the change in tone of the monographs we see these days (cf. Jaynes’s “Probability Theory” vs. MacKay’s “Information Theory, Inference, and Learning Algorithms”). In Ch. 4 of Gelman et al.’s “Bayesian Data Analysis”, they offer some comments on how frequentist statistics can help in understanding the properties of Bayesian estimators under repeated measurements.
When contrasting Bayesians with frequentists, I think it’s important to distinguish between two entirely different topics:
1. Interpretation of probability
2. Use of statistical techniques
I’m a diehard Bayesian on item #1, but I’m fervently indifferent about #2. I think that frequentism is incoherent and silly as a way of understanding philosophically what probabilities mean (#1), but that doesn’t mean that frequentist techniques in statistics are invalid (#2). On the contrary, a frequentist confidence interval is a perfectly valid answer to a particular question, and a Bayesian credible region is a perfectly valid answer to a different question.
I cowrote a paper once analyzing a particular data set from both Bayesian and frequentist points of view, to highlight the similarities and differences (http://arxiv.org/abs/astro-ph/0111010).
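The point that the two kinds of interval answer different questions can be sketched numerically. This is a minimal illustration with made-up data (7 successes in 20 trials) and my own choices of method: a frequentist Wald interval, and a Bayesian credible interval under a flat Beta(1, 1) prior with quantiles found on a numeric grid; none of these specifics come from the paper linked above.

```python
import math

# Illustrative data: k successes in n trials (numbers are made up).
K, N = 7, 20

def wald_confidence_interval(k: int, n: int, z: float = 1.96):
    """Frequentist 95% Wald interval: a procedure that, over many
    repeated samples, covers the true proportion ~95% of the time."""
    p_hat = k / n
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - half, p_hat + half)

def beta_credible_interval(k: int, n: int, level: float = 0.95,
                           grid: int = 200_000):
    """Bayesian 95% credible interval under a uniform prior: given
    *this* data, the parameter lies here with 95% posterior belief.
    Posterior is Beta(k+1, n-k+1); quantiles found on a grid."""
    a, b = k + 1, n - k + 1
    xs = [(i + 0.5) / grid for i in range(grid)]
    # Work with the unnormalized log-density, then normalize numerically.
    logpdf = [(a - 1) * math.log(x) + (b - 1) * math.log(1 - x) for x in xs]
    m = max(logpdf)
    pdf = [math.exp(v - m) for v in logpdf]
    total = sum(pdf)
    tail = (1 - level) / 2
    cum, lo, hi = 0.0, None, None
    for x, w in zip(xs, pdf):
        cum += w / total
        if lo is None and cum >= tail:
            lo = x
        if hi is None and cum >= 1 - tail:
            hi = x
            break
    return (lo, hi)

print("confidence interval:", wald_confidence_interval(K, N))
print("credible interval:  ", beta_credible_interval(K, N))
```

The two intervals come out numerically similar here, but they mean different things: the first is a statement about the long-run behavior of a procedure, the second a statement about belief given this one data set.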
Sorry for wandering in on this thread at such a late stage, I just found it because I was wondering about a problem which I’ve been scratching my head about.
The question is, in quantum mechanics, how does one describe – particularly to an undergraduate! – the physical meaning of a relation such as
*without* invoking any sort of frequentist scenario? (I’m sure this must have been done to death in the literature; if you can point me in the right direction, please do!)
That should have read
⟨φ|ψ⟩ = 0.5
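For readers following along, the measurable quantity in the standard Born rule is |⟨φ|ψ⟩|², the squared magnitude of the overlap. Here is a minimal numeric sketch with illustrative two-component states of my own choosing (not the states in the comment above), chosen so the overlap is 1/√2 and the outcome probability is 0.5; the simulated measurements at the end show the frequentist face of that same number.

```python
import random

# Born-rule sketch with illustrative states: for normalized states,
# the probability of an outcome is |<phi|psi>|^2.

def inner(phi, psi):
    """<phi|psi> for complex component lists (the bra is conjugated)."""
    return sum(a.conjugate() * b for a, b in zip(phi, psi))

def born_probability(phi, psi) -> float:
    return abs(inner(phi, psi)) ** 2

# Example: psi an equal superposition of two basis states, phi the
# first basis state, so <phi|psi> = 1/sqrt(2) and |<phi|psi>|^2 = 0.5.
psi = [complex(2 ** -0.5), complex(2 ** -0.5)]
phi = [complex(1), complex(0)]
p = born_probability(phi, psi)  # 0.5, up to float rounding

# The frequentist face of the same number: simulate many measurements
# and count how often the phi outcome occurs.
rng = random.Random(0)
n = 100_000
freq = sum(rng.random() < p for _ in range(n)) / n
print(p, freq)
```

Whether that simulated long-run frequency is what the amplitude *means*, or merely a consequence of what it means, is exactly the interpretive question being debated in this thread.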
That’s a good question. As you’d expect, a lot has been written on the meaning of probability in quantum mechanics. I read up on it a bit a long time ago, but it’s all pretty hazy in my mind right now. If I were to start looking into this again (which I certainly can’t do this week), I’d probably start by reminding myself what John Baez has to say on the subject at
but I don’t remember if it really gets into the question you’re asking.
Thanks very much Ted, I’ll chase that up. Apologies if I came across as chasing you for an answer, it is indeed a very busy time of year!
Want the headache of all time? I apologize for this in advance. I’ve been defending Bayesian concepts of probability for the past two years without knowing it. In the past month a group of dedicated and opinionated Roulette players were dragged kicking and screaming to the point of acknowledging the concept of Bayesian type probability. We had our little war with the mathboyz, the math Nazis. They lost once the two concepts were put up side by side, and in a language that we understood. That language was the language of a Roulette discussion forum.
So here is the headache: Remember Edward Thorp and ‘Beat the Dealer’? He changed things by showing an advantage. Here is another one: you can beat randomness. Randomness in Roulette has characteristics. You discover these characteristics by charting their current state in the form of parallel results for each spin. They relate to the concept of now, and you can guess the now state. You should find that the characteristics of randomness include dominance, trends, patterns, and global effects. It’s possible to make an educated guess at these characteristics and to establish a line of effectiveness. Effectiveness exists in three states: it works very well, it flat-lines as temporarily indecisive, or it works very badly. ‘Works very badly’ can be recognized, so that the educated guess may be flipped to the opposite bet selection, turning around the effectiveness state. Playing experience is the process that turns this into an advantage.
Comments are closed.