What is a solid?

Potter Stewart would say “I know it when I see it,” but would he? The line between a solid and an extremely viscous liquid isn’t necessarily clear.

People sometimes say that glass is a liquid, citing as evidence the fact that the windows in medieval churches are thicker at the bottom, which suggests that the glass in the windows has been gradually flowing down over the centuries. But that appears to be a misunderstanding. From the venerable Usenet Physics FAQ list:

It is sometimes said that glass in very old churches is thicker at the bottom than at the top because glass is a liquid, and so over several centuries it has flowed towards the bottom.  This is not true.  In Mediaeval times panes of glass were often made by the Crown glass process.  A lump of molten glass was rolled, blown, expanded, flattened and finally spun into a disc before being cut into panes.  The sheets were thicker towards the edge of the disc and were usually installed with the heavier side at the bottom.  Other techniques of forming glass panes have been used but it is only the relatively recent float glass processes which have produced good quality flat sheets of glass.

The FAQ entry contains a bunch of references, including this AJP article, for those who want to geek out on this subject.

I was reminded of this when a friend pointed me to the pitch drop experiment at the University of Queensland. Someone put some pitch, which has a viscosity 10^11 times that of water, in a funnel in 1930, and they’ve been letting it drip out ever since, at a rate of one drop every 10 years or so.

The next drop is due soon. You can hope to catch it on a live video feed at the site.

Here’s a picture of the experiment:

[Image: the pitch drop experiment]

And here’s what happens when you hit the pitch with a hammer:

[Image: hitting the pitch with a hammer]

I think you could be forgiven for mistaking this stuff for a solid.

 

Calculate or simulate?

A couple of weeks ago, Slate published two oddly similar articles within a few days of each other: Extra Points Are For Losers and The Supreme Court Justice Death Calculator.

You might not initially think the articles have that much in common: one’s about football strategy and one’s about the future of the US Supreme Court. But beneath the surface they’re practically the same: both are calculations of joint probabilities for multiple events.

The football article points out that in a certain situation it’s provably the right strategy for an NFL team to go for a two-point conversion instead of a single extra point. Specifically, if you’re down 14 points near the end of the game, and you score a touchdown, you should go for 2.

You can go to the article for details, but here’s the idea. The decision only matters under the assumption that you’re going to score another touchdown and the other team isn’t going to score. So let’s assume that that’s going to happen. If you take the usual extra point each time, you’re going into overtime (assuming your kicker always makes the extra point, which is roughly true in pro football), and you’ve got a 50-50 shot at a win. On the other hand, if you go for the two-point conversion, either you make it and are guaranteed a win, or you miss it, in which case you go for 2 again the next time and have a chance to throw the game into overtime again. A straightforward calculation shows that the second strategy gives you better than even odds of winning (for reasonable estimates of the likelihood of a two-point conversion).
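
Here’s a minimal sketch of that calculation; the conversion rates are illustrative assumptions, not league statistics:

    def p_win_kick():
        # Kick the extra point after both touchdowns: tie game, then a
        # coin-flip overtime.
        return 0.5

    def p_win_go_for_two(p_conv):
        # Make the first conversion: an ordinary extra point after the
        # second touchdown wins outright.
        # Miss it: go for 2 again after the second touchdown to force
        # overtime, which you then win half the time.
        return p_conv + (1 - p_conv) * p_conv * 0.5

    for p in (0.40, 0.45, 0.50):
        print(f"conversion rate {p:.2f}: "
              f"kick {p_win_kick():.3f}, go for two {p_win_go_for_two(p):.3f}")

Setting p + (1 - p) * p / 2 = 1/2 and solving shows that going for 2 wins out whenever the conversion rate exceeds about 38%, which is below what NFL teams typically achieve.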

The Supreme Court article contains a slightly macabre app that lets you calculate the probabilities of various combinations of Supreme Court justices dying during the second Obama administration. Want to know the odds that Obama will get to appoint a replacement for one of the conservative justices? One of the liberals? One of each? You can play around to your heart’s content.

The math underlying both of these calculations is exactly the same. It’s pretty much just

P(A and B) = P(A) P(B).

That is, to get the probability that two independent events both occur, you multiply together the probabilities of each one.

Oddly, the two articles take different approaches to calculating the probabilities. The football article just calculates them directly by doing the multiplication, but the Supreme Court article estimates them by simulation. If you ask the app for the probability that both Scalia and Kagan will die, it runs 10,000 simulations of the future and counts up the results.

That’s actually not a very good thing to do, especially if you’re interested in low-probability combinations such as this one. If you ask the app that question repeatedly, the numbers bounce around: 0.32%, 0.26%, 0.27%, 0.41%.

In the guts of the program there must be probabilities for each of the individual events, so you could answer this question by simply multiplying them together. The result wouldn’t jump around like that and would be more precise than the simulation-based one (although not necessarily more accurate — as the article points out, the calculation is based on some assumptions that might not be correct).

You could reduce the scatter by raising the number of simulations, of course, but it’s odd to use simulations in this situation to begin with. Estimating probabilities via simulation is a great tool when it’s impossible or difficult to calculate the probabilities exactly, but in a situation like this it’s quicker, simpler, and more precise to just calculate them.
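
To make the contrast concrete, here’s a sketch with stand-in numbers (the per-justice probabilities below are assumptions for illustration, not the app’s actual values):

    import random

    # Assumed per-justice probabilities of dying during the term.
    p_justice_a, p_justice_b = 0.06, 0.045

    # Direct calculation: multiply the independent probabilities.
    print(f"exact: {p_justice_a * p_justice_b:.2%}")

    # Simulation-based estimate, 10,000 trials per run, as the app does.
    for run in range(4):
        hits = sum(1 for _ in range(10_000)
                   if random.random() < p_justice_a
                   and random.random() < p_justice_b)
        print(f"simulation run {run + 1}: {hits / 10_000:.2%}")

The exact line prints 0.27% every time; the simulation runs scatter around it, just like the app’s output.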

 

Contradictory laws

This whole business with the debt ceiling is generally discussed as a political issue, but I’ve been wondering about it as a purely legal issue.

Here’s the question. People often say that, if Congress doesn’t raise the debt ceiling, the US won’t be able to pay its bills and will be forced to default. But as I understand the law (not very well, that is), that doesn’t seem quite right. If Congress does nothing, then, if I’m not mistaken, the law requires two contradictory things:

  1. The US government will be required to spend the money allocated by Congress.
  2. The US government will be forbidden to execute the mechanism that allows for this spending.

(Let’s follow the President’s lead and leave the platinum-coin option out of the discussion for now.)

When people say that the US will be forced to default in this situation, they’re assuming that the government will obey law 2 and break law 1. Why couldn’t it be the other way around?

Although the question is inspired by the current controversy, I’m curious about the broader legal question: when the law is self-contradictory, is there a legal principle that governs which one takes precedence, or do people get to pick and choose?

Although of course you’d hope that legislatures would avoid making logically incompatible laws, I’d bet that this question has arisen from time to time, in which case it seems to me that there’d be case law laying out a clear legal guideline. Is there?

I know that when it’s a statute vs. the Constitution, the Constitution wins — “supreme law of the land” and all that. But in this case it’s statute vs. statute. (For the sake of argument, let’s assume that both laws are constitutional. I know there’s a 14th-amendment argument about the constitutionality of the debt ceiling, but let’s ignore that for now.)

There’s a famous theorem in symbolic logic of the form

p ∧ ¬p ⇒ q,

pronounced “p and not-p together imply q.” It says that, once you’ve established both halves of a contradiction, you can logically infer anything you like. (Apparently this is called the paradox of entailment; it does have a Latin name, ex falso quodlibet, though it’s not as cool as modus ponens.) Maybe the President should use this principle of logic to say that, if the debt ceiling controversy isn’t resolved, he’s allowed to do anything he wants.
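
The tautology is easy to verify mechanically. Here’s a quick truth-table check in Python, since symbolic logic maps straight onto boolean operators:

    # (p and not p) implies q, written as a material implication:
    # the antecedent is always False, so the implication is always True.
    for p in (True, False):
        for q in (True, False):
            implication = (not (p and not p)) or q
            assert implication
    print("p ∧ ¬p ⇒ q holds for every assignment of p and q")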

 

Does chocolate cause Nobel prizes?

This has been around for a couple of months, but I just discovered it. An article in the New England Journal of Medicine notes a correlation between per capita chocolate consumption and per capita Nobel prizes in various countries:

[Figure: per capita chocolate consumption vs. Nobel laureates per capita, by country]

The article’s clearly meant to be playful, not serious — something I didn’t know NEJM went in for.

This result came to my attention via a report on the BBC program (or rather programme) More or Less, which includes the following:

Eric Cornell, who won the Nobel Prize in Physics in 2001, told Reuters: “I attribute essentially all my success to the very large amount of chocolate that I consume. Personally I feel that milk chocolate makes you stupid… dark chocolate is the way to go. It’s one thing if you want a medicine or chemistry Nobel Prize but if you want a physics Nobel Prize it pretty much has got to be dark chocolate.”

But when More or Less contacted him to elaborate on this comment, he changed his tune.

“I deeply regret the rash remarks I made to the media. We scientists should strive to maintain objective neutrality and refrain from declaring our affiliation either with milk chocolate or with dark chocolate,” he said.

“Now I ask that the media kindly respect my family’s privacy in this difficult time.”

The program goes on to talk about the actual lesson here, which is our old friend Correlation Is Not Causation. This takes me back to the glorious moment when I first discovered the existence of xkcd:

[xkcd comic on correlation and causation]

In the case of the chocolate-Nobel connection, the correlation is highly statistically significant — that is, it’s overwhelmingly unlikely to get such a correlation by chance. That could be explained if chocolate consumption caused Nobel prizes (or vice versa), but it could also be explained if one or more other factors cause increases in both. More or Less explains this pretty well, but oddly doesn’t mention what seems to me to be the most obvious such factor.

Chocolate, despite what you may hear in some quarters, is not a biological necessity but rather a luxury. So you’d expect chocolate consumption to be positively correlated with wealth. And wealthy countries have resources to spend on science, so you’d definitely expect Nobel prize rates to be positively correlated with wealth. So the link between Nobel prizes and chocolate may simply be an artifact of the link between each of these and wealth. I’d bet that that’s all that’s going on here.
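
A toy simulation makes the point. The details below (the linear dependence on wealth, the noise levels) are invented purely for illustration; the only claim is that a shared driver produces a correlation with no causal link:

    import random

    random.seed(0)

    # Wealth drives both quantities; chocolate has no effect on Nobels.
    countries = []
    for _ in range(50):
        wealth = random.gauss(0, 1)
        chocolate = wealth + random.gauss(0, 0.5)  # noisy function of wealth
        nobels = wealth + random.gauss(0, 0.5)     # another noisy function of wealth
        countries.append((chocolate, nobels))

    def correlation(pairs):
        n = len(pairs)
        mean_x = sum(x for x, _ in pairs) / n
        mean_y = sum(y for _, y in pairs) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
        var_x = sum((x - mean_x) ** 2 for x, _ in pairs)
        var_y = sum((y - mean_y) ** 2 for _, y in pairs)
        return cov / (var_x * var_y) ** 0.5

    # Prints a strong positive correlation despite zero causation.
    print(f"chocolate-Nobel correlation: {correlation(countries):.2f}")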

Performance art

Nature’s got a piece up under the headline Duelling visions stall NASA: A US plan to send humans to explore an asteroid is losing momentum.

Summary: President Obama proposes an asteroid as the next major destination for humans in space, but nobody at NASA’s convinced this is (a) feasible or (b) worthwhile.

Here’s the most pro-asteroid part of the article:

But Mark Sykes, president of the Planetary Science Institute in Tucson, Arizona, and chair of NASA’s Small Bodies Assessment Group, remains a big fan of asteroids. He notes that human explorers could search for resources such as water. Scientists could seek to understand the subtle pressure of light that causes asteroids to change their spin, and could retrieve samples for dating and chemical analysis that would offer a clearer picture of Solar System material than do meteorites, which, although they are pieces of asteroids, are altered during their fall through Earth’s atmosphere.

But all this could be done more cheaply with a robotic mission, says Sykes. Without a sustained drive towards something bigger — such as a human presence on Mars — even Sykes isn’t terribly excited. “You go to an asteroid, then what?” he says. “If it’s all performance art, that’s not much of a mission.”

“Search for resources such as water” has got to be the stupidest reason imaginable for going to an asteroid. As Bob Park put it a few years ago, “I told NASA that I would be happy to leave my garden hose out and they could come by and take all the water they want.”

Sykes is quite right that all of the listed reasons would be much better done with robots than people. You don’t send people into space to do science.

There’s one and only one reason for sending humans to any given destination in space: because it’d be awesome to send humans there. Is the intrinsic awesomeness of someone going to an asteroid worth the cost and risk? If so, let’s send people there. If not, not.

The word “risk” there is a huge understatement, by the way:

Then there is the problem of just getting there. NASA is increasingly concerned about the radiation exposure and bone loss that astronauts might face during a long voyage outside Earth’s protective magnetosphere. “You get a bad solar storm and you’re toast,” says Mackwell.

As far as I can tell, having humans spend years in zero gravity being bombarded by radiation is virtually certain to ruin their physical health and lead to premature death.

 

A missed opportunity to teach some mathematics

I’m surprised that I missed this scandal in the aftermath of Hurricane Sandy:

To deal with fuel shortages after the storm, New York Mayor Michael Bloomberg introduced rationing on 8 November.

“Drivers in New York City who have licence plates that end in an odd number or end in a letter or other character will be able to buy gas or diesel only on odd-numbered days such as tomorrow which happens to be the 9th,” he said.

“Those with licence plates ending in an even number, or the number zero, will be able to buy gas or diesel only on even number days such as Saturday November 10th.”

I knew about the rationing. The scandalous part is that Bloomberg uttered the phrase “an even number, or the number zero.” I suppose you could argue that it’s technically correct (as long as the word “or” is inclusive), but it certainly seems to imply that zero is not an even number.

It’s probably true that lots of people don’t know that zero is an even number, so including the clarification makes sense. I just wish Bloomberg had taken the opportunity to educate people a bit by saying “an even number, including zero” instead of “or zero.” He may, inexplicably, not have thought that this was a priority under the circumstances.
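
For what it’s worth, the rule is just a parity check, and zero lands on the even side with no special handling at all. A hypothetical sketch (not anything the city actually published):

    def fuel_day(plate: str) -> str:
        """Which days a plate can buy fuel under the rationing rule."""
        last = plate[-1]
        if not last.isdigit():
            return "odd"   # letters and other characters go on odd days
        # 0 % 2 == 0, so zero comes out even with no special case needed.
        return "even" if int(last) % 2 == 0 else "odd"

    for plate in ["ABC1237", "ABC1230", "ABC123X"]:
        print(plate, "->", fuel_day(plate))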

By the way, before you use this as a jumping-off point for a jeremiad about the sorry state of American mathematical knowledge, I should point out that this is not an exclusively American problem. From the same BBC report:

It’s not just the public who have struggled to recognise zero as an even number. During the smog in 1977 in Paris, car use was restricted so that people with licence plates ending in odd or even numbers drove on alternate days.

“The police did not know whether to stop the zero-numbered licence plates and so they just let them pass because they didn’t know whether it was odd or even,” says Dr Grime.

 

Joe Palca should be ashamed of himself

for generating this report, and NPR should be ashamed of themselves for running it on their flagship news program Morning Edition. For that matter, so should John Grotzinger, the NASA scientist interviewed in the segment.

Grotzinger says they recently put a soil sample in SAM, and the analysis shows something remarkable. “This data is gonna be one for the history books. It’s looking really good,” he says.

Grotzinger can see the pained look on my face as I wait, hoping he’ll tell me what the heck he’s found, but he’s not providing any more information.

So why doesn’t Grotzinger want to share his exciting news? The main reason is caution. Grotzinger and his team were almost stung once before. When SAM analyzed an air sample, it looked like there was methane in it, and at least here on Earth, some methane comes from living organisms.

Either they’ve got something amazing, in which case it’ll still be amazing once they’ve released it, or they don’t, in which case this is just a bit of hype that does a bit more to erode the credibility of scientists everywhere. There’s no scenario in which this report has any positive effect.

 

Quantum physics and nonlocality

[I just noticed this sitting in the “Drafts” section of my blog. (You can tell it’s old, because I mention something that I’m “going to be teaching in the fall.”) I don’t know why I didn’t post it at the time. Did I notice something wrong with the physics, and put it aside until I fixed it? I don’t think so — it looks right to me.

This is a subject lots of people have written about, and what I say here is just a summary of some standard stuff, but it’s an incredibly cool subject, so if you don’t already know all this, check it out.]

 

Einstein’s big problem with quantum physics was that it involved “spooky action at a distance.”  His most famous quote on quantum physics is about randomness — “God does not play dice” — but in fact he seems to have been much more bothered by nonlocality than randomness.

The sort of nonlocality that bothered Einstein, and that bothers many other people, is kind of hard to explain.  In particular, one key ingredient is a thing called Bell’s Inequality, which is somewhat technical.  There are some very good explanations out there, especially a couple by N.D. Mermin, e.g., one in the American Journal of Physics (paywall) and one that originally appeared in Physics Today.  But recently I came across a nice way of thinking about it in an unexpected place: a puzzle blog.  Even though this way of formulating the result seems to be well known in certain circles (and is apparently close to what Bell wrote in his original paper), I’d never encountered it before.

If you want to understand this stuff, you can read any or all of the above.  But I’m going to try to summarize the main idea here too, mostly because I’m going to be teaching this topic in the fall, and I can use the practice.  So here goes.

Let me start with a quote from the puzzle blog:

Suppose three friends A, B and C take a test with 100 yes-no questions. If you compare the answers given by A and B, 98 of the 100 are the same. Likewise, if you compare the answers given by B and C, again 98 of the 100 are the same. What is the minimum number of questions that A and C have answered in the same way?

The answer, of course, is 96: A and B differ on at most 2 of the 100 answers, and B and C differ on at most 2, so A and C can differ on at most 4. Now let’s connect this with quantum physics.

In quantum physics,  particles like electrons have a property called “spin.”  The main thing you need to know about electron spin is that when you measure it you always get one of two values, which are usually called “spin-up” and “spin-down.”

Spin-up and spin-down have to be specified relative to some chosen axis: you can pick any direction you want, and measure the spin of an electron along an axis oriented in that direction, and you’ll get either spin-up or spin-down.  (If the axis is horizontal, then the terminology is kind of stupid: it’d make much more sense to call it “spin-left” and “spin-right”.  But we still say “up” and “down” even in this case.)

It’s relatively easy (or so they tell me) to produce pairs of particles that are “entangled,” meaning that their spins are related to each other.  In particular, it’s possible to produce a pair of particles that have the following properties:

  1. There’s no way to predict in advance what the result will be of any spin measurement on either of the particles.
  2. If you measure the spin of one particle, the spin of the other particle, measured about the same axis, is guaranteed to be the same.

(Actually, it’s technically easier to create pairs where the spins are guaranteed to be opposite, but it’s possible to flip one around after the fact, and anyway it’s easier to explain this way.)

So far, there’s nothing “spooky” going on. In fact, these electrons are just like my socks, which  — take my word for it — are not at all spooky.  You don’t know what color my socks are, but knowing what a fastidious dresser I am, you can bet that they match.  So there’s no way to predict the result of one sock-color measurement, but once you’ve measured one you can predict the other with certainty.

(If you actually tried to do this experiment, you’d have a problem: I’m not wearing socks at the moment.  The sock analogy comes from an essay by John Bell, by the way, although I think in his case he imagined someone whose socks always failed to match.)

Einstein believed that the electrons in this system really were just like my socks: my socks have a definite color, even if you don’t know it, and the electrons (according to Einstein) have a definite spin-value even before we’ve measured it.  If we repeat this experiment many times, each pair of electrons will leave the source with a plan in mind about whether to say “up” or “down” when the spin is measured.  That plan may be chosen randomly, but the two particles have the same plan, so of course they give the same answer.  The name for this point of view is “local realism,” by the way.

Bell’s inequality says that local realism is impossible.  To see why, we have to bring in the fact that the spins can be measured about different axes — that is, that we can rotate our measuring apparatus before the electron hits it.  Rule 2 above says what happens when spins of both particles are measured with respect to the same axis: the results agree 100% of the time.  It doesn’t matter what axis we choose, as long as it’s the same for both measurements.  But what if we rotate one measurement apparatus relative to the other?  There’s another rule for that:

3. If the two measurement axes are rotated with respect to each other by an angle x, then the results of the measurements will agree cos²(x/2) of the time.

In particular, say one axis is tilted 16.3 degrees away from the other.  Then that number works out to 0.98.  That is, the results will agree 98% of the time.
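
Rule 3 is easy to check numerically. A quick sketch:

    import math

    def agreement(angle_degrees):
        # Quantum prediction for spins measured about axes separated
        # by the given angle.
        x = math.radians(angle_degrees)
        return math.cos(x / 2) ** 2

    print(f"{agreement(0.0):.2f}")    # same axis: perfect agreement, 1.00
    print(f"{agreement(16.3):.2f}")   # 0.98
    print(f"{agreement(32.6):.2f}")   # 0.92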

Now suppose we have a machine that can mass-produce pairs of entangled particles.  We can imagine doing an experiment in which we measure the spin of either particle about a vertical axis.   (It doesn’t matter which particle: we know they’ll come out the same.)  The series of outcomes of those measurements (up, down, down, up, up, down, down, down, …) is like student B’s test.

Now suppose that instead we measure the first particle’s spin about an axis that’s tilted 16.3 degrees to the left.  The resulting sequence is like student A’s test: it’ll agree with the first list 98% of the time.

Finally, suppose that instead we measure the second particle’s spin about an axis that’s tilted 16.3 degrees to the right.  The results here are (you won’t be surprised to hear) like student C’s test.  Once again, there’s a 98% match with the first list.

Here’s the thing: there’s nothing stopping us from measuring both the second and third lists simultaneously (particle 1 with an axis tilted to the left, and particle 2 with an axis tilted to the right).  If we do, we find that the results match only 92% of the time.  (That’s rule 3, but with x=32.6 degrees, which is how far apart these two axes are.)  That should be impossible: if both sets of tilted-axis results are 98% correlated with the (hypothetical) vertical-axis results, then they must be at least 96% correlated with each other.
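
You can verify the classical bound by brute force. In a local-realist world each pair leaves the source with preassigned answers for all three axes, so there are only eight possible “plans,” and the bound holds for every one of them (a sketch):

    from itertools import product

    # Preassigned answers for the three axes: V (vertical), L (tilted left),
    # R (tilted right). up = 1, down = 0.
    for v, l, r in product((0, 1), repeat=3):
        disagree_lv = int(l != v)
        disagree_vr = int(v != r)
        disagree_lr = int(l != r)
        # The triangle inequality holds for each individual plan...
        assert disagree_lr <= disagree_lv + disagree_vr
    print("all 8 plans satisfy disagree(L,R) <= disagree(L,V) + disagree(V,R)")

Averaging over any mixture of the eight plans preserves the inequality, so the disagreement between the two tilted axes can be at most 2% + 2% = 4%. Quantum mechanics says 8%.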

In summary, there is no way to consistently assign values to all three sets of possible measurements (vertical axis, tilted to the left, tilted to the right) that agrees with the probabilities found in quantum mechanics.  In effect, that means that the particle doesn’t “decide” whether it’s going to tell you it’s spin-up or spin-down until you decide which axis you’re going to measure.  And yet the pairs of particles manage to decide the same way, even if they’re very far apart.

By the way, Bell’s inequality was originally a theoretical result.  Rules 1-3 were known to be the predictions made by quantum mechanics, but they hadn’t been tested experimentally at the time.  So originally all you could conclude was that either quantum physics was wrong or local realism was wrong.  But later, the experiment was done, and the quantum physics predictions were confirmed to be correct.  So local realism is wrong, and electrons are not like my socks.

Political innumeracy

A couple of quick pre-election notes:

 

1. If I could outlaw one type of political journalism, it’d be stories of the form “This election will be decided by voters of Type X” (rural voters, working-class voters, etc.). In past elections, we were told that the election would be decided by soccer moms, NASCAR dads, security moms, and a host of others.

The problem with these statements is not that they’re false — it’s that they’re true but vacuous. What does it mean to say that the election will be decided by young, unmarried women? It means that, if young, unmarried women tend to vote more than expected for a candidate, then that candidate will win. But in a close election, that’s equally true for all groups.

The problem is that “if more people vote for your guy than the other guy, then your guy will win” doesn’t make for a compelling campaign narrative, so the poor beleaguered horserace journalist has to come up with some novel-sounding theory about some particular group that’s going to “decide” the election. Either that or they could find something useful and important to write about.

 

2. One bit of good news from the campaign: Bayesian reasoning has gone mainstream. Look at all the attention Nate Silver’s been getting.

It won’t surprise anyone who knows me to hear that I love Silver’s approach to political analysis. I’ll take a clear model built on data over 100 pundits’ gut feelings.

A number of Republican-leaning types hate Silver, because his model consistently says Obama’s more likely to win. It’s true that Silver is openly pro-Obama, and it’s true that people have a tendency to (consciously or unconsciously) skew things in the direction they prefer. But there are several reasons one shouldn’t make too much of this argument:

  • Other quantitative, statistics-based models give predictions similar to Silver’s or even more pro-Obama, as do the betting markets. See, for example, the Princeton Election Consortium.
  • Silver’s reputation as a prediction whiz is on the line. His motivation to be accurate is probably stronger than his motivation to boost his guy.
  • Silver laid out his methodology explicitly and publicly quite early on, and as far as I know there’s no evidence he’s changed it.

Of course, Silver’s model might be wrong nonetheless. Some people have said silly things like “We’ll know next week how good his model was,” but of course we won’t. As Ezra Klein put it,

If Mitt Romney wins on election day, it doesn’t mean Silver’s model was wrong. After all, the model has been fluctuating between giving Romney a 25 percent and 40 percent chance of winning the election. That’s a pretty good chance! If you told me I had a 35 percent chance of winning a million dollars tomorrow, I’d be excited. And if I won the money, I wouldn’t turn around and tell you your information was wrong. I’d still have no evidence I’d ever had anything more than a 35 percent chance.

One of the sillier critiques of Silver came from Josh Gerstein of Politico:

Isn’t the basic problem with the Nate Silver prediction in question, and the critique, that it puts a percentage on a one-off event?

Of course we use probabilities to describe one-off events all the time. Does Gerstein listen to weather forecasts?

Then there’s the New York Times’s public editor, Margaret Sullivan, who thinks Silver violated journalistic ethics when he offered to make a bet against Joe Scarborough. Scarborough insists that anyone who thinks the race is anything other than a tossup is an idiot, so Silver offered him an even-money bet on the race (with the loser donating money to charity, so neither stands to gain personally).

Sullivan’s point of view on this strikes me as extremely silly. Offering to bet is a standard rhetorical trick when having arguments about probabilities. If you really believe in your probabilistic statement, you should be willing to use it as the basis of a bet.

I don’t think I’ve ever used vulgar language in this blog before, but here goes: by far the best way to put this point is Alex Tabarrok’s line: A bet is a tax on bullshit. If all pundits who opine about the race had to put bets on their predictions, we’d be a lot better off.

 

I don’t know why there is something rather than nothing, and neither does Stephen Hawking

Over on his excellent blog, In the Dark, Peter Coles quotes Stephen Hawking saying,

Because there is a law such as gravity, the universe can and will create itself from nothing.

He then asks

Huh? I can’t make sense of it at all. Is it just me that finds it entirely devoid of either logic or meaning?

He has a poll where you can vote on whether the statement is meaningful.

I voted, and then I wrote a comment explaining my vote. Having written it, I figured I might as well throw it up here, so that two or three more people might see it:

I find the intended meaning of the statement tolerably clear: given that there are certain laws of nature, including gravity (among other things such as quantum mechanics), a vacuum state (“nothing”) can and will evolve into a state containing a universe like ours.

That strikes me as meaningful and quite possibly even true. As a piece of science communication to the general public, though, it’s counterproductive. In context, it’s clear that Hawking means to claim this as an answer to hoary old questions of the “why is there something rather than nothing” variety, and it doesn’t do that. If you’re the sort of person who’s inclined to be bothered by questions of that sort, you’ll be just as bothered after understanding this claim as you were before. You’ll just want to know why there was a vacuum state lying around obeying these particular laws of physics.

Similarly, this argument certainly doesn’t prove the non-existence of God, as Hawking seems to be claiming.

Scientists harm our brand when we make overly broad claims about what science can “prove.” Hawking should know better.

Scientists who try to explain things to the general public are on the side of the (secular) angels, but it drives me crazy when they make overly grandiose claims, either about the science itself or about its philosophical interpretation. Every time a scientist does this, it erodes the credibility of the entire profession.