{"id":1056,"date":"2016-01-24T18:42:10","date_gmt":"2016-01-24T23:42:10","guid":{"rendered":"http:\/\/blog.richmond.edu\/physicsbunn\/?p=1056"},"modified":"2016-01-24T18:42:10","modified_gmt":"2016-01-24T23:42:10","slug":"horgan-on-bayes","status":"publish","type":"post","link":"https:\/\/blog.richmond.edu\/physicsbunn\/2016\/01\/24\/horgan-on-bayes\/","title":{"rendered":"Horgan on Bayes"},"content":{"rendered":"<p>John Horgan has a <a href=\"http:\/\/blogs.scientificamerican.com\/cross-check\/bayes-s-theorem-what-s-the-big-deal\/\">piece<\/a>\u00a0at <em>Scientific American<\/em>&#8216;s site\u00a0entitled &#8220;Bayes&#8217;s Theorem: What&#8217;s the Big Deal?&#8221; The article&#8217;s conceit is that, after hearing people touting Bayesian reasoning to him for many years, he finally decided to learn what it was all about and explain it to his readers.<\/p>\n<p>His explanation is not bad at first. He gets a lot of it from this <a href=\"http:\/\/www.yudkowsky.net\/rational\/bayes\">piece<\/a> by Eliezer Yudkowsky, which is very good but very long. (It does have jokes sprinkled through it, so keep reading!) Both Yudkowsky and Horgan emphasize that\u00a0Bayes&#8217;s theorem is actually rather obvious. Horgan:<\/p>\n<blockquote><p>This example [of the probability of false positives in medical tests] suggests that the Bayesians are right: the world would indeed be a better place if more people\u2014or at least more health-care consumers and providers&#8211;adopted Bayesian reasoning.<\/p>\n<p>On the other hand, Bayes\u2019 theorem is just a codification of common sense. As Yudkowsky writes toward the end of his tutorial: \u201cBy this point, Bayes&#8217; theorem may seem blatantly obvious or even tautological, rather than exciting and new. If so, this introduction has entirely succeeded in its purpose.\u201d<\/p><\/blockquote>\n<p>That&#8217;s right! 
Bayesian reasoning is simply the (unique) correct way to reason quantitatively about probabilities, in situations where the experimental evidence doesn&#8217;t let you draw conclusions with mathematical certainty (i.e., pretty much all situations).<\/p>\n<p>Unfortunately, Horgan eventually goes off the rails:<\/p>\n<blockquote><p>The potential for Bayes abuse begins with P(B), your initial estimate of the probability of your belief, often called the \u201cprior.\u201d In the cancer-test example above, we were given a nice, precise prior of one percent, or .01, for the prevalence of cancer. In the real world, experts disagree over how to diagnose and count cancers. Your prior will often consist of a range of probabilities rather than a single number.<\/p>\n<p>In many cases, estimating the prior is just guesswork, allowing subjective factors to creep into your calculations. You might be guessing the probability of something that&#8211;unlike cancer\u2014does not even exist, such as strings, multiverses, inflation or God. You might then cite dubious evidence to support your dubious belief. In this way, Bayes\u2019 theorem can promote pseudoscience and superstition as well as reason.<\/p><\/blockquote>\n<p>The problem he&#8217;s talking about is, to use a clich\u00e9, not a bug but a feature. When the evidence doesn&#8217;t prove, with mathematical certainty, whether a statement is true or false (i.e., pretty much always), your conclusions\u00a0<em>must<\/em> depend on your subjective assessment of the prior probability. To expect the evidence to do more than that is to expect the impossible.<\/p>\n<p>In the example Horgan is using, suppose that a patient is given a cancer test with known rates of false positives and false negatives. The patient tests positive. In order to interpret that result and decide how likely the patient is to have cancer, you need a prior probability. 
If you don&#8217;t have one based on data from prior studies, you have to use a subjective one. (And the prior matters enormously: with a prior as low as the one percent in Horgan&#8217;s example, even a positive result from a fairly accurate test can leave the probability of cancer below ten percent.)<\/p>\n<p>The doctor and patient in such a situation will, inevitably, decide what to do next based on some combination of the test result and their subjective prior probabilities. The\u00a0only choice they have is whether to do it\u00a0unconsciously or consciously.<\/p>\n<p>The second paragraph quoted above is simply nonsense. If you apply Bayesian reasoning to any of those things that may or may not exist, you will reach conclusions that combine\u00a0your prior belief with the evidence. I have no idea\u00a0in what sense doing this &#8220;promote[s] pseudoscience.&#8221; More importantly, I have no idea what alternative Horgan would have us choose.<\/p>\n<p>Here&#8217;s the worst part of the piece:<\/p>\n<blockquote><p>Embedded in Bayes\u2019 theorem is a moral message: If you aren\u2019t scrupulous in seeking alternative explanations for your evidence, the evidence will just confirm what you already believe. Scientists often fail to heed this dictum, which helps explains why so many scientific claims turn out to be erroneous. Bayesians claim that their methods can help scientists overcome confirmation bias and produce more reliable results, but I have my doubts.<\/p><\/blockquote>\n<p>Horgan doesn&#8217;t cite any examples of erroneous claims that can be blamed on Bayesian reasoning. In fact, this statement seems to me to be nearly the exact opposite of the truth.<\/p>\n<p>There&#8217;s been a lot of angst in the past few years about non-replicable scientific findings. 
One of the main contributors to this problem, <a href=\"https:\/\/blog.richmond.edu\/physicsbunn\/2015\/03\/09\/p-values-arent-wrong-theyre-just-uninteresting\/\">as far as I can tell<\/a>, is that scientists are\u00a0<em>not<\/em> using Bayesian reasoning: they are interpreting <em>p<\/em>-values as if they told us whether various hypotheses are true or not, without folding in any prior information.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>John Horgan has a piece\u00a0at Scientific American&#8216;s site\u00a0entitled &#8220;Bayes&#8217;s Theorem: What&#8217;s the Big Deal?&#8221; The article&#8217;s conceit is that, after hearing people touting Bayesian reasoning to him for many years, he finally decided to learn what it was all about and explain it to his readers. His explanation is not bad at first. He gets &hellip; <a href=\"https:\/\/blog.richmond.edu\/physicsbunn\/2016\/01\/24\/horgan-on-bayes\/\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Horgan on 
Bayes<\/span><\/a><\/p>\n","protected":false},"author":12,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1056","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/posts\/1056","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/users\/12"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/comments?post=1056"}],"version-history":[{"count":0,"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/posts\/1056\/revisions"}],"wp:attachment":[{"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/media?parent=1056"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/categories?post=1056"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/tags?post=1056"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}