{"id":824,"date":"2014-02-18T17:53:09","date_gmt":"2014-02-18T22:53:09","guid":{"rendered":"http:\/\/blog.richmond.edu\/physicsbunn\/?p=824"},"modified":"2014-02-18T17:59:36","modified_gmt":"2014-02-18T22:59:36","slug":"nature-on-p-values","status":"publish","type":"post","link":"https:\/\/blog.richmond.edu\/physicsbunn\/2014\/02\/18\/nature-on-p-values\/","title":{"rendered":"Nature on p-values"},"content":{"rendered":"<p><em>Nature<\/em> has a depressing\u00a0<a href=\"http:\/\/www.nature.com\/news\/scientific-method-statistical-errors-1.14700\">piece<\/a> about how to interpret<em> p<\/em>-values (i.e., the numbers people generally use to describe &#8220;statistical significance&#8221;). What&#8217;s depressing about it? Sentences like this:<\/p>\n<blockquote><p>Most scientists would look at his original P value of 0.01 and say that there was just a 1% chance of his result being a false alarm.<\/p><\/blockquote>\n<p>If it&#8217;s really true that &#8220;most scientists&#8221; think this, then we&#8217;re in deep trouble.<\/p>\n<p>Anyway, the article goes on to give a good explanation of why this is wrong:<\/p>\n<blockquote><p>But they would be wrong. The <em>P<\/em> value cannot say this: all it can do is summarize the data assuming a specific null hypothesis. It cannot work backwards and make statements about the underlying reality. That requires another piece of information: the odds that a real effect was there in the first place. To ignore this would be like waking up with a headache and concluding that you have a rare brain tumour \u2014 possible, but so unlikely that it requires a lot more evidence to supersede an everyday explanation such as an allergic reaction. 
The more implausible the hypothesis \u2014 telepathy, aliens, homeopathy \u2014 the greater the chance that an exciting finding is a false alarm, no matter what the <em>P<\/em> value is.<\/p><\/blockquote>\n<p>The main point here is the standard workhorse idea of Bayesian statistics: the experimental evidence gives you a recipe for\u00a0<em>updating<\/em> your\u00a0<em>prior beliefs<\/em> about the probability that any given statement about the world is true. The evidence alone does not tell you the probability that a hypothesis is true. It cannot do so without folding in a prior.<\/p>\n<p>To rehash the old, standard example, suppose that you take a test to see if you have <a href=\"http:\/\/en.wikipedia.org\/wiki\/Kuru_(disease)\">kuru<\/a>. The test gives the right answer 99% of the time. You test positive. That test &#8220;rules out&#8221; the null hypothesis that you&#8217;re disease-free with a <em>p<\/em>-value of 1%. But that doesn&#8217;t mean there&#8217;s a 99% chance you have the disease. The reason is that the prior probability that you have kuru is very low. Say one person in 100,000 has the disease. When you test 100,000 people, you&#8217;ll get roughly one true positive and 1000 false positives. Your positive test is overwhelmingly likely to be one of the false ones, low <em>p<\/em>-value notwithstanding.<\/p>\n<p>For some reason, people regard &#8220;Bayesian statistics&#8221; as something controversial and heterodox. Maybe they wouldn&#8217;t think so if it were simply called &#8220;correct reasoning,&#8221; which is all it is.<\/p>\n<p>You don&#8217;t have to think of yourself as &#8220;a Bayesian&#8221; to interpret <em>p<\/em>-values in the correct way. 
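<\/p>
<p>As a sanity check on the arithmetic in the kuru example above, here is a minimal Python sketch of Bayes&#8217; theorem using the post&#8217;s numbers (the variable names are illustrative, not from any library):<\/p>

```python
# Kuru example from the post: a test that's right 99% of the time,
# a disease with a base rate of 1 in 100,000.
sensitivity = 0.99          # P(positive | disease)
false_positive_rate = 0.01  # P(positive | no disease)
prior = 1 / 100_000         # P(disease)

# Bayes' theorem: P(disease | positive)
p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
posterior = sensitivity * prior / p_positive
print(f"P(disease | positive) = {posterior:.4f}")  # about 0.001, not 0.99
```

<p>So even after a positive result with a 1% false-positive rate, the chance you actually have the disease is only about 0.1%: roughly one true positive among a thousand or so total positives, just as the counting argument above says.<\/p>
<p>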
Standard statistics textbooks all state clearly that a <em>p<\/em>-value is not the probability that a hypothesis is true, but rather the probability that, if the null hypothesis is true, a result at least as extreme as the one actually found would occur.<\/p>\n<p>Here&#8217;s a convenient Venn diagram to help you remember this:<\/p>\n<p style=\"text-align: center\"><a href=\"http:\/\/blog.richmond.edu\/physicsbunn\/files\/2014\/02\/Slide11.gif\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter  wp-image-826\" src=\"http:\/\/blog.richmond.edu\/physicsbunn\/files\/2014\/02\/Slide11.gif\" alt=\"\" width=\"393\" height=\"232\" srcset=\"https:\/\/blog.richmond.edu\/physicsbunn\/files\/2014\/02\/Slide11.gif 491w, https:\/\/blog.richmond.edu\/physicsbunn\/files\/2014\/02\/Slide11-300x177.gif 300w\" sizes=\"auto, (max-width: 393px) 100vw, 393px\" \/><\/a><\/p>\n<p>(Confession: this picture is a <a href=\"http:\/\/blog.richmond.edu\/physicsbunn\/2013\/06\/16\/the-bayes-wars-in-science\/\">rerun<\/a>.)<\/p>\n<p>If <em>Nature<\/em>&#8216;s readers really don&#8217;t know this, then something&#8217;s seriously wrong with the way we train scientists.<\/p>\n<p>Anyway, there&#8217;s a bunch of good stuff in this article:<\/p>\n<blockquote><p>The irony is that when UK statistician Ronald Fisher introduced the P value in the 1920s, he did not mean it to be a definitive test. He intended it simply as an informal way to judge whether evidence was significant in the old-fashioned sense: worthy of a second look.<\/p><\/blockquote>\n<p>Fisher&#8217;s got this exactly right. The standard in many fields for &#8220;statistical significance&#8221; is a <em>p<\/em>-value of 0.05. Unless you set the value far, far lower than that, a very large number of &#8220;significant&#8221; results are going to be false. That doesn&#8217;t necessarily mean that you shouldn&#8217;t use <em>p<\/em>-values. 
It just means that you should regard them (particularly with this easy-to-cross 0.05 threshold) as ways to decide which hypotheses to investigate further.<\/p>\n<p>Another really important point:<\/p>\n<blockquote><p>Perhaps the worst fallacy is the kind of self-deception for which psychologist Uri Simonsohn of the University of Pennsylvania and his colleagues have popularized the term <em>P<\/em>-hacking; it is also known as data-dredging, snooping, fishing, significance-chasing and double-dipping. \u201c<em>P<\/em>-hacking,\u201d says Simonsohn, \u201cis trying multiple things until you get the desired result\u201d \u2014 even unconsciously.<\/p><\/blockquote>\n<p>I didn&#8217;t know the term <em>P<\/em>-hacking, although I&#8217;d heard some of the others. Anyway, it&#8217;s a sure-fire way to generate significant-looking but utterly false results, and it&#8217;s unfortunately not at all unusual.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Nature has a depressing\u00a0piece about how to interpret p-values (i.e., the numbers people generally use to describe &#8220;statistical significance&#8221;). What&#8217;s depressing about it? Sentences like this: Most scientists would look at his original P value of 0.01 and say that there was just a 1% chance of his result being a false alarm. 
If it&#8217;s &hellip; <a href=\"https:\/\/blog.richmond.edu\/physicsbunn\/2014\/02\/18\/nature-on-p-values\/\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Nature on p-values<\/span><\/a><\/p>\n","protected":false},"author":12,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-824","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/posts\/824","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/users\/12"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/comments?post=824"}],"version-history":[{"count":0,"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/posts\/824\/revisions"}],"wp:attachment":[{"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/media?parent=824"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/categories?post=824"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/tags?post=824"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}