{"id":979,"date":"2015-06-17T17:29:48","date_gmt":"2015-06-17T22:29:48","guid":{"rendered":"http:\/\/blog.richmond.edu\/physicsbunn\/?p=979"},"modified":"2015-06-17T17:29:48","modified_gmt":"2015-06-17T22:29:48","slug":"that-fake-chocolate-weight-loss-study","status":"publish","type":"post","link":"https:\/\/blog.richmond.edu\/physicsbunn\/2015\/06\/17\/that-fake-chocolate-weight-loss-study\/","title":{"rendered":"That fake chocolate weight-loss study"},"content":{"rendered":"<p>I&#8217;m a little late getting to this, but in case you haven&#8217;t heard about it, here it is.<\/p>\n<p>A journalist named John\u00a0Bohannon did a stunt recently in which he and some coauthors &#8220;published&#8221; a &#8220;study&#8221; that &#8220;showed&#8221; that chocolate caused weight loss. (The reasons for the scare quotes will become apparent.) The work was picked up by a bunch of news outlets. Bohannon wrote about the whole thing on <a href=\"http:\/\/io9.com\/i-fooled-millions-into-thinking-chocolate-helps-weight-1707251800#\">io9<\/a>. 
It&#8217;s also been picked up in a bunch of other places, including the BBC radio program <a href=\"http:\/\/www.bbc.co.uk\/programmes\/p02t0m81\"><em>More or Less<\/em> <\/a>(which <a href=\"http:\/\/blog.richmond.edu\/physicsbunn\/2009\/04\/28\/quantitative-thinking-and-environmentalism\/\">I&#8217;ve<\/a> <a href=\"http:\/\/blog.richmond.edu\/physicsbunn\/2012\/05\/13\/why-astrophysicists-should-measure-immigration-delays-at-heathrow\/\">mentioned<\/a> <a href=\"http:\/\/blog.richmond.edu\/physicsbunn\/2013\/07\/10\/before-asking-why-ask-whether\/\">before<\/a> <a href=\"http:\/\/blog.richmond.edu\/physicsbunn\/2012\/06\/22\/kahneman-on-taxis\/\">a<\/a> <a href=\"http:\/\/blog.richmond.edu\/physicsbunn\/2013\/01\/11\/does-chocolate-cause-nobel-prizes\/\">few times<\/a>).<\/p>\n<p>The idea was to do a study that was shoddy in precisely the ways that many &#8220;real&#8221; studies are, get it published in a low-quality journal, and see if they could get it picked up by credulous journalists.<\/p>\n<blockquote><p>My colleagues and I recruited actual human subjects in Germany. We ran an actual clinical trial, with subjects randomly assigned to different diet regimes. And the statistically significant benefits of chocolate that we reported are based on the actual data. It was, in fact, a fairly typical study for the field of diet research. Which is to say: It was terrible science. The results are meaningless, and the health claims that the media blasted out to millions of people around the world are utterly unfounded.<\/p><\/blockquote>\n<p>There is an interesting question of journalistic ethics here. Bohannon calls himself a journalist, but he deliberately introduced bad science into the mediasphere with the specific intent of deception. Is it OK for a journalist to do that, if his motives are pure? I don&#8217;t know.<\/p>\n<p>I don&#8217;t want to focus on that sort of issue, because I don&#8217;t have anything non-obvious to say. 
Instead, I want to dig a bit into the details of what Bohannon <em>et al.<\/em> did. Although Bohannon&#8217;s io9 post is well worth reading and gets the big picture largely right, it&#8217;s wrong or misleading in a few ways, which happen to be the sort of thing I care about.<\/p>\n<p>Bohannon <em>et al.<\/em> recruited a group of subjects and divided them into three groups: a control group, a group that was put on a low-carb diet, and a group that was put on a low-carb diet but also told to eat a certain amount of chocolate each day. The chocolate group lost weight faster than the other groups. The result was &#8220;statistically significant,&#8221; in the usual meaning of that term &#8212; the <em>p<\/em>-value was below 0.05.<\/p>\n<p>So what was wrong with this study? As Bohannon explains it in his io9 post,<\/p>\n<blockquote><p>Here\u2019s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a \u201cstatistically significant\u201d result. Our study included 18 different measurements\u2014weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc.\u2014from 15 people. (One subject was dropped.) That study design is a recipe for false positives.<\/p>\n<p>&#8230;<\/p>\n<p>The conventional cutoff for being \u201csignificant\u201d is 0.05, which means that there is just a 5 percent chance that your result is a random fluctuation. The more lottery tickets, the better your chances of getting a false positive. So how many tickets do you need to buy?<\/p>\n<p><em>P<\/em>(winning) = 1 &#8211; (1 &#8211; <em>p<\/em>)<sup><em>n<\/em><\/sup><\/p>\n<p>With our 18 measurements, we had a 60% chance of getting some \u201csignificant\u201d result with <em>p<\/em> &lt; 0.05. (The measurements weren\u2019t independent, so it could be even higher.) 
The game was stacked in our favor.<\/p>\n<p>It\u2019s called<em> p<\/em>-hacking\u2014fiddling with your experimental design and data to push <em>p<\/em> under 0.05\u2014and it\u2019s a big problem. Most scientists are honest and do it unconsciously. They get negative results, convince themselves they goofed, and repeat the experiment until it \u201cworks.\u201d Or they drop \u201coutlier\u201d data points.<\/p><\/blockquote>\n<p>Sadly, even in this piece, whose purpose is to debunk bad statistics, Bohannon repeats the usual incredibly common error. A <em>p<\/em>-value of 0.05 does not mean that &#8220;there is just a 5 percent chance that your result is a random fluctuation.&#8221; It means that, if you assume that nothing but random fluctuations are at work, there&#8217;s a 5% chance of getting results as extreme as you did. A (frequentist) <em>p<\/em>-value is <a href=\"http:\/\/blog.richmond.edu\/physicsbunn\/2013\/06\/16\/the-bayes-wars-in-science\/\">incapable<\/a> of telling you anything about the probability of\u00a0any given hypothesis (such as &#8220;your result is a random fluctuation&#8221;).<\/p>\n<p>One other quibble: the parenthetical remark about the measurements not being independent is literally true but misleading. The fact that the measurements aren&#8217;t independent means that the probability of a false positive &#8220;could be even higher&#8221;, but it could also be lower. In fact, the latter seems more likely to me. (The probability goes down if the measurements are positively correlated with each other, and up if they&#8217;re anticorrelated.)<\/p>\n<p>The other thing that&#8217;s worth focusing\u00a0on is the number of subjects in the study, which\u00a0was incredibly small (15 across all three groups). Bohannon suggests (in the first sentence quoted above) that this is part of the reason they got a false positive, and other pieces I&#8217;ve read on this say the same thing. But it&#8217;s not true. 
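<\/p>
<p>As a quick check on the lottery-ticket arithmetic (my own sketch, not part of Bohannon&#8217;s post): with 18 independent tests at the 0.05 level, 1 &#8211; (1 &#8211; 0.05)<sup>18<\/sup> &#8776; 0.60, which is where the 60% figure comes from. A few lines of Python, assuming the tests are fully independent, make the same point by simulation:<\/p>

```python
import random

random.seed(0)
alpha, n_tests = 0.05, 18  # significance cutoff and number of measurements

# Closed form: chance that at least one of n null tests comes out
# 'significant' purely by luck
analytic = 1 - (1 - alpha) ** n_tests  # about 0.603

# Monte Carlo check: under the null hypothesis a p-value is uniform on
# [0, 1], so each test clears the cutoff with probability alpha
trials = 100_000
hits = sum(
    any(random.random() < alpha for _ in range(n_tests))
    for _ in range(trials)
)
print(analytic, hits / trials)
```

<p>The simulated rate lands close to the closed-form 0.60, and nothing in the calculation depends on how many subjects are in the study.<\/p>
<p>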
The reason they got a false positive was <em>p<\/em>-hacking (buying many lottery tickets), which would have worked just as well with a larger number of subjects. If you had more subjects, the random fluctuations would have gotten smaller, but the level of fluctuation required for statistical &#8220;significance&#8221; would have gone down as well. By definition, the odds of any one &#8220;lottery ticket&#8221; winning are 5%, whether you have a lot of subjects or a few.<\/p>\n<p>It&#8217;s true that with fewer subjects the effect size (i.e., the number of extra pounds lost, on average) is likely to be larger, but the <a href=\"http:\/\/www.scribd.com\/doc\/266969860\/Chocolate-causes-weight-loss\">published article<\/a> went to great lengths to downplay the effect size (e.g., not mentioning it in the abstract, which is often all anyone reads).<\/p>\n<p>Let me repeat that I think that Bohannon&#8217;s description of what he did is well worth reading and has a lot that&#8217;s right at the macro-scale, even though I wish that he&#8217;d gotten the above details right.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I&#8217;m a little late getting to this, but in case you haven&#8217;t heard about it, here it is. A journalist named John\u00a0Bohannon did a stunt recently in which he and some coauthors &#8220;published&#8221; a &#8220;study&#8221; that &#8220;showed&#8221; that chocolate caused weight loss. (The reasons for the scare quotes will become apparent.) 
The work was picked &hellip; <a href=\"https:\/\/blog.richmond.edu\/physicsbunn\/2015\/06\/17\/that-fake-chocolate-weight-loss-study\/\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">That fake chocolate weight-loss study<\/span><\/a><\/p>\n","protected":false},"author":12,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-979","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/posts\/979","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/users\/12"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/comments?post=979"}],"version-history":[{"count":0,"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/posts\/979\/revisions"}],"wp:attachment":[{"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/media?parent=979"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/categories?post=979"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.richmond.edu\/physicsbunn\/wp-json\/wp\/v2\/tags?post=979"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}