Friday, January 18, 2013

Taking science reporting with a grain of salt


The chart above displays the history of studies of the health effects of omega-3 (fish oil) supplements, from early enthusiasm about a health benefit to later evidence of no such effect, explained as follows:
For a small study (such as Sacks’ and Leng’s early work in the top two rows of the table) to get published, it needs to show a big effect: no one is interested in a small study that found nothing. It is likely that many other small studies of fish oil pills were conducted at the same time as Sacks’ and Leng’s, found no benefit, and were therefore not published. But by the play of chance, it was only a matter of time before a small study found what looked like a big enough effect to warrant publication in a journal editor’s eyes.
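To see how chance alone eventually hands a small study a "big" effect, here is a minimal simulation sketch in Python (the study count, arm sizes, and publishability threshold are illustrative assumptions, not figures from the post). It runs many small trials of a supplement with zero true effect and counts how many would look publishable anyway:

    import random
    import statistics

    # Illustrative assumptions (not from the post): 100 small trials,
    # 20 subjects per arm, a true effect of zero, and "publishable"
    # meaning the observed difference exceeds half a standard deviation.
    random.seed(42)

    N_TRIALS = 100     # independent small studies run around the same time
    N_PER_ARM = 20     # subjects in each of the treatment and control arms

    publishable = 0
    for _ in range(N_TRIALS):
        # The supplement does nothing: both arms draw from the same distribution.
        treatment = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
        control = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
        observed = statistics.mean(treatment) - statistics.mean(control)
        if abs(observed) > 0.5:  # looks like a "big effect" to an editor
            publishable += 1

    print(f"{publishable} of {N_TRIALS} null studies showed a 'big' effect")

With these numbers, roughly one null study in ten crosses the threshold by luck alone, and those are the ones that reach a journal.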

At that point in the scientific discovery process, people start to believe the finding, and null effects thus become publishable because they overturn "what we know". The new studies are also larger, because the area now seems promising and big research grants become attainable. Much of the time, these larger and hence more reliable studies cut the "miracle cure" down to size.
The take-home point here is not about fish oil, but about any report that product XYZ is good (or bad) for your health or well-being. The mainstream media (and the 'net) are full of such stuff. It's also important to ascertain who funded a given study: if the National Association of WidgetMakers funds seven studies, six may be inconclusive or negative while the seventh shows a benefit, and only that last one will get publicized.
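The arithmetic behind that scenario is easy to check. Assuming, purely for illustration, a conventional 5% false-positive rate per study and a product that truly does nothing, the chance that at least one of seven independent studies comes up positive is already about 30%:

    # Chance that at least one of 7 independent null studies "succeeds",
    # assuming a conventional 5% false-positive rate per study.
    p_false_positive = 0.05
    n_studies = 7
    p_at_least_one = 1 - (1 - p_false_positive) ** n_studies
    print(f"{p_at_least_one:.0%}")  # prints 30%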

Graph and text via The Dish.
