Here’s an interesting paper published in the open-access journal PLoS ONE, discussing the growing pressures on scientific objectivity:

The growing competition and “publish or perish” culture in academia might conflict with the objectivity and integrity of research, because it forces scientists to produce “publishable” results at all costs. Papers are less likely to be published and to be cited if they report “negative” results (results that fail to support the tested hypothesis). Therefore, if publication pressures increase scientific bias, the frequency of “positive” results in the literature should be higher in the more competitive and “productive” academic environments.

The author (Daniele Fanelli from the University of Edinburgh, Scotland) introduces what is definitely my word of the week: HARKing (Hypothesizing After the Results are Known). Anyhow, Fanelli analysed 1316 scientific papers from the United States to determine the percentage of ‘positive’ results (i.e. those supporting the tested hypothesis) versus ‘negative’ (null) results. Interestingly, the percentage of positive results varied considerably between states (from 25% to 100%):

Percentage and 95% logit-derived confidence interval of papers published between 2000 and 2007 that supported a tested hypothesis, classified by the corresponding author's US state (sample size for each state is in parentheses).
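For reference, the logit-derived interval mentioned in the caption is a standard way to put a confidence interval on a proportion: compute the interval on the log-odds scale, where the normal approximation behaves better, then back-transform so the bounds stay inside (0, 1). A minimal sketch (the function name and the example counts are illustrative, not taken from the paper):

```python
import math

def logit_ci(successes, n, z=1.96):
    """Approximate 95% logit-derived confidence interval for a proportion.

    The interval is computed on the log-odds (logit) scale, then
    back-transformed, which keeps the bounds within (0, 1) even for
    proportions close to 0 or 1.
    """
    p = successes / n
    logit = math.log(p / (1 - p))
    se = math.sqrt(1 / (n * p * (1 - p)))  # delta-method standard error
    lo, hi = logit - z * se, logit + z * se
    inv = lambda x: 1 / (1 + math.exp(-x))  # inverse logit (back-transform)
    return inv(lo), inv(hi)

# Hypothetical state: 45 "positive" papers out of 50 sampled
lo, hi = logit_ci(45, 50)
print(f"90.0% positive, 95% CI ({lo:.1%}, {hi:.1%})")
```

Note that the back-transformed interval is asymmetric around 90% (it is wider towards 100% on the logit scale than a naive normal interval would suggest), which is exactly why this method is preferred for rates near the boundaries.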

Seemingly, >90% is a pretty impressive ‘positive’ rate (NC sits somewhere towards the upper end – good effort JB!). Interestingly though, papers were:

…more likely to support a tested hypothesis if their corresponding authors were working in states that produced more academic papers per capita.

“Positive” results by per-capita R&D expenditure in academia.

So where did all the non-results go? This doesn’t necessarily imply that the positive results were fabricated, but the sheer absence of reported negative results (things that simply didn’t work, or weren’t deemed worth writing up) is striking:

What happened to the missing negative results? As explained in the Introduction, presumably they either went completely unpublished or were somehow turned into positive through selective reporting, post-hoc re-interpretation, and alteration of methods, analyses and data.

So what does this all mean? Fanelli concludes that:

…these results support the hypothesis that competitive academic environments increase not only scientists’ productivity but also their bias. The same phenomenon might be observed in other countries where academic competition and pressures to publish are high.

Interesting.

 

One Response to Do Pressures to Publish Increase Scientists' Bias?

  1. Allen Chen says:

    The other hypothesis for these missing “negative” results is that they are NOT allowed to be published! The “positive” gangs will try their best to put in negative referee reports :-), so the journals won’t accept them, even when people do try to publish “negative” results.

    A good example from coral research is the “hybridisation” story. Do we see any papers published in the last 13 years saying there is little chance of corals hybridising at an ecological scale? The answer is of course NO! But do we have evidence showing that corals do hybridise in the field? The answer is also NO. So, we JUST believe corals will do it anyway…

    that’s life, isn’t it?
