A new study has found that some scientists are unknowingly tweaking experiments and analysis methods to increase their chances of getting results that are easily published. There is increasing concern that many published results are false positives. Many argue that current scientific practices create strong incentives to publish statistically significant (i.e., “positive”) results, and there is good evidence that journals, especially prestigious ones with higher impact factors, disproportionately publish statistically significant results.
Emma Granqvist, a publisher for plant sciences with Elsevier, argues that science needs to publish negative results in order to reduce the positive bias in the scientific literature. A few journals are dedicated to publishing negative results, for example New Negatives in Plant Science, the Journal of Negative Results in Biomedicine and the Journal of Negative Results.
In recent years there has been much debate over peer review and how it no longer ends at publication (see PubPeer). A paper published earlier this year, titled “Attention decay in science”, argues that science is drowning in too many studies: under pressure to publish and to acquire grants, researchers are producing papers faster than anyone can read them. The exponential growth in the number of scientific papers is making it increasingly difficult for researchers to keep track of all the publications relevant to their work.
The study, conducted by ANU scientists, is the most comprehensive investigation to date into a type of publication bias called p-hacking. P-hacking occurs when researchers, consciously or unconsciously, analyse their data multiple times or in multiple ways until they get the desired result. If p-hacking is common, the resulting exaggerated effects could lead to misleading conclusions, even when evidence comes from multiple studies.
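Why does re-analysing the same data many ways inflate false positives? Under the null hypothesis (no real effect), a p-value is uniformly distributed between 0 and 1, so each extra analysis is another chance to dip below 0.05. The toy simulation below (an illustration of the statistical point, not the study's method) models a researcher who runs several analyses and reports only the best p-value:

```python
import random

random.seed(1)

def run_study(n_analyses):
    # Under the null hypothesis, each analysis's p-value is
    # uniformly distributed on [0, 1].
    p_values = [random.random() for _ in range(n_analyses)]
    # A p-hacking researcher reports only the smallest p-value found.
    return min(p_values)

def false_positive_rate(n_analyses, n_studies=100_000):
    # Fraction of null studies that end up "significant" at p < 0.05.
    hits = sum(run_study(n_analyses) < 0.05 for _ in range(n_studies))
    return hits / n_studies

for k in (1, 5, 10, 20):
    # Theoretical rate is 1 - 0.95**k: ~5%, ~23%, ~40%, ~64%.
    print(f"{k:2d} analyses -> false positive rate ~ {false_positive_rate(k):.2f}")
```

With a single pre-planned analysis the error rate stays at the nominal 5%, but ten hidden analyses push it to roughly 40%, which is why selectively reported analyses make the literature look far more "positive" than the underlying data warrant.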
“We found evidence that p-hacking is happening throughout the life sciences,” said lead author Dr Megan Head from the ANU Research School of Biology. The study used text mining to extract p-values – the probability of obtaining a result at least as extreme as the one observed if chance alone were at work – from more than 100,000 research papers published around the world, spanning many scientific disciplines, including medicine, biology and psychology.
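The core of such a text-mining step is simple pattern matching: p-values are reported in fairly stereotyped forms like “p = .003” or “P < 0.05”. The snippet below is a minimal sketch of the idea using a regular expression; it is not the study's actual extraction pipeline, and the example abstract is invented:

```python
import re

# Match forms like "p = .003", "P < 0.05", "p > 0.2".
# Captures the comparison sign and the numeric value.
P_VALUE_RE = re.compile(r"p\s*([<>=])\s*(0?\.\d+)", re.IGNORECASE)

# Hypothetical abstract text for illustration only.
abstract = (
    "Treatment improved yield (p = .003), while the interaction "
    "was marginal (P < 0.05) and the control showed no effect (p > 0.2)."
)

for sign, value in P_VALUE_RE.findall(abstract):
    print(sign, float(value))
```

A real pipeline would also need to handle scientific notation, ranges, and exact versus inequality reporting, but even this crude pattern shows how p-values can be harvested at scale from paper abstracts.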
“Many researchers are not aware that certain methods could make some results seem more important than they are. They are just genuinely excited about finding something new and interesting,” Dr Head said. “I think that pressure to publish is one factor driving this bias. As scientists we are judged by how many publications we have and the quality of the scientific journals they go in. Journals, especially the top journals, are more likely to publish experiments with new, interesting results, creating incentive to produce results on demand.”
Dr Head said the study found an unusually high number of p-values that only just cleared the traditional threshold for statistical significance (p < 0.05). The concern with p-hacking is that it could get in the way of forming accurate scientific conclusions, even when scientists review the evidence by combining results from multiple studies.
This is of particular concern in the life sciences. The original study can be accessed here.