Publication Bias—as Old as Science Itself?

Key Takeaways

  • Publication bias favors positive results, influencing clinical decision-making and skewing perceptions of medical interventions' efficacy.
  • Practical reasons, such as career advancement and publication challenges, often deter researchers from publishing negative results.

More than 100 years ago, Thomas Alva Edison said, “I have not failed. I’ve just found 10,000 ways that won’t work.” This quote emphasizes that scientific discovery is hard and that scientists may work for years or decades attempting to answer difficult questions through repeated trial and error. The likelihood of negative results greatly outweighs the likelihood of positive results in both basic and clinical research.

New England Journal of Medicine website | Image Credit: © Oleksandr - stock.adobe.com

New England Journal of Medicine website | Image Credit: © Oleksandr - stock.adobe.com

However, when reviewing the published biomedical research, it is easy to conclude that most clinical research yields positive findings, which may have an outsized influence on changing or guiding clinical decision-making. In this issue of Pharmacy Practice in Focus: Health Systems, Alana Hippensteele has contributed a timely piece on the many facets of publication bias titled “From Pasteur to Present: Historical and Contemporary Perspectives on Publication Bias in Scientific Literature.” Examples ranging from the 19th-century laboratory of Louis Pasteur to references from the current century emphasize that publication bias is certainly not a new issue; it likely dates to the foundations of the scientific method and predates the modern biomedical publishing industry. The dissemination and promotion of favorable research findings through publication, presentation, or demonstration has always taken precedence over “failed” research that did not fulfill the researcher’s hypothesis. As Hippensteele discusses, this form of publication bias risks overstating the size or direction of benefit or harm associated with a medical intervention. Hippensteele also points out that even when all the relevant evidence is published, positive studies are more likely to appear in higher-tier, indexed journals with higher impact factors, whereas negative research may appear in lower–impact factor journals that may or may not be indexed. This can strongly influence awareness of, or the ability to identify, all the relevant evidence during a routine literature search.

Although there are potentially nefarious reasons that unfavorable results may go unpublished, practical reasons likely have the greatest influence. For example, an investigator may decide not to pursue publication of unfavorable or “no-difference” results, assuming that there is a low likelihood of success; that it is not worth the time and effort needed to get through the authorship, peer review, and revision process; or that it may not further their research career or funding opportunities. The research becomes a victim of the so-called file drawer effect. In addition, there are only so many pages available to publish biomedical research, so editorial staff must be selective about what is considered for publication and mindful of the resulting influence on citations and impact factor. It is a competitive business, and biomedical publications need to be sustainable. It should also be understood that many studies with no-difference results are underpowered or have other methodologic limitations and therefore may fail the “so what” test, raising legitimate concerns about the value of dissemination. As reviewed by Montori et al, the studies least likely to be published are those with smaller sample sizes, observational designs, no significant differences, or a relatively small magnitude of difference.1

This is an important issue that all health system pharmacists should be aware of when considering evidence-based practice decisions, especially with treatments that have a reported narrow margin of benefit or a small favorable risk-benefit balance. When reviewing treatment guidelines derived from systematic reviews or meta-analyses of the published literature, it is important to pay careful attention to how the evidence was identified (ie, the search strategies employed), how the research was evaluated and selected, and whether appropriate statistical techniques were used to address potential publication bias. In the current climate of questioning and mistrust related to scientific findings and traditional medical advice, the potential role of publication bias must be openly and transparently discussed as it relates to the uncertainty it may introduce. There are clear examples from the past in which meta-analysis results likely influenced by publication bias were later not supported by large, well-designed clinical trials.1
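One statistical check for publication bias that readers of meta-analyses commonly encounter is Egger's regression test for funnel-plot asymmetry: each study's standardized effect (effect divided by its standard error) is regressed on its precision (the reciprocal of the standard error), and an intercept that departs meaningfully from zero suggests small-study asymmetry, one possible signature of publication bias. The sketch below is purely illustrative, uses synthetic data invented for the example (not data from the referenced article), and assumes NumPy and SciPy are available; real meta-analyses would typically rely on a dedicated package.

```python
import numpy as np
from scipy import stats

def eggers_test(effects, std_errors):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses each study's standardized effect (effect / SE) on its
    precision (1 / SE). An intercept far from zero is one signal of
    small-study asymmetry. Returns the intercept and a two-sided
    p-value for it.
    """
    se = np.asarray(std_errors, dtype=float)
    y = np.asarray(effects, dtype=float) / se   # standardized effects
    x = 1.0 / se                                # precision
    n = len(x)
    X = np.column_stack([np.ones(n), x])        # design: intercept + slope
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                # residual variance
    se_intercept = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    t_stat = beta[0] / se_intercept
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
    return beta[0], p_value

# Hypothetical scenario: smaller studies (larger SE) report inflated effects
rng = np.random.default_rng(0)
se = np.linspace(0.05, 0.5, 20)
effects_biased = 0.3 + 2.0 * se + rng.normal(0.0, 0.01, size=20)
intercept, p = eggers_test(effects_biased, se)
```

In this contrived scenario the intercept is pushed well above zero, so the test flags asymmetry. Note that a nonsignificant result does not rule out publication bias, particularly when the number of studies is small.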

Among many other factors, the presence of publication bias introduces a degree of uncertainty that should remind us not to be too dogmatic in our interpretation of evidence. As Albert Einstein said, “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”

REFERENCES
1. Montori VM, Smieja M, Guyatt GH. Publication bias: a brief review for clinicians. Mayo Clin Proc. 2000;75(12):1284-1288. doi:10.4065/75.12.1284
