Systematic reviews (SRs) face a major challenge in identifying all relevant research, including randomised controlled trials (RCTs), irrespective of publication status.(1) Research that is unpublished, selectively reported, or not reported at all can lead to poorer-quality clinical trials and thus to suboptimal care delivery; it also forgoes opportunities for scientific progress.(2) Incomplete publication is therefore a significant problem for SRs, which aim to present a comprehensive and representative evidence pool.

SR search strategies are typically designed to find all relevant evidence that answers a particular research question(3) and to reduce bias.(4) However, it can be difficult to capture all eligible studies for an SR through a bibliographic database search alone. This may be because of poor indexing of studies in databases, failure to construct a comprehensive search strategy that includes all relevant terms, or studies being buried in grey literature. It can also be difficult to access studies embedded within a trial when looking for unpublished findings. Unpublished studies are often particularly hard to identify, and excluding them can lead to incorrect estimation of effects.(5) Further contributing to bias in SR findings, the results of about half of all RCTs are never published, and publication status is often influenced by the nature and direction of the results.(6)

Several initiatives have been undertaken to reduce publication bias. One is the trial registration policy initiated by the International Committee of Medical Journal Editors (ICMJE) in 2005,(7) followed by the US Food and Drug Administration Amendments Act mandate that, for all phase II to IV trials of drugs, devices, and biological treatments, results be posted at ClinicalTrials.gov no later than one year after the date of final data collection for the primary outcome.(8,9) Trial registration increased noticeably after these policies were implemented.(6,10) Consequently, searching trial registries is considered vital when conducting SRs.(11)
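As a practical illustration, a registry search can be partly automated. The sketch below builds a query URL for the public ClinicalTrials.gov API (version 2); the endpoint and parameter names (`query.cond`, `query.intr`, `pageSize`) reflect my understanding of that API and should be verified against the current documentation before use.

```python
from urllib.parse import urlencode

# Public ClinicalTrials.gov API v2 endpoint (verify against current docs).
BASE_URL = "https://clinicaltrials.gov/api/v2/studies"

def registry_query_url(condition, intervention, page_size=100):
    """Build a URL that searches the registry for trials matching a
    condition and an intervention, regardless of publication status."""
    params = {
        "query.cond": condition,   # condition/disease search field
        "query.intr": intervention,  # intervention search field
        "pageSize": page_size,     # results per page
    }
    return f"{BASE_URL}?{urlencode(params)}"

# Example: registered trials of metformin in type 2 diabetes.
url = registry_query_url("type 2 diabetes", "metformin")
print(url)
```

The returned results would still need de-duplication against the bibliographic database search and screening against the SR's eligibility criteria.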

Another way of finding unpublished research is to contact clinical trials units (CTUs) and investigators directly. Evidence from the literature suggests that contacting investigators can help find published and unpublished eligible studies that are otherwise difficult to locate for intervention SRs.(3,4) In a survey of UK CTUs, Brueton et al(12) communicated with colleagues working in RCT methodology, while also conducting standard searches of bibliographic databases, conference abstracts, and reference lists, to find additional studies for a Cochrane methodology SR of strategies to improve retention in RCTs. They found that principal investigators were willing to contribute results from unpublished RCTs, which contributed substantially to the overall study findings.

Personal communication has been used in past methodology reviews to identify unpublished research, and evidence from the literature suggests that email is the most effective method of contact in such situations.(4) Furthermore, researchers can explore the Studies Within A Trial (SWAT) database of evaluations of methods for RCTs to identify ongoing embedded methodology RCTs for future SRs of research methodology.(13)

To conclude, to identify unpublished research and reduce bias, researchers should look beyond the sources routinely checked by default. Contacting trial units, searching the SWAT database, and communicating directly with study investigators can uncover results that are important for critical decision-making, such as data on possible adverse outcomes.

References

  1. Baudard M, Yavchitz A, Ravaud P, et al. Impact of searching clinical trial registries in systematic reviews of pharmaceutical treatments: methodological systematic review and reanalysis of meta-analyses. BMJ. 2017; 356:j448.
  2. Knottnerus JA, Tugwell P. The potential impact of unpublished results. J Clin Epidemiol. 2013; 66(10):1061-3.
  3. Lefebvre C, Manheimer E, Glanville J. Searching for Studies. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. England: Wiley; 2008. Chapter 6 p. 95–150.
  4. Young T, Hopewell S. Methods for obtaining unpublished data. Cochrane Database Syst Rev. 2011;(11).
  5. Sterne J, Egger M, Moher D. Addressing reporting bias. In: Higgins JPT, Green S, editors. Cochrane Handbook for systematic reviews of Interventions. England: Wiley; 2008. Chapter 10 p. 297–333.
  6. Schmucker C, Schell LK, Portalupi S, et al. Extent of non-publication in cohorts of studies approved by research ethics committees or included in trial registries. PLoS One. 2014; 9(12):e114023.
  7. De Angelis C, Drazen JM, Frizelle FA, et al; International Committee of Medical Journal Editors. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. Ann Intern Med. 2004; 141:477-8.
  8. United States Congress. Food and Drug Administration Amendments Act (FDAAA) of 2007: public law no 110-85. 2007. gpo.gov/fdsys/pkg/PLAW-110publ85/pdf/PLAW-110publ85.pdf
  9. Groves T. Mandatory disclosure of trial results for drugs and devices. BMJ. 2008; 336:170.
  10. Laine C, Horton R, DeAngelis CD, et al. Clinical trial registration: looking back and moving ahead. JAMA. 2007; 298:93-4.
  11. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009; 339:b2535.
  12. Brueton V, Tierney JF, Stenning S, Rait G. Identifying additional studies for a systematic review of retention strategies in randomised controlled trials: making contact with trials units and trial methodologists. Syst Rev. 2017; 6(1):167.
  13. Education section – Studies Within A Trial (SWAT). Journal of Evidence-Based Medicine. 2012; 5(1):44–5.