#MarksManInsights

Automation in Evidence Synthesis: Are We There Yet?

In the ever-evolving landscape of scientific research and evidence synthesis, the demand for timely, comprehensive, and reliable information has never been greater. Decision-makers, healthcare professionals, and researchers need current insights to inform their actions and conclusions. In response, the concept of living systematic literature reviews (SLRs) has emerged, promising continuous evidence updates. The question that looms large, however, is whether automation in evidence synthesis has kept pace with this dynamic endeavor.[1]

Traditional SLRs have long been the gold standard for evidence synthesis. They involve a meticulous and often time-consuming process of gathering, appraising, and synthesizing data to provide a comprehensive overview of a topic. Yet this approach is inherently static, lagging behind the accelerating pace of scientific discovery. Living SLRs, on the other hand, offer a dynamic alternative: they are designed to evolve in parallel with the evidence being generated, providing a continuous stream of up-to-date information so that stakeholders have access to the latest insights in near real time. This approach is particularly valuable in fields where the evidence base changes rapidly, such as public health emergencies or emerging medical treatments.[2,3]

While the concept of living SLRs is undoubtedly promising, it comes with its own set of challenges. Conducting and maintaining such reviews can be resource-intensive and time-consuming. Reviewers face the formidable task of constantly monitoring newly published research and integrating it into the evolving review. Moreover, the speed at which new studies are published can introduce challenges related to the quality and reliability of the evidence. To address these challenges, automation has emerged as a potential ally. Automation tools can streamline multiple steps of the review process, for both traditional and living SLRs, including reference retrieval, literature screening, data extraction, quality assessment, data synthesis, and reporting.[2]

One of the most time-consuming aspects of evidence synthesis is screening references for relevance from an initial pool of potential hits. Many automation tools implement machine learning algorithms that prioritize likely relevant references, expediting screening and reducing reviewer workload. Other tools employ machine learning and neural networks to extract data and predict the risk of bias in randomized controlled trials, enhancing the efficiency of data extraction and quality assessment and allowing reviewers to focus on interpreting results rather than on the mechanics of extraction. Automation also plays a crucial role in disseminating living evidence.[4-6]
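To make the prioritization idea concrete, the sketch below ranks unscreened abstracts using a simple log-odds term score learned from abstracts a reviewer has already included or excluded. This is a toy stand-in for the machine learning models inside real screening tools, not any specific tool's algorithm; the function names and sample abstracts are invented for illustration.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase an abstract and split it into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def term_weights(included, excluded):
    """Learn a log-odds weight per term from already-screened abstracts
    (Laplace-smoothed so terms seen in only one class stay finite)."""
    inc = Counter(t for doc in included for t in set(tokenize(doc)))
    exc = Counter(t for doc in excluded for t in set(tokenize(doc)))
    weights = {}
    for term in set(inc) | set(exc):
        p_inc = (inc[term] + 1) / (len(included) + 2)
        p_exc = (exc[term] + 1) / (len(excluded) + 2)
        weights[term] = math.log(p_inc / p_exc)
    return weights

def prioritize(unscreened, weights):
    """Order unscreened abstracts so the likeliest includes come first."""
    def score(doc):
        return sum(weights.get(t, 0.0) for t in set(tokenize(doc)))
    return sorted(unscreened, key=score, reverse=True)

# Invented toy abstracts standing in for real screening decisions.
included = ["Randomized trial of drug A for hypertension",
            "Randomized controlled trial of statin therapy"]
excluded = ["Editorial on hospital funding policy",
            "News report on conference attendance"]
unscreened = ["Randomized trial of drug B therapy",
              "Opinion piece on funding"]

ranked = prioritize(unscreened, term_weights(included, excluded))
print(ranked[0])  # the trial-like abstract is ranked first
```

In a real tool this loop is active: the reviewer screens the top-ranked records, the model retrains on the new decisions, and the queue is re-ordered, which is how the reported workload savings arise.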

Crowd-sourcing platforms have the potential to alleviate the burden on reviewers by outsourcing specific review tasks to students, researchers, or interested citizens. Beyond these individual SLR steps, many automation tools have been developed as web-based applications that support the full SLR workflow. They can automatically search databases such as PubMed, pulling in references at regular intervals based on user-defined search strategies; this feature alone significantly reduces the manual effort required for reference retrieval.[3-5]
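As a sketch of what scheduled reference retrieval looks like under the hood, the snippet below builds a query URL for NCBI's public E-utilities esearch endpoint, restricted to records indexed within a date window; a scheduler could request such a URL at regular intervals. The search string and dates are placeholders, and a real pipeline would also handle API keys, paging, and parsing of the returned PMIDs.

```python
from urllib.parse import urlencode

# NCBI E-utilities search endpoint (public, documented by NCBI).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_update_search(query, since, until, retmax=200):
    """Build an esearch URL limited to records with an Entrez date
    (datetype=edat) inside the [since, until] window, YYYY/MM/DD."""
    params = {
        "db": "pubmed",
        "term": query,          # user-defined search strategy
        "datetype": "edat",     # filter on the date the record was added
        "mindate": since,
        "maxdate": until,
        "retmax": retmax,       # cap on the number of returned PMIDs
        "retmode": "json",
    }
    return ESEARCH + "?" + urlencode(params)

# Placeholder strategy: run weekly with a moving date window.
url = build_update_search(
    '"systematic review"[Title] AND automation', "2024/01/01", "2024/01/31")
print(url)
```

Running the same strategy on a schedule and deduplicating against the existing library is essentially what the web-based tools automate for the reviewer.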

While automation tools have made significant headway in supporting the SLR process, challenges and opportunities lie ahead. Integration between tools to facilitate data synthesis remains a notable gap. Different review topics may require tailored synthesis methods, and interoperability between tools is crucial to ensure a seamless flow of data between stages. In addition to this integration challenge, there is a pressing need to develop automation methods that can retrieve evidence from a broader range of data sources, including preprint servers. Additionally, tools must be transparent and well-validated, instilling trust in their reliability. Moreover, the legal and ethical aspects of sharing raw data, especially before formal publication, present challenges. Ensuring the quality and clarity of preprints is essential to prevent misinformation.[1]

The increasing complexity of automation in the SLR process can hamper the reproducibility of the research; novel solutions are needed to mitigate this concern and ensure consistent, replicable reviews. It is equally important to ask whether automated tools can match human reviewers' discernment in extracting data, since human reviewers can uncover subtle insights and biases. Producing human-comparable insights remains a challenge for current tools, particularly in nuanced tasks such as interpreting conflicting study results and evaluating the quality of qualitative research, where human judgment adds critical context and depth. Furthermore, compatibility with established values such as rigor and transparency is essential, emphasizing the need to double-check automated outputs for reproducibility and to ensure transparency for accountability. Lastly, there is pervasive skepticism about automation's ability to replicate human judgment and value-based decisions, underscoring the necessity for human oversight and control as automation capabilities evolve.[1]

The journey toward achieving a harmonious synergy between automation and evidence synthesis is ongoing. With each step forward, we move closer to a future where decision-makers, healthcare professionals, and researchers can access the latest evidence at the speed of discovery. While we may not be there just yet, the path ahead holds great promise for the field of evidence synthesis and its ability to inform critical decisions in an ever-changing world of knowledge.


References

  1. Arno A, Elliott J, Wallace B, et al. The views of health guideline developers on the use of automation in health evidence synthesis. Systematic Reviews. 2021;10.
  2. Simmonds M, Elliott JH, Synnot A, et al. Living Systematic Reviews. Methods Mol Biol. 2022;2345:121-134.
  3. Schmidt L, Sinyor M, Webb RT, et al. A narrative review of recent tools and innovations toward automating living systematic reviews and evidence syntheses. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen. 2023.
  4. Van Altena AJ, Spijker R, Olabarriaga SD. Usage of automation tools in systematic reviews. Research Synthesis Methods. 2019;10(1):72-82.
  5. Marshall IJ, Wallace BC. Toward systematic review automation: a practical guide to using machine learning tools in research synthesis. Systematic Reviews. 2019;8.
  6. Khalil H, Ameen D, Zarnegar A. Tools to support the automation of systematic reviews: a scoping review. Journal of Clinical Epidemiology. 2022;144:22-42.
