• Living SLRs: an Approach to Enhance Accuracy and Recency of SLRs

    Living systematic literature reviews (SLRs) are SLRs that are continually updated, with relevant new evidence incorporated periodically as it becomes available. SLRs are often considered to occupy the top of the evidence pyramid because they synthesize evidence from different sources and present a summary of that evidence, thus enabling clinical and policy-level decision-making.

    It is therefore essential that SLRs are of high quality and updated to include the latest available information.(1) Traditional SLRs published in high-quality journals can be expected to be of high quality, but they lag on the ‘updated’ front because they represent static snapshots of the evidence at the time of publication.(2) As new evidence emerges in a field, some of the recommendations in a previously published SLR may become outdated, challenging the validity of guidelines that were developed using that SLR.(3)
    Thus, while updating an SLR is difficult, failure to do so lowers its accuracy and recency.(4)

    Living SLRs are an approach that aims to resolve this problem. Living SLRs are high-quality, up-to-date, sometimes online, evidence summaries that help identify new trends and developments in the field. A living SLR involves regular literature screening (e.g., monthly), through which newly identified studies are added to the review. Summary measures such as meta-analysis estimates are then recomputed with the new study results, leading to updated findings and conclusions.(5)
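
    The update cycle just described can be sketched in a few lines of code. The following is an illustrative Python sketch, not any specific living-SLR tool: a fixed-effect inverse-variance meta-analysis is simply recomputed whenever newly screened studies are appended to the review (the effect sizes and standard errors below are hypothetical).

```python
import math

def pooled_estimate(studies):
    """Fixed-effect inverse-variance meta-analysis.

    Each study is a (effect_size, standard_error) pair; the weight of a
    study is 1/se^2. Returns the pooled effect and its standard error.
    """
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Initial review: two hypothetical studies (effect size, standard error).
review = [(0.40, 0.10), (0.55, 0.20)]
print(pooled_estimate(review))

# A monthly screening round finds one new eligible study:
# append it and simply re-pool to refresh the review's summary estimate.
review.append((0.30, 0.15))
print(pooled_estimate(review))
```

    Because the weights are 1/se², each re-pooling automatically down-weights imprecise new studies, and the pooled standard error shrinks as evidence accumulates.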

    Living SLRs are prepared following a review process similar to that of regular SLRs; however, after the initial publication, the literature is monitored and new results are incorporated as they become available. Continuous monitoring makes it possible to offer the most recent data at all times and supports validation of earlier conclusions against the latest findings in the field. This ensures that clinical recommendations, which are largely based on SLRs, benefit from the most recent clinical data.(5)

    Since living SLRs necessitate a continuous workflow, the effort required is moderate, coordinated over long periods, and accompanied by a gradual evolution of the review team, as opposed to the intensive, sporadic effort of standard SLRs and traditionally updated SLRs. Approaches such as machine learning (e.g., Cochrane’s RCT Classifier) and citizen science (e.g., Cochrane Crowd) are often used to expedite evidence screening.(6)

    Recently, especially for living SLRs available online, there have been efforts to improve data visualization and relevance, and thereby the user experience, through the use of AI. Recent innovations allow users to select the outcome of interest via features such as interactive portals, user-friendly platforms, customizable inclusion criteria, and automatically scheduled updates.(7)

    Living SLRs also have certain challenges; probably the most important relate to workload, as they require a larger investment than traditional SLRs. An equally challenging concern is the need to engage a large, dedicated team to work constantly on the updates, including tracking ongoing studies, locating full-text articles, chasing trial authors for data, screening articles, managing data, and updating the PRISMA flow diagram and results tables. For offline (published) living SLRs, editors need to set up peer review in advance to prevent delays, which can also be challenging.(8) Republishing reviews and triggering a new DOI may also negatively affect citation counts and impact factors. Finally, the continuous workflow entails frequent statistical analyses, which can inflate the false-positive rate.(8)
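
    The last point can be made concrete with a small simulation (illustrative only; the batch sizes and test below are hypothetical, and real living SLRs use formal sequential meta-analysis methods to handle this). Under a true null effect, re-running a conventional 5% significance test after each update gives several chances for a false positive, so the overall error rate climbs well above 5%:

```python
import random
import statistics

def ever_significant(n_looks, batch, z_crit=1.96, rng=None):
    """Simulate one study stream under the null (true mean 0, sd 1).

    A z-test is run after each new batch of data arrives; returns True
    if any interim test crosses the two-sided 5% threshold.
    """
    rng = rng or random
    data = []
    for _ in range(n_looks):
        data.extend(rng.gauss(0.0, 1.0) for _ in range(batch))
        z = statistics.fmean(data) * len(data) ** 0.5  # known sd = 1
        if abs(z) > z_crit:
            return True
    return False

rng = random.Random(42)
n_sim = 2000
# Same total sample size (100), analysed once vs. re-analysed five times.
single_look = sum(ever_significant(1, 100, rng=rng) for _ in range(n_sim)) / n_sim
five_looks = sum(ever_significant(5, 20, rng=rng) for _ in range(n_sim)) / n_sim
print(single_look, five_looks)  # repeated looks inflate the type I error
```

    Sequential approaches such as trial sequential analysis counter this by tightening the significance threshold at each interim look so that the overall error rate stays controlled.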

    With more research published in the scientific literature over the past few years, the pool of eligible studies for any particular SLR is expected to grow over time. With the advent of new technology to improve healthcare processes, living SLRs are proving to be a realistic approach for keeping SLRs up to date.(9) Although automation in the form of AI is increasingly used to speed up study screening, human oversight remains essential for ensuring high-quality screening and data extraction from relevant studies.

    References

    1. Garner P, Hopewell S, Chandler J, et al. When and how to update systematic reviews: consensus and checklist. BMJ. 2016 Jul 20;354:i3507. doi: 10.1136/bmj.i3507. Erratum in: BMJ. 2016 Sep 06;354:i4853.
    2. White A, Schmidt K. Systematic literature reviews. Complement Ther Med. 2005 Mar;13(1):54-60. doi: 10.1016/j.ctim.2004.12.003.
    3. Shojania KG, Sampson M, Ansari MT, et al. How quickly do systematic reviews go out of date? A survival analysis. Ann Intern Med. 2007 Aug 21;147(4):224-33. doi: 10.7326/0003-4819-147-4-200708210-00179.
    4. Simmonds M, Elliott JH, Synnot A, Turner T. Living Systematic Reviews. Methods Mol Biol. 2022;2345:121-134. doi: 10.1007/978-1-0716-1566-9_7.
    5. Akl EA, Meerpohl JJ, Elliott J, et al.; Living Systematic Review Network. Living systematic reviews: 4. Living guideline recommendations. J Clin Epidemiol. 2017 Nov;91:47-53. doi: 10.1016/j.jclinepi.2017.08.009.
    6. Noel-Storr A. Working with a new kind of team: harnessing the wisdom of the crowd in trial identification. EFSA J. 2019 Jul 8;17(Suppl 1):e170715.
    7. Cytel. LiveSLR. https://www.cytel.com/live-slr
    8. Millard T, Synnot A, Elliott J, et al. Feasibility and acceptability of living systematic reviews: results from a mixed-methods evaluation. Syst Rev. 2019 Dec 14;8(1):325. doi: 10.1186/s13643-019-1248-5.
    9. Thomas J, Noel-Storr A, Marshall I, et al.; Living Systematic Review Network. Living systematic reviews: 2. Combining human and machine effort. J Clin Epidemiol. 2017 Nov;91:31-37. doi: 10.1016/j.jclinepi.2017.08.011.
  • The STaRT RWE Template: Improving Reporting of RWE Studies

    Improvements in healthcare digitalization and accelerated regulatory approvals of novel interventions have expanded the possibilities for gathering real-world data (RWD) and using the resultant real-world evidence (RWE) to assess the generalizability, effectiveness, and safety of interventions and medical devices, assisting healthcare decision-makers and policymakers.[1-5] RWE helps establish clinical guidelines, perform risk assessments for medication safety, improve market access, and undertake various epidemiological evaluations. However, RWE studies are typically collaborative and interdisciplinary, involve multiple databases, and rely on data generally collected for non-research purposes. As a result, such studies vary considerably in study designs and analytical parameters. This challenges the reliability of RWE and calls for innovations to ensure the robustness of RWE research methodologies and outcomes.[6]

    The lack of a uniform structure for reporting RWE studies has also required significant effort from regulatory organizations to assess such studies. To facilitate critical evaluation of published RWE studies, as well as appropriate conduct of ongoing ones, several checklists, methods, and guidelines have been developed. Notable examples include the Assessment of Real-World Observational Studies (ArRoWS) critical appraisal tool and the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP) checklist, among others.[7-10] Nevertheless, these guidelines and checklists are generic, leaving room for ambiguity, presumptions, and incorrect interpretation when planning RWE studies. This prompted the development of the Structured Template and Reporting Tool for RWE (STaRT-RWE) in 2021.[6]

    The driving force behind the development of STaRT-RWE was the realization that reproducible research requires clear communication of scientific methods and results. The template was developed through a collaboration between ISPOR (the Professional Society for Health Economics and Outcomes Research) and the International Society for Pharmacoepidemiology (ISPE).[6] It is structured so that it can be adapted to a variety of RWD sources and RWE study designs, and it helps address methodological problems in RWE studies. The template supports the design, execution, and evaluation of RWE studies to aid healthcare decision-making.

    The template uses structured tables and figures to show the user where, what, and how to report the specifics of study implementation.[6] Structured tables facilitate communication by making it easy to locate relevant data on important study variables, which helps both research teams and end users. These tables also establish how those parameters were measured and set clear expectations for what must be reported about study implementation. The template also allows sharing of code, algorithms, and data for computational reproducibility.

    STaRT-RWE complements other available checklists by offering a framework that helps researchers be explicitly thorough in research planning, implementation, and communication.[6] Unlike a checklist, the template minimizes unclear writing and the possibility of misinterpretation by employing tabular and graphic representations.

    The STaRT-RWE template has certain limitations. Although it is meant to be flexible, the rigidity of its tables may not suit every study, and depending on the investigation, only a subset of the tables may apply. Using the study implementation template to guide research design does not guarantee unbiased results; however, clear explanations of how results were obtained and what methods were employed to counteract potential biases can considerably aid reviewers in correctly interpreting them. Furthermore, the template’s primary emphasis is on choices related to study implementation. Despite including a section on data sources, its fields do not fully capture the details required to determine whether the data are fit for purpose.[6] Nevertheless, the template serves as a user guide for reproducing RWE studies, facilitating repeatability, validity evaluation, and evidence synthesis for efficient decision-making.

    The STaRT-RWE template can potentially enable researchers to adhere to the standards established by professional organizations for conducting and publishing RWE research, thereby minimizing the ambiguity and misinterpretation of using non-standard terminologies in RWE studies. This in turn can allow regulators and decision-makers to use RWE to the fullest possible extent to improve patient access to safe and effective medicines.

    References

    1. Katkade VB, Sanders KN, Zou KH. Real-world data: an opportunity to supplement existing evidence for the use of long-established medicines in health care decision making. J Multidiscip Healthc. 2018 Jul 2;11:295-304. doi: 10.2147/JMDH.S160029.
    2. Jaksa A, Mahendraratnam N. Learning from the past to advance tomorrow’s real-world evidence: what demonstration projects have to teach us. J Comp Eff Res. 2021 Nov;10(16):1169-1173. doi: 10.2217/cer-2021-0166.
    3. Hampson G, Towse A, Dreitlein WB, Henshall C, Pearson SD. Real-world evidence for coverage decisions: opportunities and challenges. J Comp Eff Res. 2018 Dec;7(12):1133-1143. doi: 10.2217/cer-2018-0066.
    4. Corrigan-Curay J, Sacks L, Woodcock J. Real-World Evidence and Real-World Data for Evaluating Drug Safety and Effectiveness. JAMA. 2018 Sep 4;320(9):867-868. doi: 10.1001/jama.2018.10136
    5. Sherman RE, Anderson SA, Dal Pan GJ, et al. Real-World Evidence – What Is It and What Can It Tell Us? N Engl J Med. 2016 Dec 8;375(23):2293-2297. doi: 10.1056/NEJMsb1609216.
    6. Wang SV, Pinheiro S, Hua W, et al. STaRT-RWE: structured template for planning and reporting on the implementation of real world evidence studies. BMJ. 2021 Jan 12;372:m4856. doi: 10.1136/bmj.m4856.
    7. Kurz X, Perez-Gutthann S; ENCePP Steering Group. Strengthening standards, transparency, and collaboration to support medicine evaluation: Ten years of the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCePP). Pharmacoepidemiol Drug Saf. 2018 Mar;27(3):245-252. doi: 10.1002/pds.4381.
    8. Coles B, Tyrer F, Hussein H, Dhalwani N, Khunti K. Development, content validation, and reliability of the Assessment of Real-World Observational Studies (ArRoWS) critical appraisal tool. Ann Epidemiol. 2021 Mar;55:57-63.e15. doi: 10.1016/j.annepidem.2020.09.014.
    9. Langan SM, Schmidt SA, Wing K, et al. The reporting of studies conducted using observational routinely collected health data statement for pharmacoepidemiology (RECORD-PE). BMJ. 2018 Nov 14;363:k3532. doi: 10.1136/bmj.k3532.
    10. Allen A, Patrick H, Ruof J, et al. Development and Pilot Test of the Registry Evaluation and Quality Standards Tool: An Information Technology-Based Tool to Support and Review Registries. Value Health. 2022 Aug;25(8):1390-1398. doi: 10.1016/j.jval.2021.12.018.