• Impact of Including Studies from Predatory Journals in Systematic Literature Reviews

    Impact of Including Studies from Predatory Journals in Systematic Literature Reviews

    Systematic reviews are the foundation of evidence-based practice, trusted by clinicians, policymakers, and researchers to inform healthcare decisions and policies. Their reliability depends on robust and unbiased study selection, so that the final synthesized evidence reflects the highest standards of scientific review. However, with the rapid expansion of open-access publishing, academia increasingly faces the challenge of predatory journals: publications that purport to be legitimate academic outlets but lack transparent peer review, editorial oversight, and publication ethics.(1) Including studies from such journals can substantially threaten the validity of systematic reviews, thereby compromising the quality of evidence that guides policy decisions and practice.(2-4)

    Including studies from predatory journals in a systematic review severely jeopardizes the integrity of its results. Predatory journals typically accept manuscripts in exchange for article processing fees, with nominal or no peer review. Consequently, they may publish poorly designed, methodologically unsound, or even fabricated research. If such studies are incorporated into a systematic review, particularly when not flagged or critically appraised, they can introduce substantial bias, distort effect estimates, and weaken the overall quality of the evidence. This is especially dangerous in clinical settings, where recommendations based on fabricated evidence could mislead treatment decisions or health interventions.(2-4)

    These concerns are compounded by the difficulty of detecting predatory journals, because many have deceptively professional websites, impressive-looking editorial boards, and titles similar to those of reputable journals. They are increasingly indexed in questionable databases and may even appear in search results on platforms such as Google Scholar.(2) In rapid or resource-limited reviews, particularly in low- and middle-income countries, reviewers may unintentionally include articles from such journals without detailed scrutiny, especially if the review does not apply journal quality filters or assess the transparency of peer review.(2-5)

    Researchers are making efforts to address this challenge. Checklists, such as those from the Think. Check. Submit. campaign,(6) and lists like Beall’s list (no longer maintained but archived)(7) or Cabells Predatory Reports(8) give reviewers instruments to screen sources. Ultimately, however, much depends on the robustness of the systematic review protocol, which should specify exclusion criteria for predatory journals and mandate the rigorous use of quality appraisal tools. Review teams must also be equipped to identify red flags and make defensible decisions when the validity of a particular source is in doubt.(1, 2, 5)

    In conclusion, including articles from predatory journals in systematic reviews undermines academic integrity, weakens trust in science, and ultimately jeopardizes clinical decisions. Sustained vigilance, transparency, and compliance with stringent methodological standards are indispensable to protect the evidence base of systematic reviews.


    References

    1. Elmore SA, Weston EH. Predatory Journals: What They Are and How to Avoid Them. Toxicol Pathol. 2020; 48(4):607-610.
    2. Munn Z, Barker T, Stern C, et al. Should I include studies from “predatory” journals in a systematic review? Interim guidance for systematic reviewers. JBI Evid Synth. 2021; 19(8):1915-1923.
    3. Rice DB, Skidmore B, Cobey KD. Dealing with predatory journal articles captured in systematic reviews. Syst Rev. 2021 Jun 11;10(1):175.
    4. Pollock D, Barker TH, Stone JC, et al. Predatory journals and their practices present a conundrum for systematic reviewers and evidence synthesisers of health research: A qualitative descriptive study. Res Synth Methods. 2024; 15(2):257-274.
    5. Ross-White A, Godfrey CM, Sears KA, Wilson R. Predatory publications in evidence syntheses. J Med Libr Assoc. 2019; 107(1):57-61.
    6. Think. Check. Submit. [Accessed online on 9th July 2025]. Available from: https://thinkchecksubmit.org
    7. Beall J. Criteria for Determining Predatory Open-Access Publishers. [Accessed online on 9th July 2025]. Available from: https://beallslist.weebly.com/uploads/3/0/9/5/30958339/criteria-2015.pdf
    8. Cabells Predatory Reports. [Accessed online on 9th July 2025]. Available from: https://cabells.com/solutions/predatory-reports
  • Impact and Management of Retracted RCTs in Systematic Literature Reviews

    Impact and Management of Retracted RCTs in Systematic Literature Reviews

    Conducting a systematic literature review (SLR) is a detail-oriented, scrupulous process that hinges on the reliability of the included studies, particularly randomized controlled trials (RCTs), which are the gold standard for scientific evidence. When an RCT included in an SLR is later retracted, it can act like a foundational crack that compromises the entire structure.(1, 2) Unfortunately, retractions are far from rare, and neither are their consequences. A 2025 JAMA Internal Medicine meta-analysis reported that 35% of meta-analyses experienced at least a 10% change in effect estimates after retracted studies were removed, with some losing statistical significance or having their conclusions invalidated entirely.(3) The impact of retracted RCTs extends beyond academia, influencing clinical guidelines, treatment decisions, and ultimately patient care, and allowing discredited results to keep shaping recommendations long after their retraction.(1-4)

    Immediate damage assessment is the first step once a retraction is identified. This means re-running the analysis without the retracted RCT and carefully examining how the exclusion affects the results. The 2025 VITALITY Study found that excluding retracted trials reversed the direction of effects in 8.4% of meta-analyses and led to a loss of statistical significance in 16%.(5) Such shifts should prompt researchers to act quickly, especially if the findings are already published. Journals are becoming increasingly alert, as seen in the Korean Journal of Anesthesiology case, where authors were asked to conduct a full re-analysis after a retraction surfaced late in peer review.(6) Because transparency is non-negotiable, reviewers should clearly document the reason for exclusion, link to the retraction notice, and correct any summary of results accordingly.(1-4)
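
    To make the “re-run without the retracted trial” step concrete, the sketch below recomputes a simple inverse-variance, fixed-effect pooled estimate with and without a flagged study. The trial names and effect sizes are purely hypothetical, and a real review would repeat this with its own synthesis model (random effects, subgroup analyses, and so on).

```python
# Minimal sketch of a "damage assessment" re-analysis: a fixed-effect,
# inverse-variance meta-analysis recomputed with and without a retracted RCT.
# Study names and effect sizes are hypothetical placeholders.
import numpy as np

# (study, log odds ratio, standard error, retracted?)
studies = [
    ("Trial A", -0.40, 0.15, False),
    ("Trial B", -0.25, 0.20, False),
    ("Trial C", -0.90, 0.18, True),   # later retracted
]

def pooled_estimate(rows):
    effects = np.array([r[1] for r in rows])
    weights = 1.0 / np.array([r[2] for r in rows]) ** 2    # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)
    se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se  # estimate and 95% CI

full = pooled_estimate(studies)
clean = pooled_estimate([r for r in studies if not r[3]])
print(f"All studies:        logOR {full[0]:.2f} ({full[1]:.2f} to {full[2]:.2f})")
print(f"Retraction removed: logOR {clean[0]:.2f} ({clean[1]:.2f} to {clean[2]:.2f})")
```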

    It is equally crucial to correct the scientific record responsibly. If the SLR has already been published or shared as a preprint, releasing a correction, erratum, or updated version becomes an ethical obligation. In practice, however, this is the exception rather than the rule: only about 5% of systematic reviews citing later-retracted RCTs actually revise their findings. This inaction has real-world impact; as of 2024, 157 clinical guidelines still referenced meta-analyses tainted by retracted studies.(5) To avoid adding to this cycle, retracted studies should be flagged with “[RETRACTED]” in the reference list, with the reason, such as methodological errors, data fabrication, or ethical violations, explained in the text.(1-4)

    Reducing future risk safeguards the systematic review process. Retraction screening is needed at multiple checkpoints: during database searches, manuscript writing, submission, and even post-publication. Tools and databases such as Retraction Watch, PubMed’s “Publication Status” filter, and citation managers like Zotero that flag retracted papers can facilitate this process.(7) The Cochrane Handbook explicitly recommends confirming study status to avoid the unintentional inclusion of disputed information.(8) In addition, pre-defining sensitivity analyses in the systematic review protocol helps quantify the influence of individual studies, particularly large or outlier RCTs, on the overall direction and strength of the findings.(1-4)
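
    As one way to operationalize these checkpoints, the sketch below compares the DOIs of included records against a locally saved export of a retraction database (for example, a Retraction Watch CSV). The file name and DOI column name are assumptions for illustration and should be adjusted to whatever export is actually used.

```python
# Hedged sketch of a retraction check at screening or pre-submission time.
# Assumes a local CSV export of a retraction database with a DOI column;
# the file name and column name below are placeholders, not a fixed schema.
import csv

def load_retracted_dois(path, doi_column="OriginalPaperDOI"):
    with open(path, newline="", encoding="utf-8") as f:
        return {row[doi_column].strip().lower()
                for row in csv.DictReader(f) if row.get(doi_column)}

def flag_retractions(included_dois, retracted_dois):
    # Return the included records whose DOI appears in the retraction list.
    return sorted(d for d in included_dois if d.strip().lower() in retracted_dois)

# Hypothetical usage: the file name and DOIs below are made up for illustration.
retracted = load_retracted_dois("retraction_database_export.csv")
flagged = flag_retractions(["10.1000/example.123", "10.1000/example.456"], retracted)
print("Records needing manual verification:", flagged)
```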

    Furthermore, “zombie data” persist: retracted studies continue to appear in citations, exposing systemic gaps. Nearly 40% of affected meta-analyses have been found to include retracted RCTs even after their official withdrawal, largely because of inadequate screening protocols. A 2022 review similarly found that only 6% of post-retraction citations acknowledged the study’s retracted status.(1)

    Journals and databases urgently need to build dependable defences, such as automated retraction alerts and mandatory checks at submission; until then, the responsibility to remain vigilant rests with researchers. Safeguarding the reliability of evidence synthesis is about more than data management: it is about maintaining trust in science and ensuring that flawed studies stop shaping policy and clinical practice decisions.


    References

    1. Kataoka Y, Banno M, Tsujimoto Y, et al. Retracted randomized controlled trials were cited and not corrected in systematic reviews and clinical practice guidelines. J Clin Epidemiol. 2022 Oct;150:90-97.
    2. Kataoka Y, Banno M, Tsujimoto Y, et al. The impact of retracted randomised controlled trials on systematic reviews and clinical practice guidelines: a meta-epidemiological study. Journal of Clinical Epidemiology. 2022.
    3. Graña Possamai C, Cabanac G, Perrodeau E, et al. Inclusion of Retracted Studies in Systematic Reviews and Meta-Analyses of Interventions: A Systematic Review and Meta-Analysis. JAMA Intern Med. 2025;185(6):702–709.
    4. Wartolowska K. Retracted RCTs and clinical guidelines. February 2019. [Accessed online on 16th June 2025]. Available at: https://www.bennett.ox.ac.uk/blog/2019/02/retracted-rcts-and-clinical-guidelines/
    5. Xu C, Fan S, Tian Y, et al. Investigating the impact of trial retractions on the healthcare evidence ecosystem (VITALITY Study I): retrospective cohort study. BMJ 2025; 389:e082068.
    6. Choi GJ, Kang H. On the road to make KJA’s review process robust, transparent, and credible: retracted study in systematic review. Korean J Anesthesiol 2022; 75(3):197-199.
    7. Bakker C, Boughton S, Faggion CM, et al. Reducing the residue of retractions in evidence synthesis: ways to minimise inappropriate citation and use of retracted data. BMJ Evid Based Med. 2024; 29(2):121-126.
    8. Lefebvre C, Glanville J, Briscoe S, et al. Searching for and selecting studies. In: Higgins JPT, Thomas J, Chandler J, et al., eds. Cochrane Handbook for Systematic Reviews of Interventions. Chichester, UK: John Wiley & Sons, Ltd; 2019: 67-107.
  • Assessing the Reliability of Published Systematic Literature Reviews

    Assessing the Reliability of Published Systematic Literature Reviews

    Systematic Literature Reviews (SLRs) are the gold standard in evidence synthesis, occupying the pinnacle of the evidence pyramid.[1] Their trustworthiness is paramount, as SLRs frequently form the foundation of evidence-based guidelines and consensus statements.[2] Unlike narrative reviews, SLRs aim to provide a comprehensive, unbiased summary of all relevant research on a specific question, considering evidence both for and against the topic of interest. SLRs are also helpful in identifying gaps in current knowledge, thereby giving direction to future research efforts. It is therefore essential that the methods employed in conducting an SLR are robust and reliable, so that the resulting evidence can be trusted.[1,2]

    Poorly conducted or reported SLRs can have wide-ranging negative effects. Despite their aim of providing an evidence-based synthesis, SLRs at times fail to meet the rigorous standards expected of them. Critics argue that SLRs of poor methodological quality or with a high degree of bias contribute to research waste and can be misleading or serve conflicted interests.[3] Many poor-quality reviews continue to be published, even though clear guidance has long been available.[4]

    In an era of perverse academic incentives, the publication of redundant, overlapping, unreliable, or poor-quality SLRs continues relentlessly. Research has identified several issues, including redundancy, with multiple SLRs covering the same topic, often with similar conclusions. Methodological flaws are also prevalent, such as inadequate search strategies, incomplete data extraction, and poor statistical analyses. Furthermore, biased conclusions are a problem, with exaggerated or misleading interpretations of results. Poor reporting is another issue, with inadequate disclosure of methods, conflicts of interest, or funding sources. These flaws have been highlighted in various studies, but their impact is not being adequately addressed. Consolidating these findings is crucial to understanding the scale of the problem and pushing for improvements in SLR quality.[5,6] A study conducted in 2023 found that between 2000 and November 2022, at least 485 articles documented issues with published SLRs, ranging from editorials highlighting concerns over specific reviews to rigorous analyses of issues with hundreds or thousands of reviews.[7]

    To ensure systematic reviews achieve their potential as reliable sources of evidence, it is essential to implement specific measures and maintain rigorous standards. SLRs should aim to include all relevant studies. Problems arise when relevant studies are missed or ignored, which can compromise the review’s validity. These issues can stem from overly stringent inclusion criteria, exclusion of grey literature, insufficient or outdated literature searches, and language restrictions. Additionally, appropriate methods must be used to ensure the methodological soundness of the review. Errors in conducting the review or a lack of expertise can jeopardize the review’s internal validity. Issues such as data extraction errors, flawed risk of bias assessments, limited quality assessment, and failure to incorporate risk of bias into conclusions can all contribute to this.[7]

    To ensure the reproducibility of systematic literature reviews, it is essential to report their methods in sufficient detail. Poor reporting quality or inaccessible methods can hinder the ability of others to replicate the review’s findings. This is particularly problematic when reviews are used to inform important decisions. To address this issue, review authors should adhere to reporting guidelines like PRISMA 2020 (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) and register their review protocols in databases such as PROSPERO.[7, 8]

    SLRs can become outdated over time due to the rapid pace of scientific research. New studies are constantly being published, and these can introduce new evidence that may challenge or modify the findings of existing SLRs. Additionally, the context in which SLRs are conducted can change, rendering previous findings less relevant or applicable. Living SLRs, a dynamic approach to evidence synthesis, address this limitation of traditional SLRs by being continuously updated as new research emerges. This ensures that the conclusions and recommendations of the review remain relevant and accurate over time.[8]

    Tools such as AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews 2) and ROBIS (Risk Of Bias In Systematic reviews) have been developed to assess the methodological quality and risk of bias of SLRs that have already been published. Such tools evaluate published SLRs in terms of internal validity, risk of bias, and overall quality. Further, conducting double-checks of data and consulting statistical experts ensures consistency of results and validates findings, increasing confidence in the review’s conclusions. Results should be interpreted with careful consideration of quality, risk of bias, and certainty, and any limitations or gaps in the evidence base should be acknowledged. Additionally, disclosing potential conflicts of interest and managing researcher bias are critical to ensuring that SLR conclusions are not unduly influenced by conflicted parties, and that the review’s findings can be trusted by stakeholders.[8-12]

    In conclusion, the reliability of SLRs is crucial in guiding healthcare and policy decisions. As research continues to expand, ensuring the integrity and rigor of these systematic reviews is more important than ever. By following best practices, maintaining clarity, and properly applying methodological frameworks, the scientific community can safeguard the credibility of SLRs.


    References:

    1. Murad MH, Asi N, Alsawas M, Alahdab F. New evidence pyramid. Evid Based Med. 2016 Aug;21(4):125–7.
    2. Uttley L, Quintana DS, Montgomery P, Carroll C, Page MJ, Falzon L, Sutton A, Moher D. The problems with systematic reviews: a living systematic review. Journal of Clinical Epidemiology. 2023 Apr 1;156:30-41.
    3. McSharry J. What health evidence can we trust when we need it most? Cochrane News. Available from: https://www.cochrane.org/news/what-health-evidence-can-we-trust-when-we-need-it-most.
    4. Ioannidis JP. The mass production of redundant, misleading, and conflicted systematic reviews and meta‐analyses. The Milbank Quarterly. 2016 Sep;94(3):485-514.
    5. Uttley L, Montgomery P. The influence of the team in conducting a systematic review. Systematic reviews. 2017 Dec;6:1-4.
    6. Chapelle C, Ollier E, Bonjean P, Locher C, Zufferey PJ, Cucherat M, Laporte S. Replication of systematic reviews: is it to the benefit or detriment of methodological quality?. Journal of Clinical Epidemiology. 2023 Aug 28.
    7. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, Brennan SE, Chou R. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021 Mar 29;372:n71.
    8. Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, Henry DA. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017 Sep 21;358:j4008.
    9. University of Bristol. ROBIS tool [Internet]. Available from: https://www.bristol.ac.uk/population-health-sciences/projects/robis/robis-tool/
    10. Dang A, Chidirala S, Veeranki P, Vallish BN. A Critical Overview of Systematic Reviews of Chemotherapy for Advanced and Locally Advanced Pancreatic Cancer using both AMSTAR2 and ROBIS as Quality Assessment Tools. Rev Recent Clin Trials. 2021;16(2):180-192.
    11. Uttley L, Quintana DS, Montgomery P, Carroll C, Page MJ, Falzon L, Sutton A, Moher D. The problems with systematic reviews: a living systematic review. Journal of Clinical Epidemiology. 2023 Apr 1;156:30-41.
    12. Pussegoda K, Turner L, Garritty C, Mayhew A, Skidmore B, Stevens A, Boutron I, Sarkis-Onofre R, Bjerre LM, Hróbjartsson A, Altman DG. Systematic review adherence to methodological or reporting quality. Systematic reviews. 2017 Dec;6:1-4

  • The EconLit Database: Unlocking the Wealth of Health Economic Knowledge

    The EconLit Database: Unlocking the Wealth of Health Economic Knowledge

    The EconLit database has established itself as the definitive cornerstone for published economic literature. Curated by the American Economic Association (AEA), this resource has evolved significantly from its original role as a basic bibliography of economic works. With the rise of digital technology in the 1990s, EconLit became an electronic database, extending its reach and utility in response to the growing need for readily available economic information. With records dating back to the late 1980s, the database offers a vast archive of economic literature and a comprehensive historical perspective. Updated weekly, this expertly managed repository encompasses literature from prominent organizations across more than 74 countries and over 130 years.[1]

    The database includes entries from over 1,000 reputed journals, with an optional full-text package covering more than 500 of them, encompassing a diverse and comprehensive collection of economic literature. Selected by the AEA on the basis of their relevance and significance to the economic domain, these journals provide high-quality content for indexing in the database.[1]

    EconLit’s stringent indexing protocols further ensure that each record is categorized precisely with relevant subject descriptors, underpinning the reliability of the database and helping users find the information they require efficiently.[1]

    The EconLit database employs a standard classification scheme, the Journal of Economic Literature (JEL) classification system, to organize literature in the field of economics. This system is used to categorize various types of scholarly works, including articles, dissertations, books, book reviews, and working papers. Each JEL code consists of a single letter followed by two digits. For example, A00 represents “General Economics and Teaching,” A12 indicates “Relation of Economics to Other Disciplines,” and the pattern continues through to Z, where Z12 denotes “Cultural Economics: Religion,” Z13 refers to “Economic Sociology; Economic Anthropology; Social and Economic Stratification,” and Z19 signifies “Cultural Economics: Other.” This system supports accurate categorization and retrieval of economic literature within EconLit, allowing users to navigate the database effectively and locate relevant resources.[2]
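
    Because every JEL code follows the same one-letter, two-digit shape, codes can be validated programmatically; the short sketch below does exactly that, reusing the example codes quoted above.

```python
# Minimal sketch that validates JEL codes of the form described above:
# a single letter (A-Z) followed by two digits, e.g. A00, A12, Z13.
import re

JEL_PATTERN = re.compile(r"^[A-Z]\d{2}$")

def is_valid_jel(code: str) -> bool:
    return bool(JEL_PATTERN.match(code.strip().upper()))

examples = {
    "A00": "General Economics and Teaching",
    "A12": "Relation of Economics to Other Disciplines",
    "Z13": "Economic Sociology; Economic Anthropology; Social and Economic Stratification",
}
for code, label in examples.items():
    print(code, is_valid_jel(code), label)

print("I1", is_valid_jel("I1"))   # too short, so it is rejected
```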

    EconLit also uses “official” subject headings, meticulously crafted by the AEA. Paired with JEL classification codes, these subject headings help users identify relevant records accurately. Through a straightforward, user-friendly interface, EconLit allows users to perform basic and advanced keyword searches, browse by subject, and filter results by author, title, or publication date. These features, complemented by Boolean operators, enable users to combine multiple search terms to narrow down their results effectively. In addition to these search functionalities, EconLit provides abstracts for most entries, offering users a snapshot of the content before they dive into the full text. This feature is particularly useful for researchers conducting preliminary literature reviews, saving time and helping them quickly identify relevant studies.[1-3]

    In academia, EconLit is an indispensable tool for conducting economic literature reviews. It helps researchers identify gaps in existing research and formulate new research questions. Furthermore, policymakers and practitioners rely on EconLit to access the empirical research and theoretical evaluations that guide policy and strategic decision-making. Health technology assessment (HTA) bodies such as NICE in the UK often recommend EconLit as a literature source for the cost-effectiveness evidence included in industry submissions.[4]

    The interdisciplinary nature of EconLit also promotes collaboration across various fields of science. Economics frequently overlaps with other areas such as healthcare, sociology, political science, and environmental research, facilitating cross-disciplinary investigations and allowing researchers to draw connections between economics and other social sciences.[1,3]

    In summary, the EconLit database is a vital resource for researchers and policymakers in the field of health economics. It provides a comprehensive database of literature that aids critical analyses and supports evidence-based decision-making. With its vast array of peer-reviewed articles, books, and dissertations, it not only enables a deep dive into the principles of health economics but also encourages interdisciplinary insights into healthcare policy and practice. By providing access to a rich repository of empirical research and theoretical models, EconLit continues to play a critical role in advancing our understanding of the economic factors influencing health outcomes and healthcare delivery worldwide.


    References:

    1. American Economic Association.  About EconLit. Available from https://www.aeaweb.org/econlit/.
    2. American Economic Association. JEL Classification Codes. Available from https://www.aeaweb.org/econlit/jelCodes.php.
    3. EconLit Database Guide 2006, by Sharon Stillwagon, CSA Training & Information Specialist. Available from https://www.bus.umich.edu/kresgelibrary/downloads/instruction/econlit_guide.pdf.
    4. NICE. Incorporating economic evaluation. In: Developing NICE guidelines: the manual. NICE process and methods [PMG20]. Last updated 29 May 2024. Available from: https://www.nice.org.uk/process/pmg20/chapter/incorporating-economic-evaluation
  • Reporting Systematic Review of Systematic Reviews: The Significance of PRIOR Statement

    Reporting Systematic Review of Systematic Reviews: The Significance of PRIOR Statement

    In healthcare research, evidence-based decision-making is essential, elevating systematic literature reviews (SLRs) as the go-to method. These reviews meticulously assess and consolidate research on specific topics, setting the standard for evidence-based practice. Yet, with the surge in SLRs, the need for effective methods to navigate and synthesize their outcomes became crucial. This gave rise to overviews of SLRs (also referred to as umbrella reviews, meta-reviews, or cumulative reviews), offering a comprehensive view by amalgamating findings from multiple SLRs.[1]

    Umbrella reviews offer a comprehensive understanding of the current state of knowledge on a specific topic by consolidating the findings from multiple SLRs. They help identify gaps, inconsistencies, and emerging trends in the research landscape. However, ensuring the quality and transparency of overviews is crucial for their reliable interpretation and application. Though the PRISMA statement and its various extensions already provide reporting guidelines for various types of SLRs, none of the PRISMA extensions currently cater exclusively to reporting overviews of SLRs. This led to the development of the Preferred Reporting Items for Overviews of Reviews (PRIOR) statement in 2022 to provide a comprehensive framework for reporting overviews of SLRs.[1, 2]

    The PRIOR statement includes a checklist of 27 main items organized into seven sections, namely Title, Abstract, Introduction, Methods, Results, Discussion, and Other Information, each playing a critical role in ensuring a comprehensive and transparent overview of systematic reviews. The PRIOR statement recommends that the title clearly identify the report as an overview of reviews and that the abstract provide a comprehensive and accurate summary of the purpose, methods, and results of the overview. The introduction should state the rationale for conducting the overview, with explicitly stated objectives establishing the study’s context and goals.[3,4]

    The PRIOR statement emphasizes that the methods section of the umbrella review must meticulously cover eligibility criteria, information sources, selection processes, the data collection process, the list of data items, and the synthesis methods. The methods section must also describe the risk of bias assessment, reporting bias assessment, and certainty assessment. The results section should present exhaustive details, including systematic review characteristics, overlap of primary studies, risk of bias assessments, synthesized findings, and the certainty of the evidence. The discussion should critically interpret findings, highlight any discrepancies, discuss limitations, and explore implications for practice, policy, and future research. The “Other Information” section should address essential aspects such as registration, support, competing interests, author contributions, and the availability of data and materials, contributing to the overall integrity and accessibility of the umbrella review.[4]

    The PRIOR statement confers numerous benefits, serving as a crucial tool for robust reporting of umbrella reviews. By providing a standardized checklist, PRIOR ensures improved reporting quality, mitigating the risk of omitting crucial information and fostering consistency across diverse reviews. This not only enhances the overall transparency of research methodologies but also facilitates efficient replication of the review process, contributing to scientific rigor and allowing for timely updates based on emerging evidence. Moreover, PRIOR encourages the use of standardized data extraction forms and tables, streamlining the synthesis process across multiple reviews and resulting in more accurate and reliable conclusions. Ultimately, overviews adhering to PRIOR guidelines become powerful tools for informing evidence-based decision-making in clinical practice, policy formulation, and healthcare resource allocation, amplifying their impact on the healthcare landscape.[5]

    In its present iteration, the PRIOR statement supports the reporting of overviews of reviews of healthcare interventions only and might not be suitable for other types of umbrella reviews (e.g., qualitative or diagnostic test accuracy reviews). Developing extensions to the PRIOR statement, similar to the PRISMA extensions, could extend its usability to these other types of reviews as well.[5]

    As evidence synthesis continues to play a pivotal role in healthcare decision-making, the need for standardized reporting has never been more crucial. PRIOR, with its meticulous development process, tailored approach for umbrella reviews, and emphasis on transparency, contributes significantly to the advancement of evidence-based clinical decision-making. Collaboration among researchers, authors, editors, and publishers is imperative to overcome existing challenges and refine the application of reporting guidelines in the dynamic field of healthcare research.


    References

    1. Smith V, Devane D, Begley CM, Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC medical research methodology. 2011 Dec;11(1):1-6.
    2. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, Brennan SE, Chou R. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. International journal of surgery. 2021 Apr 1;88:105906.
    3. Pollock M, Fernandes RM, Pieper D, et al. Preferred Reporting Items for Overviews of Reviews (PRIOR): a protocol for development of a reporting guideline for overviews of reviews of healthcare interventions. Systematic reviews. 2019 Dec;8:1-9.
    4. Gates M, Gates A, Pieper D, et al. Reporting guideline for overviews of reviews of healthcare interventions: development of the PRIOR statement. BMJ. 2022 Aug 9;378:e070849.
    5. Yang N, Liu H, Zhang K, et al. Viewpoints on the PRIOR statement-a reporting guideline for overviews of reviews. Ann Transl Med. 2023 Mar 15;11(5):230.
  • Automation in Evidence Synthesis: Are We There Yet?

    Automation in Evidence Synthesis: Are We There Yet?

    In the ever-evolving landscape of scientific research and evidence synthesis, the demand for timely, comprehensive, and reliable information has never been greater. Decision-makers, healthcare professionals, and researchers seek up-to-the-minute insights to inform their actions and conclusions. In response to this need, the concept of living systematic literature reviews (SLRs) has emerged, ushering in a new era of continuous evidence updates. However, the question that looms large is whether automation in evidence synthesis has caught up with the pace of this dynamic endeavor.[1]

    Traditional SLRs have long been the gold standard for evidence synthesis. They involve a meticulous and often time-consuming process of gathering, appraising, and synthesizing data to provide a comprehensive overview of a particular topic. Yet, this approach is inherently static, lagging behind the ever-accelerating pace of scientific discovery. Living SLRs, on the other hand, offer a dynamic solution to this problem. These reviews are designed to evolve in parallel with the evidence being generated. They provide a continuous stream of up-to-date information, ensuring that stakeholders have access to the latest insights in real time. This approach is particularly invaluable in fields where the evidence base is rapidly changing, such as public health emergencies or emerging medical treatments.[2,3]

    While the concept of living SLRs is undoubtedly promising, it comes with its own set of challenges. Conducting and maintaining such reviews can be resource-intensive and time-consuming. Reviewers face the formidable task of constantly monitoring newly published research and integrating it into the evolving review. Moreover, the speed at which new studies are published can introduce challenges related to the quality and reliability of the evidence. To address these challenges, automation has emerged as a potential ally. Automation tools have the potential to streamline various aspects of the review process, for both traditional and living SLRs, and can help with multiple steps of an SLR, including reference retrieval, literature screening, data extraction, quality assessment, data synthesis, and reporting.[2]

    One of the most time-consuming aspects of evidence synthesis is screening references for relevance from the initial pool of hits. Many automation tools now use machine learning algorithms to prioritize likely-relevant references, expediting the screening process and reducing reviewer workload. Automation tools also employ machine learning and neural networks to extract data and predict the risk of bias for randomized controlled trials, enhancing the efficiency of data extraction and quality assessment and allowing reviewers to focus on interpreting results rather than on the mechanics of extraction. Automation further plays a crucial role in disseminating living evidence.[4-6]
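
    As an illustration of how such prioritization can work (a generic sketch, not the algorithm of any particular tool), a classifier trained on abstracts already labelled by human screeners can rank the remaining records so that likely-relevant ones surface first. The sketch assumes scikit-learn is available, and the abstracts and labels are placeholders.

```python
# Illustrative sketch of machine-learning screening prioritisation:
# train on human-labelled abstracts, then rank unlabelled records by
# predicted relevance. Texts and labels are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labelled_abstracts = [
    "randomised trial of drug X in adults with condition Y",
    "cost-effectiveness model of intervention Z in condition Y",
    "case report of a rare adverse event",
    "editorial on publication ethics",
]
labels = [1, 1, 0, 0]   # 1 = include, 0 = exclude (from human screening)

unlabelled_abstracts = [
    "pragmatic randomised controlled trial of drug X",
    "narrative commentary on peer review",
]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(labelled_abstracts), labels)

scores = model.predict_proba(vectorizer.transform(unlabelled_abstracts))[:, 1]
for score, text in sorted(zip(scores, unlabelled_abstracts), reverse=True):
    print(f"{score:.2f}  {text}")   # screen the highest-scoring records first
```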

    Crowd-sourcing platforms have the potential to alleviate the burden on reviewers by outsourcing specific review tasks to students, researchers, or interested citizens. In addition to these specific SLR steps, many automation tools have been developed as web-based applications to support SLR workflows. They can automatically search databases like PubMed, pulling in references at regular intervals based on user-defined search strategies. This feature alone significantly reduces the manual effort required for reference retrieval.[3-5]
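
    As a small example of such scheduled retrieval, the sketch below queries NCBI’s public E-utilities endpoint for PubMed records added within a date window; the search string and dates are placeholders to be replaced by the review’s own strategy, and a real workflow would run this on a schedule and de-duplicate against the existing library.

```python
# Sketch of periodic reference retrieval against NCBI's public E-utilities
# esearch endpoint for PubMed. The query and date window are placeholders.
import requests

def fetch_new_pmids(query: str, mindate: str, maxdate: str, retmax: int = 200):
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    params = {
        "db": "pubmed",
        "term": query,
        "retmode": "json",
        "retmax": retmax,
        "datetype": "edat",   # filter on the date the record was added
        "mindate": mindate,   # YYYY/MM/DD
        "maxdate": maxdate,
    }
    response = requests.get(url, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]

# Hypothetical living-review search run for a one-month window
pmids = fetch_new_pmids('"heart failure"[Title/Abstract] AND exercise[Title/Abstract]',
                        "2024/01/01", "2024/01/31")
print(len(pmids), "new records to screen")
```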

    While automation tools have made significant headway in supporting the SLR process, challenges and opportunities lie ahead. Integration between tools to facilitate data synthesis remains a notable gap. Different review topics may require tailored synthesis methods, and interoperability between tools is crucial to ensure a seamless flow of data between stages. In addition to this integration challenge, there is a pressing need to develop automation methods that can retrieve evidence from a broader range of data sources, including preprint servers. Additionally, tools must be transparent and well-validated, instilling trust in their reliability. Moreover, the legal and ethical aspects of sharing raw data, especially before formal publication, present challenges. Ensuring the quality and clarity of preprints is essential to prevent misinformation.[1]

    The increasing complexity of automation in the SLR process can potentially hamper the reproducibility of the research; novel solutions are needed to mitigate this concern and ensure consistent and replicable reviews. It is equally important to consider whether automated tools can match human reviewers’ discernment when extracting data, especially since human reviewers can uncover subtle insights and biases. That said, the capacity of current automation tools to produce human-comparable insights remains limited, particularly in nuanced tasks such as interpreting conflicting study results and evaluating qualitative research quality, where human judgment adds critical context and depth. Furthermore, compatibility with established values, such as rigor and transparency, is essential, emphasizing the need to double-check automated outputs for reproducibility and to ensure transparency for accountability. Lastly, there is pervasive skepticism and mistrust surrounding automation’s ability to replicate human judgment and value-based decisions. This skepticism underscores the necessity for human oversight and control as automation capabilities evolve.[1]

    The journey toward achieving a harmonious synergy between automation and evidence synthesis is ongoing. With each step forward, we move closer to a future where decision-makers, healthcare professionals, and researchers can access the latest evidence at the speed of discovery. While we may not be there just yet, the path ahead holds great promise for the field of evidence synthesis and its ability to inform critical decisions in an ever-changing world of knowledge.


    References

    1. Arno A, Elliott J, Wallace B, et al. The views of health guideline developers on the use of automation in health evidence synthesis. Systematic Reviews. 2021 Dec;10:1-0.
    2. Simmonds M, Elliott JH, Synnot A, et al. Living Systematic Reviews. Methods Mol Biol. 2022;2345:121-134.
    3. Schmidt L, Sinyor M, Webb RT, et al. A narrative review of recent tools and innovations toward automating living systematic reviews and evidence syntheses. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen. 2023 Aug 16.
    4. Van Altena AJ, Spijker R, Olabarriaga SD. Usage of automation tools in systematic reviews. Research synthesis methods. 2019 Mar;10(1):72-82.
    5. Marshall IJ, Wallace BC. Toward systematic review automation: a practical guide to using machine learning tools in research synthesis. Systematic reviews. 2019 Dec;8:1-0.
    6. Khalil H, Ameen D, Zarnegar A. Tools to support the automation of systematic reviews: a scoping review. Journal of Clinical Epidemiology. 2022 Apr 1;144:22-42.
  • Elevating Evidence Synthesis: Unveiling the Research Integrity Assessment (RIA) Tool

    Elevating Evidence Synthesis: Unveiling the Research Integrity Assessment (RIA) Tool

    The bedrock of evidence-based decision-making rests on the integrity of systematic reviews, which rely on the credibility of the studies they encompass. While bias assessment and risk of bias evaluation are crucial steps in ensuring study quality, the need to go beyond these measures has become increasingly evident. Enter the Research Integrity Assessment (RIA) tool, a groundbreaking approach that aims to safeguard the authenticity and reliability of studies included in evidence synthesis.[1-4]

    The RIA tool is not a mere supplement to the traditional “Risk of Bias” assessment; it is a distinct and innovative framework designed to establish the integrity and authenticity of studies. RIA meticulously scrutinizes various aspects of study conduct, ranging from retraction notices and prospective trial registration to ethics approval, authorship, and the plausibility of methods and results. By doing so, RIA addresses concerns related to scientific misconduct, poor research practices, and potential biases that may distort evidence synthesis findings.[1, 5]

    Timing is crucial when implementing RIA within evidence synthesis. RIA is best employed early in the review process, particularly for randomized controlled trials (RCTs) that have passed the initial PICO (participants, intervention, comparator, and outcomes) eligibility screening. This proactive approach allows for the early exclusion of problematic RCTs, ensuring the integrity of the entire study pool and all subsequent analyses.[5]

    The RIA workflow involves a hierarchical assessment across six domains: retraction, prospective trial registration, ethical approval and informed consent, the author group and study location, plausibility of randomization, and plausibility of study results. Findings such as a retraction notice, absence of prospective registration, inadequate ethical approval or missing written informed consent, discrepancies within the author group or study location, insufficient randomization, or implausible results lead to the exclusion of an RCT. Should concerns arise within any domain, the study is categorized as “awaiting classification,” prompting further scrutiny. If no concerns persist across all domains, or if issues are adequately resolved through correspondence with the study authors, the RCT meets the criteria for inclusion and can proceed to the next stages of the review. In living systematic reviews, both included RCTs and those labeled “awaiting classification” need to be re-evaluated for new retraction notices, culminating in a decision regarding each study’s eligibility.[5]
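
    A minimal sketch of this hierarchical decision logic is shown below; the domain labels paraphrase the six domains named in the text, and the three-level judgement scale (“no concerns”, “some concerns”, “exclude”) is an assumption made for illustration rather than the tool’s exact wording.

```python
# Sketch of the RIA-style decision flow described above. Domain names
# paraphrase the text; the judgement levels are illustrative assumptions.
DOMAINS = [
    "retraction",
    "prospective registration",
    "ethics approval and informed consent",
    "author group and study location",
    "plausibility of randomization",
    "plausibility of results",
]

def ria_decision(judgements: dict) -> str:
    """judgements maps each domain to 'no concerns', 'some concerns', or 'exclude'."""
    if any(judgements[d] == "exclude" for d in DOMAINS):
        return "exclude"
    if any(judgements[d] == "some concerns" for d in DOMAINS):
        return "awaiting classification"   # seek clarification, e.g. from the authors
    return "include"

example = {d: "no concerns" for d in DOMAINS}
example["prospective registration"] = "some concerns"
print(ria_decision(example))   # -> awaiting classification
```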

    For the RIA assessment, a collaborative and thorough approach is essential. Each study should be independently evaluated by two review authors, and any discrepancies should be resolved through discussions. A diverse team of researchers with expertise in clinical trial design, systematic review methodology, and clinical content should carry out the RIA assessment.[5]

    A hallmark of the RIA tool is its emphasis on transparency and documentation. An Excel-based format of the RIA tool contains critical signaling questions and columns for summarizing conclusions for each domain. The resulting table not only justifies the decision on research integrity and eligibility but also offers an accessible way to document the review authors’ assessments and judgments. This information should be made readily available through publication as a supplement to the systematic review or through online repositories.[5]

    The RIA tool represents a significant step towards a standardized approach for identifying and managing problematic studies within evidence synthesis. While the tool’s development was prompted by the challenges posed by the COVID-19 pandemic, its potential extends far beyond this context. As the systematic review landscape continues to evolve, the iterative refinement and validation of RIA offer a promising avenue for enhancing the credibility, reliability, and ethical soundness of evidence synthesis. The RIA tool stands as a testament to the research community’s collective commitment to maintaining research integrity and furthering the pursuit of impartial knowledge for the advancement of society’s well-being.[5]

    In the ever-evolving landscape of evidence synthesis, the Research Integrity Assessment (RIA) tool emerges as a beacon of hope. By providing a systematic and comprehensive framework to assess research integrity, RIA adds a crucial layer of protection against distorted findings and compromised recommendations.


    References

    1. Ioannidis JPA. Hundreds of thousands of zombie randomised trials circulate among us. Anaesthesia. 2021 Apr;76(4):444-447.
    2. Soares-Weiser K, Lasserson T, Jorgensen KJ, et al. Policy makers must act on incomplete evidence in responding to COVID-19. Cochrane Database Syst Rev. 2020 Nov 20;11(11):ED000149.
    3. Avenell A, Stewart F, Grey A, Gamble G, Bolland M. An investigation into the impact and implications of published papers from retracted research: systematic search of affected literature. BMJ Open. 2019 Oct 30;9(10):e031909.
    4. Higgins JP, Altman DG, Gøtzsche PC, et al. Cochrane Bias Methods Group; Cochrane Statistical Methods Group. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011 Oct 18;343:d5928.
    5. Weibel S, Popp M, Reis S, et al. Identifying and managing problematic trials: A research integrity assessment tool for randomized controlled trials in evidence synthesis. Res Synth Methods. 2023 May;14(3):357-369.
  • Matching-Adjusted Indirect Comparisons (MAICs): What, Why, and How?

    Matching-Adjusted Indirect Comparisons (MAICs): What, Why, and How?

    Meta-analysis is crucial in evidence-based medicine as it combines data from multiple studies to produce more precise treatment effect estimates. However, when head-to-head clinical trials directly comparing treatments are scarce, indirect treatment comparisons (ITCs) become valuable by offering insights through common comparators. Conventional approaches to ITCs hinge on aggregate data and assume a uniform distribution of effect-modifying variables across trials. The Matching-Adjusted Indirect Comparison (MAIC) methodology, which relaxes these assumptions, is gaining momentum, particularly in submissions to reimbursement organizations.[1-3]

    MAICs are an extension of the traditional ITC method, developed to address some of its limitations, particularly confounding by patient characteristics. MAICs attempt to make the compared treatment groups more comparable by adjusting for patient-level characteristics that may influence treatment outcomes. Within Health Technology Assessment (HTA) submissions, MAICs offer a useful complement to unadjusted ITC results, even when differences in relative treatment efficacy appear modest. The method aims to minimize bias, facilitating a fair and nuanced comparison of therapies that more closely reflects real-world scenarios.[4-6]

    MAICs are grounded in individual patient data (IPD) from the intervention trial (e.g., the manufacturer’s product) and published aggregate data from the comparator’s trial, and they balance the populations by reweighting the IPD so that patient characteristics match those reported for the comparator trial. Techniques such as propensity-score weighting estimated by the method of moments or entropy balancing play a pivotal role in achieving this balance; the reweighted IPD outcomes are then compared against the published aggregate data to estimate relative effects.[7]
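
    To make the reweighting idea tangible, the sketch below estimates method-of-moments weights in the spirit of the Signorovitch approach: IPD covariates are centred on the comparator trial’s published means, and weights of the form exp(x'b) are obtained by minimizing a convex objective so that the weighted IPD means match the aggregate means. The data are simulated placeholders with only two covariates, and no outcome model or variance estimation is shown.

```python
# Simplified sketch of MAIC weight estimation by the method of moments.
# IPD covariates are centred on the comparator trial's reported means and
# weights w_i = exp(x_i'b) are found by minimising sum_i exp(x_i'b).
# All numbers are simulated placeholders.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
ipd = rng.normal(loc=[55.0, 0.40], scale=[8.0, 0.49], size=(300, 2))  # e.g. age, proportion male
aggregate_means = np.array([60.0, 0.50])        # comparator trial's published baseline means

x_centred = ipd - aggregate_means               # centre IPD on the target means

def objective(beta):
    # Convex objective whose minimiser balances the weighted means exactly
    return np.sum(np.exp(x_centred @ beta))

beta_hat = minimize(objective, x0=np.zeros(2), method="BFGS").x
weights = np.exp(x_centred @ beta_hat)

print("Weighted IPD means:", np.average(ipd, axis=0, weights=weights))   # ~ aggregate_means
print("Effective sample size:", weights.sum() ** 2 / np.sum(weights ** 2))
```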

    MAICs predominantly operate within an “anchored” framework, often relying on a shared comparator (e.g., placebo) to ground comparisons. This approach, common in connected networks that account for randomization, shields estimations from the sway of imbalanced prognostic factors. Nonetheless, empirical evidence or clinical insight must substantiate effect modification claims. Conversely, the “unanchored” MAIC takes center stage in disconnected networks lacking a common comparator, directly juxtaposing reweighted IPD outcomes and published aggregate data. Rigorous estimates of absolute effects and vigilant control of prognostic and effect-modifying factors are prerequisites for unanchored comparisons, while lurking unobserved confounding remains challenging due to a lack of randomization. Fundamentally, anchored MAICs illuminate treatment impact, whereas unanchored variants scrutinize outcomes across trials.[6,7]
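
    For orientation, the anchored contrast can be written as a Bucher-style difference of the two trial-specific effects versus the shared comparator C, with the IPD arm reweighted as described above; the notation here is illustrative rather than taken from the cited guidance.

```latex
\hat{d}_{AB} \;=\; \hat{d}_{AC}^{\,\text{weighted IPD}} \;-\; \hat{d}_{BC}^{\,\text{aggregate}}
```

    Here A is the intervention with available IPD, B is the comparator known only through aggregate data, and C is the common anchor (e.g., placebo).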

    MAICs often have a lower risk of confounding because patients are matched on key characteristics; for the same reason, potential bias from differences between the treatment groups in the original trials is also lower with MAICs. Further, since MAIC creates a more balanced comparison by aligning patient characteristics, treatment estimates are often more robust and reliable than those from conventional ITCs. However, MAICs also have limitations pertaining to the availability of suitable IPD, the potential for selection bias in the patient data, the quality and completeness of the IPD, and challenges related to assumptions and extrapolations. While MAIC uses IPD to adjust for observed differences, unobserved disparities can lead to residual confounding. Even when placebo-arm outcomes are balanced, unobserved factors affecting treatment outcomes but not placebo outcomes can bias comparisons. Practical challenges include the need for matched outcome definitions and inclusion/exclusion criteria and the inability to fit or calibrate propensity score models using aggregate data alone. Balancing multiple baseline factors relies on an adequate number of patients with IPD, and the weighting can substantially reduce the effective sample size. MAIC may be used for single-arm trials, but the absence of a common comparator limits validation. Irreconcilable differences in trial design or patient characteristics might exclude trials from the analysis, necessitating a trade-off between including evidence and reducing heterogeneity. Sensitivity analyses are crucial for assessing the impact of trial inclusion/exclusion on the results.[8,9]

    In a landscape where clinical decision-making hinges on robust evidence, MAIC is a valuable tool, offering unique perspectives and cautionary lessons. As researchers, practitioners, and evaluators continue to explore the horizons of evidence synthesis, the pursuit of accuracy, transparency, and informed choices remains paramount. By embracing the insights and addressing the limitations of MAIC, we inch closer to a comprehensive understanding of treatment landscapes and forge a path toward more informed and patient-centered healthcare decisions.


    References

    1. Ahn E, Kang H. Introduction to systematic review and meta-analysis. Korean J Anesthesiol. 2018 Apr;71(2):103-112.
    2. Jansen JP, Fleurence R, Devine B, et al. Interpreting indirect treatment comparisons and network meta-analysis for health-care decision making: report of the ISPOR Task Force on Indirect Treatment Comparisons Good Research Practices: part 1. Value Health. 2011 Jun;14(4):417-28.
    3. Veroniki AA, Straus SE, Soobiah C, et al. A scoping review of indirect comparison methods and applications using individual patient data. BMC Med Res Methodol. 2016 Apr 27;16:47.
    4. Phillippo DM, Ades AE, Dias S, et al. Methods for Population-Adjusted Indirect Comparisons in Health Technology Appraisal. Med Decis Making. 2018 Feb;38(2):200-211.
    5. Phillippo DM, Dias S, Elsada A, et al. Population Adjustment Methods for Indirect Comparisons: A Review of National Institute for Health and Care Excellence Technology Appraisals. Int J Technol Assess Health Care. 2019 Jan;35(3):221-228.
    6. Thom H, Jugl SM, Palaka E, Jawla S. Matching adjusted indirect comparisons to assess comparative effectiveness of therapies: usage in scientific literature and health technology appraisals. Value in Health. 2016 May 1;19(3):A100-1.
    7. Petto H, Kadziola Z, Brnabic A, et al. Alternative Weighting Approaches for Anchored Matching-Adjusted Indirect Comparisons via a Common Comparator. Value Health. 2019 Jan;22(1):85-91.
    8. Signorovitch JE, Sikirica V, Erder MH, et al. Matching-adjusted indirect comparisons: a new tool for timely comparative effectiveness research. Value in Health. 2012 Sep 1;15(6):940-7.
    9. Jiang Y, Ni W. Performance of unanchored matching-adjusted indirect comparison (MAIC) for the evidence synthesis of single-arm trials with time-to-event outcomes. BMC Med Res Methodol. 2020 Sep 29;20(1):241.
  • Collaborative Approach in Conducting Systematic Literature Reviews For Evidence Synthesis

    Collaborative Approach in Conducting Systematic Literature Reviews For Evidence Synthesis

    A well-conducted systematic review and meta-analysis can be invaluable in helping clinicians stay up to date with current evidence-based medicine.(1) However, systematic reviews and meta-analyses tend to be highly focused on a specific research question and, as a result, are often not broad enough to be equally useful to all stakeholders, especially for topics of broad public health importance where policy-level decision-making involves multiple facets. Furthermore, systematic reviews are frequently conducted by small teams of researchers, usually from a single institution or a few institutions; while this can ensure quicker completion of the research, reviews produced by smaller teams can suffer from disadvantages such as a lack of diversity, limited expertise, a higher risk of bias, subjectivity and methodological errors, and an overall lack of generalizability.(2)

    A coordinated systematic review model, the “collaborative review model” proposed by Hayden et al., can address the challenges inherent in the conduct of conventional systematic reviews. The collaborative review model is a relatively new approach developed for areas with a large volume of research on a specific health condition.(3) The approach divides a single broad systematic review topic into focused sub-reviews that use homogeneous methods and tools and share data among the team members. Collaborative input on methodological decisions is supported by comprehensive guidance documents shared across the network and by multifaceted strategies for effective communication. Collaboration is further supported by a well-defined project management structure, efficient communication strategies, and the collective harnessing of resources and skills.(3)

    The collaborative review model enables team coordination and collaboration, frequent expert discussions, coordinated literature searching across a broader topic, and consistency in data handling and analytic methods. The division of large reviews into smaller, focused sub-reviews allows for increased efficiency and faster completion of reviews. By involving multiple reviewers, these reviews can minimize the risk of bias and enhance the reliability of findings. Moreover, with the use of advanced comparative and multivariable analyses, including network meta-analyses, collaborative reviews provide a comprehensive understanding of treatment effects. These analyses can offer valuable insights into treatment options and comparative effectiveness.(3)

    The collaborative review approach ensures a more thorough and accurate assessment of the evidence by incorporating standardized data collection forms and consistent data handling to address discrepancies. Through task coordination and resource sharing, the collaborative review approach optimizes the allocation of research resources and enhances the overall efficiency of evidence synthesis. By establishing standardized protocols, guidelines, and workflows, this approach ensures methodological consistency across review teams. Furthermore, collaborative reviews bring together the expertise of a large network of international collaborators, promoting capacity-building and mentorship opportunities for new reviewers.(3)

    This collaborative review approach is not without challenges. A well-coordinated review of this kind carries steeper funding requirements to manage a large team, systematic review tools, and sound project management. In addition, given the large number of contributors involved in such a review model, maintaining a transparent process for authorship and acknowledgment across the multiple outputs poses another significant challenge.(3)

    The collaborative review model has the potential to address many barriers to getting evidence into policy by drawing on the strengths of different pre-existing approaches to evidence synthesis. Combined with enhanced quality control measures, it provides a standardized approach to collating and summarising large volumes of evidence for policymakers across any policy topic area.


    References:

    1. Tawfik GM, Dila KA, Mohamed MY, et al. A step by step guide for conducting a systematic review and meta-analysis with simulation data. Tropical medicine and health. 2019 Dec;47(1):1-9.
    2. Créquit P, Trinquart L, Yavchitz A, Ravaud P. Wasted research when systematic reviews fail to provide a complete and up-to-date evidence synthesis: the example of lung cancer. BMC medicine. 2016 Dec;14(1):1-5.
    3. Hayden JA, Ogilvie R, et al. Commentary: collaborative systematic review may produce and share high-quality, comparative evidence more efficiently. Journal of Clinical Epidemiology. 2022 Sep 28.