• An Overview of Methods for Indirect Treatment Comparisons in Healthcare Decision-making


    Meta-analyses summarize data from head-to-head trials to evaluate pairs of treatments that have been directly compared.[1] However, in certain circumstances, multiple therapies are of interest but no data are available from direct comparisons. In such cases, an indirect treatment comparison (ITC) is performed to synthesize the evidence on the treatments of interest.[2] ITC assumes that the included studies are similar and homogeneous with respect to the administered therapies, patient characteristics, and observed effects, and it works best when inconsistency between indirect and direct evidence is minimal or absent.

    Various ITC methods have been developed depending on whether individual patient data (IPD) or only summary-level data (SLD) are available. These methods include naïve ITC, network meta-analysis (NMA), and population-adjusted indirect comparisons (PAIC), the latter comprising simulated treatment comparisons (STCs) and matching-adjusted indirect comparisons (MAICs). The choice among these methods depends on the study designs, the number of available comparators, and how the outcomes are measured, as well as on the assumptions, methodological limitations, and inherent biases associated with each method.

    Naïve ITC is based on SLD and is used when treatments cannot be connected through a common comparator. It compares single arms across trials, does not account for heterogeneity, and discards the information in the placebo or control arms, thereby introducing bias. Hence, this method is generally avoided, as it does not preserve the within-trial randomization during analysis.

    Network meta-analysis (NMA) is perhaps the most popular of the ITC methods. It works with SLD and compares treatments by combining direct and indirect evidence across a connected network of studies.[3] NMA is considered the gold standard for ITCs. It offers a more precise estimate of the relative effects of the treatments in the network than any single direct or indirect estimate, and it also enables the assessment of intervention ranking and hierarchy. However, heterogeneity across trials can bias these estimates. To some extent, this bias can be reduced by meta-regression, which addresses heterogeneity in treatment effects by assessing how the effect of a treatment changes with a covariate (a patient or methodological attribute). Unfortunately, meta-regression within an NMA becomes questionable when the number of studies in the network is limited. Furthermore, this approach can only be used when there is variation across studies or comparisons in the effect modifiers.[4,5] Moreover, covariate adjustment based on aggregate-level data may result in ecological bias, which limits the interpretation of estimated results for subgroups. In such cases, individual patient data (IPD) allow adjustment for the covariates that cause inconsistency (e.g., prognostic factors and effect modifiers). Hence, an NMA that leverages IPD can be used to conduct analyses that adjust for, and thereby reduce, such inconsistencies.[6]
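
    To make the underlying calculation concrete, the sketch below (in Python, purely for illustration) shows the anchored, Bucher-style indirect comparison that an NMA generalizes across an entire network: two treatments A and B, each compared with a common comparator C in separate randomized trials, are compared indirectly on the log odds ratio scale. All estimates and standard errors here are hypothetical.

    ```python
    import math

    def bucher_itc(d_ac, se_ac, d_bc, se_bc):
        """Anchored (Bucher) indirect comparison of A vs B via a common comparator C.

        d_ac, d_bc : relative effects (e.g., log odds ratios) of A vs C and B vs C
                     taken from separate randomized trials.
        Returns the indirect estimate of A vs B, its standard error, and a 95% CI.
        """
        d_ab = d_ac - d_bc                          # indirect relative effect
        se_ab = math.sqrt(se_ac**2 + se_bc**2)      # variances add across independent trials
        ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
        return d_ab, se_ab, ci

    # Hypothetical log odds ratios from two placebo-controlled trials
    est, se, ci = bucher_itc(d_ac=-0.45, se_ac=0.15, d_bc=-0.20, se_bc=0.18)
    print(f"log OR (A vs B) = {est:.2f}, SE = {se:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
    ```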

    The application of NMAs and their associated methods is often limited by sparse evidence networks and heterogeneity across trials. This can be resolved to a certain extent through population-adjusted indirect comparison (PAIC), a targeted approach to enhancing ITC.[7] PAIC helps overcome the challenges faced by NMAs by carrying out a targeted comparison of outcomes between specific treatments while adjusting for specific factors. It includes two methods: simulated treatment comparisons (STCs) and matching-adjusted indirect comparisons (MAICs). Both reduce ambiguity in the comparison through statistical adjustment: STCs do this by applying predictive equations, whereas MAICs rely on reweighting individual patients.
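
    As a rough illustration of the reweighting idea behind MAIC, the sketch below estimates weights by the method of moments so that the weighted means of the effect modifiers in the trial with IPD match the means published for the comparator trial, and then reports the effective sample size after weighting. The covariates, published means, and sample size are invented for illustration, and the estimation shown is one common approach rather than the only one.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def maic_weights(X_ipd, target_means):
        """Estimate MAIC weights by the method of moments.

        X_ipd        : (n, p) array of effect modifiers from the trial with IPD
        target_means : (p,) means of the same variables reported for the comparator trial
        Returns weights whose weighted covariate means match `target_means`,
        together with the effective sample size after weighting.
        """
        Xc = X_ipd - target_means                        # centre on the target population
        objective = lambda a: np.sum(np.exp(Xc @ a))     # convex; its gradient encodes the moment conditions
        alpha = minimize(objective, x0=np.zeros(Xc.shape[1]), method="BFGS").x
        w = np.exp(Xc @ alpha)
        ess = w.sum() ** 2 / np.sum(w ** 2)              # effective sample size after weighting
        return w, ess

    # Illustrative data: age and proportion male in the IPD trial vs. published comparator means
    rng = np.random.default_rng(0)
    X = np.column_stack([rng.normal(62, 8, 300), rng.binomial(1, 0.55, 300)])
    w, ess = maic_weights(X, target_means=np.array([65.0, 0.60]))
    print(round(ess, 1), np.average(X, axis=0, weights=w))  # weighted means ≈ [65, 0.60]
    ```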

    STCs and MAICs can be used to conduct either an “anchored” indirect comparison, in which each trial has a common comparator arm, or an “unanchored” indirect comparison, in which the treatment network is disconnected (e.g., single-arm studies). An anchored approach relies on the assumption of “conditional constancy of relative effects”, whereas an unanchored approach rests on the assumption of “conditional constancy of absolute effects”, which is more demanding and not widely accepted.[8] STCs are often appropriate in analyses where numerous comparators are available for a small set of outcomes, whereas MAICs are often suitable when there is only one comparator but multiple outcomes. The reliability of these studies depends on the accuracy of the predictive equations in an STC and on how effectively the populations are matched in a MAIC.
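
    To make the “predictive equation” idea in an STC concrete, the sketch below outlines an anchored STC for a continuous outcome under simplifying assumptions: an outcome regression with a treatment-by-covariate interaction is fitted in the IPD (A vs C) trial, the effect modifier is centred on the comparator trial’s published mean, and the resulting A vs C effect is combined with the published B vs C estimate. Variable names and inputs are hypothetical.

    ```python
    import numpy as np

    def stc_anchored(y, treat, x, target_mean_x, d_bc):
        """Sketch of an anchored simulated treatment comparison (STC), continuous outcome.

        y, treat, x   : IPD from the A-vs-C trial (outcome, 1 = A / 0 = C, an effect modifier)
        target_mean_x : mean of the effect modifier reported for the comparator (B vs C) trial
        d_bc          : published relative effect of B vs C on the same scale
        """
        xc = x - target_mean_x                                    # centre on the target population
        X = np.column_stack([np.ones_like(xc), treat, xc, treat * xc])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)              # outcome regression with interaction
        d_ac_target = beta[1]      # A vs C effect predicted at the comparator trial's covariate mean
        return d_ac_target - d_bc  # anchored indirect estimate of A vs B in the target population

    # Hypothetical IPD and a hypothetical published B-vs-C effect
    rng = np.random.default_rng(1)
    treat = rng.integers(0, 2, 200)
    x = rng.normal(60, 10, 200)
    y = 1.0 * treat + 0.05 * x - 0.02 * treat * (x - 60) + rng.normal(0, 1, 200)
    print(stc_anchored(y, treat, x, target_mean_x=65.0, d_bc=0.4))
    ```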

    A task force report released in 2011 by the Professional Society for Health Economics and Outcomes Research (ISPOR) sets out the fundamentals of conducting ITCs and of appraising such studies for informed and efficient decision-making.[4,5] Although the methodological aspects of NMAs have received much attention from researchers, the other ITC methods are yet to be refined to the same extent. Standardizing these methods is vital to increasing their reliability and uptake.


    References

    [1] Ahn E, Kang H. Introduction to systematic review and meta-analysis. Korean J Anesthesiol. 2018;71(2):103-112. doi: 10.4097/kjae.2018.71.2.103.
    [2] Veroniki AA, Straus SE, Soobiah C, et al. A scoping review of indirect comparison methods and applications using individual patient data. BMC Med Res Methodol. 2016;16:47. doi: 10.1186/s12874-016-0146-y.
    [3] Tonin FS, Rotta I, Mendes AM, Pontarolo R. Network meta-analysis: a technique to gather evidence from direct and indirect comparisons. Pharm Pract (Granada). 2017;15(1):943. doi: 10.18549/PharmPract.2017.01.943.
    [4] Jansen JP, Fleurence R, Devine B, et al. Interpreting indirect treatment comparisons and network meta-analysis for health-care decision making: report of the ISPOR Task Force on Indirect Treatment Comparisons Good Research Practices: part 1. Value Health. 2011;14(4):417-428. doi: 10.1016/j.jval.2011.04.002.
    [5] Hoaglin DC, Hawkins N, Jansen JP, et al. Conducting indirect-treatment-comparison and network-meta-analysis studies: report of the ISPOR Task Force on Indirect Treatment Comparisons Good Research Practices: part 2. Value Health. 2011;14(4):429-437. doi: 10.1016/j.jval.2011.01.011.
    [6] Riley RD, Dias S, Donegan S, et al. Using individual participant data to improve network meta-analysis projects. BMJ Evid Based Med. 2022. doi: 10.1136/bmjebm-2022-111931. Epub ahead of print.
    [7] Phillippo DM, Dias S, Elsada A, et al. Population adjustment methods for indirect comparisons: a review of National Institute for Health and Care Excellence technology appraisals. Int J Technol Assess Health Care. 2019;35(3):221-228. doi: 10.1017/S0266462319000333.
    [8] Jiang Y, Ni W. Performance of unanchored matching-adjusted indirect comparison (MAIC) for the evidence synthesis of single-arm trials with time-to-event outcomes. BMC Med Res Methodol. 2020;20(1):241. doi: 10.1186/s12874-020-01124-6.

  • Comparative Effectiveness in Real-World Settings through Pragmatic Clinical Trials


    Randomized controlled trials (RCTs) are the mainstay of clinical research; an estimated 18,000 RCTs are published each year. However, traditional RCTs usually take a long time to complete, are expensive, and produce results that are difficult to generalize to the real world, since they are derived under ideal conditions with strict inclusion and exclusion criteria. This adds a layer of complexity to decision-making by healthcare stakeholders and reimbursement authorities. To resolve this challenge, and fuelled by the increasing global shift towards personalized medicine and value-based payment models, new methods for generating evidence on the efficacy and safety of interventions in real-world settings are continuously sought.[1,2]

    This has been a major driver of the rapid increase in interest in comparative effectiveness research (CER), which compares the benefits, risks, and sometimes costs of alternative healthcare interventions (medicines, medical devices, procedures, and health services) in real-world settings. CER aims to assist consumers, clinicians, purchasers, and policymakers in making informed decisions, thereby improving healthcare at both the individual and population levels.[3]

    CER was brought into the spotlight by the American Recovery and Reinvestment Act of 2009, which provided US$1.1 billion over two years for conducting CER. This stimulated an increase in observational studies in the short run and in RCTs in the long run.[4] By 2020, the Patient-Centered Outcomes Research Institute (PCORI) had invested nearly $2.6 billion in more than 700 patient-centered CER studies in the USA.[5] In 2011, it was estimated that CER could contribute to a $31.6 billion reduction in research and development costs over a 10-year period by improving market access and reimbursement from private insurers.[6]

    CER often involves non-inferiority trials between two interventions that have similar therapeutic effects but differ in other aspects relevant to stakeholders, such as cost, adverse-effect profile, and route of administration. Among several candidate designs, the key trial design proposed for CER is the pragmatic clinical trial (PCT). PCTs are in fact RCTs conducted in a real-world setting, so the evidence they generate can be translated into patient care more efficiently and with better generalizability. Whereas traditional RCTs use a placebo or a well-controlled alternative intervention in a tightly controlled study setting, PCTs are intended to maintain the internal validity of RCTs while maximizing external validity (generalizability and applicability). PCTs are designed and implemented in ways that better address the demand for evidence about real-world risks and benefits to inform clinical and health-policy decisions.[4,7]

    PCTs are gaining traction in CER because they can efficiently generate evidence to inform real-world healthcare decisions by embedding research into routine care, in line with the goals of implementation research. A notable example of CER through PCTs is the ALLHAT trial, which concluded that thiazide diuretics are as effective as ACE inhibitors in the management of hypertension.[8] Similarly, the 2006 CATIE trial reported that atypical antipsychotics were not more effective than placebo in elderly patients with dementia.[8]

    Despite these advantages, the “embedded” nature of PCTs (i.e., RCTs embedded in routine care settings) raises ethical and regulatory challenges. Existing GCP guidelines, intended for traditional RCTs, do not adequately cover PCTs, and reforms relevant to the conduct of PCTs are needed.[9] The design and quality of a CER study also depend on the proper choice of the non-inferiority margin, yet defining that margin can be complex and quite challenging, and attrition bias adds to these complexities. These concerns are typically addressed using evidence from previous studies, preliminary data, and/or clinical judgment, which allow trialists to make reasonable assumptions about the anticipated effect of the reference treatment.[10,11]
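
    As a simple illustration of how a pre-specified margin is used, the sketch below applies the confidence-interval approach to a non-inferiority conclusion: the new intervention is declared non-inferior if the lower bound of the 95% confidence interval for the difference in effect lies above the negative of the margin. The numbers are hypothetical.

    ```python
    def non_inferior(diff, se, margin, z=1.96):
        """Confidence-interval approach to a non-inferiority conclusion.

        diff   : estimated effect of the new intervention minus the reference
                 (positive values favour the new intervention)
        se     : standard error of that difference
        margin : pre-specified non-inferiority margin (a positive number)
        Non-inferiority is concluded if the lower bound of the two-sided 95% CI
        for the difference lies above -margin.
        """
        lower = diff - z * se
        return lower > -margin

    # Hypothetical example: difference of -0.5 percentage points (SE 1.2), margin of 3
    print(non_inferior(diff=-0.5, se=1.2, margin=3.0))  # True: lower bound -2.85 > -3
    ```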

    Challenges also exist in sustaining behavioural change following decisions from CER. For example, based on CER, a clinical decision was made not to use stents for stable angina; this reduced stent implants by 13% in the US for four years, but by 2009 the number of implants had returned to previous levels.[12]

    The development of proper regulatory standards can enable CER conducted through pragmatic trials to realize its full potential: closing the research-practice gap in healthcare decision-making, reducing variability in clinical practice, and defining high-quality care for all patients. Indeed, CER has the potential to offer the best possible treatment choices to patients and healthcare providers.


    References:

    1. https://www.asianhhm.com/healthcare-management/decision-based-evidence-making
    2. Alsop J et al. The mixed randomized trial: combining randomized, pragmatic and observational clinical trial designs. Journal of Comparative Effectiveness Research. 2016;5(6):569-579.
    3. Dang A, Kaur K. Comparative effectiveness research and its utility in In-clinic practice. Perspectives in Clinical Research. 2016;7(1):9.
    4. Mullins C et al. Generating Evidence for Comparative Effectiveness Research Using More Pragmatic Randomized Controlled Trials. PharmacoEconomics. 2010;28(10):969-976.
    5. https://www.pcori.org/news-release/pcori-board-approves-new-150-million-initiative-fund-large-scale-patient-centered-clinical-studies. 2020.
    6. https://www.kff.org/wp-content/uploads/sites/3/2011/05/cer_paper_final.pdf
    7. Chalkidou K et al. The role for pragmatic randomized controlled trials (pRCTs) in comparative effectiveness research. Clinical Trials. 2012;9(4):436-446.
    8. Schneeweiss S. Developments in Post-marketing Comparative Effectiveness Research. Clinical Pharmacology and Therapeutics. 2007;82(2):143-156.
    9. Mentz R et al. Good Clinical Practice Guidance and Pragmatic Clinical Trials. Circulation. 2016;133(9):872-880.
    10. Colditz G, Winter A. Clinical trial design in the era of comparative effectiveness research. Open Access Journal of Clinical Trials. 2014:101.
    11. Siegel J et al. Comparative Effectiveness Research in the Regulatory Setting. Pharmaceutical Medicine. 2012;26(1):5-11.
    12. Kupersmith J, Ommaya A. The Past, Present, and Future of Comparative Effectiveness Research in the US Department of Veterans Affairs. The American Journal of Medicine. 2010;123(12):e3-e7.