Meta-analysis is crucial in evidence-based medicine because it combines data from multiple studies to produce more precise estimates of treatment effects. However, when head-to-head clinical trials directly comparing treatments are scarce, Indirect Treatment Comparisons (ITCs) become valuable by offering insights through common comparators. Conventional ITC approaches rely on aggregate data and assume that effect-modifying variables are distributed uniformly across trials. The Matching-Adjusted Indirect Comparison (MAIC) methodology, which relaxes this assumption, is gaining momentum, particularly in submissions to reimbursement organizations.[1-3]
MAICs extend the traditional ITC method and were developed to address some of its limitations, particularly confounding by patient characteristics. MAICs attempt to make the compared treatment groups more comparable by adjusting for patient-level characteristics that may influence treatment outcomes. Within Health Technology Assessment (HTA) submissions, MAICs complement unadjusted ITC results, even when apparent differences in relative treatment efficacy are modest. The method aims to minimize bias, facilitating a fair and nuanced comparison of therapies closer to what would be observed in practice.[4-6]
MAICs combine individual patient data (IPD) from an intervention trial (e.g., the manufacturer's product) with published aggregate data from the comparator's trial, and achieve balance by reweighting the IPD. Techniques such as propensity-score weighting estimated by the method of moments, or entropy balancing, are central here: they ensure that the reweighted baseline characteristics of the IPD match those reported for the comparator trial, so that outcomes in the reweighted IPD can be compared against the published aggregate outcomes to estimate the relative treatment effect.
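The method-of-moments reweighting described above can be sketched in a few lines of Python. This is a minimal illustration on made-up data (the covariates, aggregate means, and sample sizes are all hypothetical, not from any real trial): centering the IPD covariates at the published aggregate means and minimizing Q(α) = Σᵢ exp(xᵢᵀα) yields weights whose gradient condition forces the weighted IPD means to equal the aggregate means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical IPD from the intervention trial: age and sex indicator
age = rng.normal(58, 9, 300)
male = rng.binomial(1, 0.62, 300).astype(float)
ipd = np.column_stack([age, male])

# Published aggregate baseline characteristics of the comparator trial
# (illustrative values): mean age 61, 55% male
agg_means = np.array([61.0, 0.55])

# Center the IPD at the aggregate means; minimizing the convex objective
# Q(alpha) = sum_i exp(x_i @ alpha) by Newton's method makes its gradient
# vanish, which happens exactly when the weighted means of the centered
# covariates are zero, i.e. when the reweighted IPD matches the aggregates.
x = ipd - agg_means
alpha = np.zeros(x.shape[1])
for _ in range(50):
    w = np.exp(x @ alpha)
    grad = x.T @ w                      # sum_i x_i * w_i
    hess = (x * w[:, None]).T @ x       # sum_i x_i x_i' * w_i
    alpha -= np.linalg.solve(hess, grad)

weights = np.exp(x @ alpha)
rebalanced = (weights[:, None] * ipd).sum(axis=0) / weights.sum()

# Reweighting costs precision: the effective sample size is below n=300
ess = weights.sum() ** 2 / (weights ** 2).sum()
```

After this step, `rebalanced` reproduces the comparator trial's aggregate means, and `ess` quantifies how much information the reweighting has sacrificed, which is worth reporting alongside any MAIC result.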
MAICs predominantly operate within an "anchored" framework, relying on a shared comparator arm (e.g., placebo) to ground the comparison. This approach, used in connected networks, preserves within-trial randomization and thereby shields the estimate from imbalances in prognostic factors; however, claims about effect modification must be substantiated by empirical evidence or clinical insight. Conversely, the "unanchored" MAIC is used in disconnected networks that lack a common comparator, directly comparing reweighted IPD outcomes against published aggregate data. Unanchored comparisons require accurate estimates of absolute effects and adjustment for all prognostic factors and effect modifiers, and unobserved confounding remains a persistent risk because randomization is not preserved. Fundamentally, anchored MAICs estimate relative treatment effects against a common reference, whereas unanchored MAICs compare absolute outcomes across trials.[6,7]
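The anchored framework can be made concrete with a small worked example. Once the IPD have been reweighted, the relative effect of treatment A versus the common comparator C is combined with the published effect of B versus C in a Bucher-style indirect comparison, so that the common arm cancels out. The effect estimates and standard errors below are invented purely for illustration:

```python
import math

# Illustrative (made-up) estimates on the log hazard-ratio scale
d_AC, se_AC = -0.45, 0.12  # A vs common comparator C, from reweighted IPD
d_BC, se_BC = -0.30, 0.15  # B vs C, from the published comparator trial

# Anchored (Bucher) indirect comparison: the common comparator cancels,
# and the variances of the two independent estimates add
d_AB = d_AC - d_BC
se_AB = math.sqrt(se_AC ** 2 + se_BC ** 2)
lo, hi = d_AB - 1.96 * se_AB, d_AB + 1.96 * se_AB

print(f"HR A vs B: {math.exp(d_AB):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
# → HR A vs B: 0.86 (95% CI 0.59 to 1.25)
```

Note how the confidence interval for A versus B is wider than either input: indirect estimates inherit the uncertainty of both trials, which is one reason anchored comparisons with modest sample sizes often fail to reach statistical significance.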
MAICs often carry a lower risk of confounding because patients are matched on key characteristics; for the same reason, potential bias from differences between the treatment groups in the original trials is also reduced. Further, because MAIC creates a more balanced comparison by aligning patient characteristics, treatment estimates are often more robust and reliable than those from conventional ITCs. However, MAICs also have limitations relating to the availability, quality, and completeness of suitable IPD, potential selection bias in the patient data, and challenges around assumptions and extrapolation. While MAIC uses IPD to mitigate observed differences, unobserved disparities can lead to residual confounding. Even when placebo-arm outcomes are balanced, unobserved factors that affect treatment outcomes but not placebo outcomes can bias comparisons. Practical challenges include the need for matched outcome definitions and inclusion/exclusion criteria, and the inability to fit or calibrate propensity score models using aggregate data alone. Balancing multiple baseline factors relies on an adequate number of patients with IPD, and the reweighting itself can substantially reduce the effective sample size. MAIC may be used for single-arm trials, but the absence of a common comparator limits validation. Irreconcilable differences in trial design or patient characteristics might exclude trials from the analysis, necessitating a trade-off between including evidence and reducing heterogeneity. Sensitivity analyses are crucial for assessing the impact of trial inclusion/exclusion on results.[8,9]
In a landscape where clinical decision-making hinges on robust evidence, MAIC is a valuable tool, offering unique perspectives and cautionary lessons. As researchers, practitioners, and evaluators continue to explore the horizons of evidence synthesis, the pursuit of accuracy, transparency, and informed choices remains paramount. By embracing the insights and addressing the limitations of MAIC, we inch closer to a comprehensive understanding of treatment landscapes and forge a path toward more informed and patient-centered healthcare decisions.