• Impact of Quantitative Bias Analysis (QBA) in External Control Arms

    In recent years, external control arms (ECAs) have been used increasingly in clinical research, particularly where randomized controlled trials (RCTs) are not feasible or appropriate. ECAs are built from real-world data (RWD) such as patient registries, electronic health records (EHRs), and other observational sources, offering critical reference points for single-arm studies.(1, 2) However, the risk of unmeasured confounding and systematic bias can significantly undermine the validity of ECAs.(3) Because ECAs lack the randomization of RCTs, they are fundamentally susceptible to bias arising from differences in patient characteristics, treatment practices, and data-collection methods.(4) Quantitative bias analysis (QBA) is a methodological tool that can help address this challenge.(5)

    QBA enables researchers to systematically model and quantify the possible impact of unmeasured confounding on treatment effect estimates. Instead of relying solely on conventional sensitivity analyses, or assuming that statistical adjustment for measured variables is adequate, QBA offers a structured approach to estimating how strongly an unobserved confounder would need to be associated with both treatment and outcome to explain away an observed treatment effect. This quantitative perspective adds rigour and transparency to ECA studies, particularly when the magnitude and direction of potential bias must be assessed or bounded. Growing recognition of QBA's applicability has encouraged its adoption in regulatory and health technology assessment (HTA) settings, where decision-makers need robust, reliable evidence for policy and reimbursement decisions.(5, 6)
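    The idea of adjusting an observed effect for a hypothesised unmeasured confounder can be illustrated with a classical deterministic bias-factor calculation (in the spirit of the Bross formula); this is a minimal sketch with made-up numbers, not the specific method used in any of the cited studies:

    ```python
    def confounding_bias_factor(p_exposed, p_unexposed, rr_conf_outcome):
        """Bias factor from an unmeasured binary confounder.

        p_exposed / p_unexposed: assumed prevalence of the confounder in
        the treated and external-control groups; rr_conf_outcome: assumed
        risk ratio of the confounder on the outcome.
        """
        num = p_exposed * (rr_conf_outcome - 1) + 1
        den = p_unexposed * (rr_conf_outcome - 1) + 1
        return num / den

    def adjusted_rr(observed_rr, p_exposed, p_unexposed, rr_conf_outcome):
        # Divide out the bias attributable to the hypothesised confounder.
        return observed_rr / confounding_bias_factor(
            p_exposed, p_unexposed, rr_conf_outcome)

    # Illustrative scenario: observed RR of 1.8; a confounder that doubles
    # outcome risk is present in 60% of the treated and 30% of controls.
    print(round(adjusted_rr(1.8, 0.6, 0.3, 2.0), 2))  # → 1.46
    ```

    Varying the three assumed bias parameters over plausible ranges shows directly how much of the observed effect could be an artefact of confounding.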

    The Q-BASEL study (Quantitative Bias Analysis in External Control Arms for Standardization and Evidence Level) is a crucial milestone in demonstrating the utility of QBA in practice. The study systematically applied QBA techniques across several real-world ECA case studies to estimate the possible impact of unmeasured confounding on observed treatment effects. By incorporating probabilistic bias analysis and deterministic sensitivity analyses, the researchers were able to quantify the robustness of the findings from these ECAs. The results emphasized that even moderate confounding could drastically change the interpretation of treatment benefit, while in several cases also demonstrating that the observed treatment effect was robust under reasonable bias assumptions. This reinforced the value of QBA as both a diagnostic method and a reliability enhancer for ECA-based evidence.(6)

    Notably, the Q-BASEL study encourages a mindset shift: from trying to remove all bias to explicitly measuring it and putting its effect into perspective. Embracing the inherent uncertainty of RWD empowers researchers and reviewers to make more informed decisions. The study also highlights the need for transparent assumptions, appropriate documentation of bias parameters, and clear stakeholder communication, all of which are vital if QBA is to meaningfully support decision-making. Its impact also extends beyond oncology and rare diseases, two fields with frequent adoption of ECAs, to broader treatment landscapes with rising RWD integration.(6)

    With the increasing use of ECAs in regulatory and HTA decisions, QBA offers a transparent, scientific method for evaluating whether treatment effect estimates may reflect bias rather than a real clinical benefit. The Q-BASEL study shows how this can be assessed with practical and reproducible approaches. As expectations for transparency and methodological robustness rise, integrating QBA into RWE studies may well become standard practice.


    References

    1. Sosinsky AZ, Parzynski CS, Casso D. The Role of External Control Arms in Drug Development and Considerations for Success. ISPOR – Value & Outcomes Spotlight. 2024; 10(4).
    2. Mishra-Kalyani PS, Amiri Kordestani L, Rivera DR, et al. External control arms in oncology: current use and future directions. Ann Oncol. 2022; 33(4):376-383.
    3. Quantifying Bias in Real-World Studies: A New Hope for RWD Acceptance or Are HTAers Gonna Hate? ISPOR. 2022. [Accessed online on 10th July 2025]. Available at: https://www.ispor.org/docs/default-source/euro2022/ispor-eu-2022—qbasel-symposium—v1.pdf?sfvrsn=d14165ea_0
    4. Khachatryan A, Read SH, Madison T. External control arms for rare diseases: building a body of supporting evidence. J Pharmacokinet Pharmacodyn. 2023; 50:501-506.
    5. Thorlund K, Duffield S, Popat S, et al. Quantitative bias analysis for external control arms using real-world data in clinical trials: a primer for clinical researchers. J Comp Eff Res. 2024; 13(3):e230147.
    6. Gupta A, Hsu G, Kent S, et al. Quantitative Bias Analysis for Single-Arm Trials With External Control Arms. JAMA Netw Open. 2025; 8(3):e252152.
  • Measuring the Invisible: Quantifying Bias in Real-World Evidence

    In real-world evidence (RWE) studies, bias is a persistent concern, often driven by how patients are selected, how data are captured, and how outcomes are defined.(1) Unlike randomized controlled trials (RCTs), which are designed to minimize systematic error, RWE studies typically rely on data generated during routine clinical practice and administrative processes. It is therefore crucial not only to identify bias but to quantify how much it may distort results.(1-4)

    Quantifying bias is different from simply adjusting for known issues. Quantification enables estimation of the possible impact of systematic error on effect estimates, providing transparency and a basis for evaluating the reliability of findings. Several methods have been developed to support this.(5, 6)

    Quantitative Bias Analysis (QBA) is one of the most structured of these methodologies. It explicitly models the direction and magnitude of bias through assumptions about misclassification, selection, or measurement error. Deterministic QBA produces corrected estimates under fixed bias scenarios, while probabilistic QBA characterizes the uncertainty in bias parameters through simulation. Rather than offering a single 'true' estimate, QBA gives decision-makers a range of possible results that reflects real-world imperfections.(5-7)
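    Probabilistic QBA can be sketched as follows for a single bias source, outcome misclassification: sample sensitivity and specificity from assumed priors, correct the observed counts under each draw, and summarise the resulting distribution of corrected risk ratios. All counts and prior ranges below are illustrative assumptions, not values from the cited studies:

    ```python
    import random
    import statistics

    def corrected_cases(a, n, sensitivity, specificity):
        """Back-correct an observed case count a (out of n) for
        nondifferential outcome misclassification."""
        return (a - (1 - specificity) * n) / (sensitivity - (1 - specificity))

    def probabilistic_qba(a1, n1, a0, n0, n_sims=10_000, seed=42):
        """Simulate corrected risk ratios under uncertain bias parameters.

        a1/n1: observed cases and total in the exposed group;
        a0/n0: the same for the comparator group.
        """
        rng = random.Random(seed)
        rrs = []
        for _ in range(n_sims):
            se = rng.uniform(0.85, 0.99)   # assumed prior on sensitivity
            sp = rng.uniform(0.95, 0.999)  # assumed prior on specificity
            c1 = corrected_cases(a1, n1, se, sp)
            c0 = corrected_cases(a0, n0, se, sp)
            if c1 > 0 and c0 > 0:  # keep only admissible corrections
                rrs.append((c1 / n1) / (c0 / n0))
        rrs.sort()
        return (rrs[int(0.025 * len(rrs))],   # 2.5th percentile
                statistics.median(rrs),
                rrs[int(0.975 * len(rrs))])   # 97.5th percentile

    low, mid, high = probabilistic_qba(a1=60, n1=500, a0=40, n0=500)
    print(f"corrected RR: {mid:.2f} (95% simulation interval {low:.2f}-{high:.2f})")
    ```

    The output is exactly the "range of possible results" described above: a simulation interval for the bias-corrected effect rather than a single point estimate.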

    Another distinct tool is the E-value, which quantifies the minimum strength of association that an unmeasured confounder would need to have with both the exposure and the outcome to fully explain away an observed association.(4, 5) A higher E-value indicates results that are more robust to unmeasured confounding. Although most commonly applied to unmeasured confounding, E-values can serve as a general benchmark for result stability across multiple types of bias.(5, 7, 8)
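    For a risk ratio point estimate, the E-value has the closed form RR + sqrt(RR × (RR − 1)); a minimal sketch:

    ```python
    from math import sqrt

    def e_value(rr):
        """E-value for a risk ratio point estimate.

        The minimum risk-ratio strength of association an unmeasured
        confounder would need with both exposure and outcome to fully
        explain away the observed RR. Protective effects (RR < 1) are
        inverted first so the formula works on the >= 1 scale.
        """
        rr = max(rr, 1 / rr)
        return rr + sqrt(rr * (rr - 1))

    # An observed RR of 2.0 could only be fully explained away by a
    # confounder associated with both exposure and outcome by a risk
    # ratio of at least ~3.41.
    print(round(e_value(2.0), 2))  # → 3.41
    ```

    The same formula applied to a confidence-interval limit (rather than the point estimate) gives the confounding strength needed to move the interval to the null.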

    Sensitivity analyses complement these tools by assessing how results vary under different assumptions.(9) Whether by reconsidering exposure time windows, adjusting outcome definitions, or simulating different missing-data scenarios, sensitivity analyses help assess whether a study's conclusions hold under reasonable alternative conditions.(5, 7, 9)
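    One of the simplest such analyses is an extreme-case bound for missing outcome data: recompute the estimate assuming all missing outcomes were events, then assuming none were. A sketch with hypothetical counts:

    ```python
    def risk_ratio(cases1, n1, cases0, n0):
        return (cases1 / n1) / (cases0 / n0)

    def missing_data_bounds(cases1, n1, miss1, cases0, n0, miss0):
        """Extreme-case sensitivity analysis for missing outcome data.

        Returns the risk ratio under the two most pessimistic scenarios
        for the exposed arm: all its missing outcomes are non-events and
        all comparator missings are events (best), and the reverse (worst).
        """
        worst = risk_ratio(cases1 + miss1, n1 + miss1, cases0, n0 + miss0)
        best = risk_ratio(cases1, n1 + miss1, cases0 + miss0, n0 + miss0)
        return best, worst

    # Illustrative counts: 30/200 events among exposed with 20 missing,
    # 20/200 among comparators with 10 missing (observed RR = 1.5).
    best, worst = missing_data_bounds(30, 200, 20, 20, 200, 10)
    print(f"risk ratio under extreme assumptions: {best:.2f} to {worst:.2f}")
    ```

    If the conclusion survives even these deliberately extreme scenarios, more realistic missing-data assumptions are unlikely to overturn it.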

    Negative control analyses are another important tool for detecting hidden bias. They apply the study design to exposures or outcomes that should have no true association with the effect under investigation; any residual association then signals bias that standard models may have missed. Spotting a signal where none should exist raises concerns about data validity or procedural flaws.(5-7)
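    The check itself is simple to operationalise: for each negative-control estimate, flag any confidence interval that excludes the null. The control names and estimates below are hypothetical:

    ```python
    def flags_residual_bias(ci_low, ci_high):
        """A negative control should show no effect (RR of about 1);
        a confidence interval excluding 1 suggests residual bias."""
        return not (ci_low <= 1.0 <= ci_high)

    # Hypothetical negative-control outcomes: (name, RR, 95% CI low, high).
    negative_controls = [
        ("ankle fracture", 1.05, 0.90, 1.22),    # consistent with the null
        ("routine eye exam", 1.40, 1.15, 1.70),  # signal where none should be
    ]
    for name, rr, lo, hi in negative_controls:
        status = "possible residual bias" if flags_residual_bias(lo, hi) else "ok"
        print(f"{name}: RR {rr} ({lo}-{hi}) -> {status}")
    ```

    More formal approaches calibrate the main estimate against a whole set of negative controls, but even this flagging step makes residual bias visible rather than assumed away.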

    None of these quantification techniques aims to eliminate bias; rather, they make its impact visible. The different approaches may indicate different degrees of bias, yet together they move interpretation from binary "yes/no" inferences toward informed judgments about how much confidence a given piece of evidence deserves.(5-7)

    As RWE becomes an increasingly important driver of regulatory and clinical decisions, quantifying bias should become routine practice. It facilitates transparency and reproducibility, as well as more nuanced conversations about evidence quality.


    References

    1. Gokhale M, Stürmer T, Buse JB. Real-world evidence: the devil is in the detail. Diabetologia. 2020; 63:1694-1705.
    2. Kim HS, Kim JH. Proceed with Caution When Using Real World Data and Real World Evidence. J Korean Med Sci. 2019 Jan 16;34(4):e28.
    3. Maihöfner C, Mallick-Searle T, Vollert J, Kalita P, Sood Sethi V. Review of Challenges in Performing Real-World Evidence Studies for Nonprescription Products. Pragmat Obs Res. 2025; 16:7-18.
    4. Bykov K, Patorno E, D’Andrea E, et al. Prevalence of Avoidable and Bias-Inflicting Methodological Pitfalls in Real-World Studies of Medication Safety and Effectiveness. Clin Pharmacol Ther. 2022; 111(1):209-217.
    5. Petersen JM, Ranker LR, Barnard-Mayers R, MacLehose RF, Fox MP. A systematic review of quantitative bias analysis applied to epidemiological research. Int J Epidemiol. 2021 Nov 10;50(5):1708-1730.
    6. Shi X, Liu Z, Zhang M, Hua W, Li J, Lee JY, Dharmarajan S, Nyhan K, Naimi A, Lash TL, Jeffery MM, Ross JS, Liew Z, Wallach JD. Quantitative bias analysis methods for summary-level epidemiologic data in the peer-reviewed literature: a systematic review. J Clin Epidemiol. 2024 Nov;175:111507.
    7. Ramagopalan S, et al. Quantifying Bias in Real-World Studies: A New Hope for RWD Acceptance or Are HTAers Gonna Hate? [Accessed online on 4th July 2025]. Available at: https://www.ispor.org/docs/default-source/euro2022/ispor-ad-symposium-combined-slides—final-v4.pdf?sfvrsn=9e41ba3_0
    8. Barberio J, Ahern TP, MacLehose RF, et al. Assessing Techniques for Quantifying the Impact of Bias Due to an Unmeasured Confounder: An Applied Example. Clin Epidemiol. 2021; 13:627-635.
    9. Greenland S. Sensitivity Analysis and Bias Analysis. In: Ahrens W, Pigeot I, eds. Handbook of Epidemiology. New York, NY: Springer; 2014.