Today’s era of risk-based, precision, and personalized medicine demands clinical prediction models. Prediction modelling studies focus on two kinds of outcomes: diagnosis (the probability that a condition is currently present but undetected) and prognosis (the probability of developing a certain outcome in the future). (1,2) These studies develop, validate, or update a multivariable prediction model, in which multiple predictors are combined to estimate probabilities that inform, and often guide, individual care. Evidence from the literature shows both prognostic and diagnostic models being widely used across medical domains and settings, (3) such as cancer, (4) neurology, (5) and cardiovascular disease. (6) Competing prediction models for the same outcome or target population are increasingly common, which necessitates systematic reviews of prediction model studies: their coexistence can create confusion among health care providers, guideline developers, and policymakers about which model to use or recommend, and in which persons or settings. (1,7)

Quality assessment is vital when conducting any systematic review, and several tools are available for assessing risk of bias (ROB). (8) For example, the QUIPS (Quality In Prognosis Studies) tool evaluates ROB in predictor finding (prognostic factor) studies. (9) Similarly, the revised Cochrane ROB tool (ROB 2.0) (10) assesses the methodological quality of prediction model impact studies that use a randomized comparative design, while ROBINS-I (Risk Of Bias In Non-randomised Studies of Interventions) covers those with a non-randomized comparative design. (11) Prediction model studies, as well as their systematic reviews, are increasingly used as evidence for clinical guidance and decision making, which warrants a tool for the quality assessment of individual prediction model studies. For this purpose, PROBAST (Prediction model Risk Of Bias ASsessment Tool) was recently introduced, filling the gap left by the lack of an appropriate tool to evaluate ROB in systematic reviews of diagnostic and prognostic prediction model studies. (7,8,12)

Bias is a systematic error in a study that leads to inaccurate results and thus undermines the study’s internal validity. (8) Inadequacies in study design, conduct, or analysis can distort estimates of a model’s predictive performance, thereby introducing ROB. In addition, when a primary study’s population, predictors, or outcomes differ from those specified in the review question, concerns arise about that study’s applicability. PROBAST was therefore developed to address the lack of a tool designed specifically to assess the ROB and applicability of primary prediction model studies.

Development of PROBAST:

PROBAST was developed using a 4-stage approach for developing health research reporting guidelines, consisting of the following stages: 1) defining the scope, 2) reviewing the evidence base, 3) conducting a Web-based Delphi procedure, and 4) refining the tool through piloting. (8,13) PROBAST was designed mainly to assess primary studies included in a systematic review, not predictor finding or prediction model impact studies. The steering group of 9 experts in prediction model studies and in the development of quality assessment tools agreed that PROBAST would assess both ROB and concerns regarding the applicability of a study evaluating a multivariable prediction model intended for individualized diagnosis or prognosis. In the first stage, a domain-based structure was adopted to define the scope of PROBAST, similar to that used in other ROB tools such as ROB 2.0, ROBINS-I, QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies 2), and ROBIS. In the second stage, 3 approaches were used to build an evidence base: relevant methodological reviews in the area of prediction model research were identified; members of the steering group identified further methodological studies; and additional evidence was gathered through the Delphi procedure in a wider group. This evidence produced an initial list of signalling questions to consider for inclusion in PROBAST. In the third stage, a modified Delphi process, conducted through 7 rounds of Web-based surveys, was used to gain structured feedback and agreement on the scope, structure, and content of PROBAST. The 38-member Delphi group included methodological experts in prediction model research and quality assessment tool development, experienced systematic reviewers, commissioners, and representatives of reimbursement agencies.
The inclusion of these various stakeholders ensured fair representation of the views of end users, methodological experts, and decision makers. In the fourth stage, the then-current version of PROBAST was piloted at multiple workshops at consecutive Cochrane Colloquia and at numerous workshops with MSc and PhD students. The feedback received was used to further refine the content and structure of PROBAST, the wording of the signalling questions, and the content of the guidance documents. (7,8)

PROBAST consists of 4 steps: 1) specifying the systematic review question, 2) classifying the type of prediction model study, 3) assessing ROB and applicability, and 4) making the overall judgement. PROBAST is the first comprehensively developed tool designed explicitly to assess the quality of studies developing, validating, or updating both diagnostic and prognostic prediction models, regardless of the medical domain, type of outcome, predictors, or statistical technique used. (7,8) PROBAST was introduced earlier this month in two parts: the first publication, by Wolff et al., (8) describes the development and scope of PROBAST, while the second, by Moons et al., (7) explains how to apply PROBAST and how to judge ROB and applicability.
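The overall judgement (step 4) aggregates the domain-level ROB ratings. As a minimal sketch, the snippet below assumes the commonly described aggregation rule (overall ROB is low only if all four domains are rated low, and high if any domain is rated high); the function name and this simplified rule are illustrative only, and the full PROBAST guidance documents contain additional nuances that are not captured here.

```python
# Simplified sketch of deriving an overall risk-of-bias (ROB) judgement
# from the four PROBAST ROB domains. The aggregation rule used here is an
# assumption for illustration; consult the PROBAST guidance for the
# complete rules and their exceptions.

DOMAINS = ("participants", "predictors", "outcome", "analysis")

def overall_rob(judgements: dict) -> str:
    """Aggregate domain judgements ('low', 'high', 'unclear') into one rating."""
    ratings = [judgements[d] for d in DOMAINS]
    if all(r == "low" for r in ratings):
        return "low"      # low overall ROB only when every domain is low
    if any(r == "high" for r in ratings):
        return "high"     # a single high-ROB domain makes the overall rating high
    return "unclear"      # otherwise at least one domain is unclear

print(overall_rob({"participants": "low", "predictors": "low",
                   "outcome": "unclear", "analysis": "low"}))  # prints "unclear"
```

The same structure could be repeated for the three applicability domains (participants, predictors, and outcome), which are judged separately from ROB.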

Potential users of PROBAST include organizations that support decision making (such as the National Institute for Health and Care Excellence and the Institute for Quality and Efficiency in Health Care); researchers and clinicians interested in evidence-based medicine or involved in guideline development; and journal editors and manuscript reviewers. (8)

References 

  1. Bouwmeester W, Zuithoff NP, Mallett S, et al. Reporting and methods in clinical prediction research: a systematic review. PLoS Med 2012; 9:1-12.
  2. Steyerberg EW, Moons KG, van der Windt DA, et al; PROGRESS Group. Prognosis Research Strategy (PROGRESS) 3: prognostic model research. PLoS Med 2013; 10:e1001381.
  3. Collins GS, Mallett S, Omar O, et al. Developing risk prediction models for type 2 diabetes: a systematic review of methodology and reporting. BMC Med 2011; 9:103.
  4. Altman DG. Prognostic models: a methodological framework and review of models for breast cancer. Cancer Invest 2009; 27:235-43.
  5. Counsell C, Dennis M. Systematic review of prognostic models in patients with acute stroke. Cerebrovasc Dis 2001; 12:159-70.
  6. Damen JA, Hooft L, Schuit E, et al. Prediction models for cardiovascular disease risk in the general population: systematic review. BMJ 2016; 353:i2416.
  7. Moons KGM, Wolf RF, Riley RD, et al. PROBAST: A tool to assess risk of bias and applicability of prediction model studies: Explanation and elaboration. Ann Intern Med 2019; 170:W1-W33.
  8. Wolff RF, Moons KGM, Riley RD, et al; PROBAST Group. PROBAST: a tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med 2019; 170:51-58.
  9. Hayden JA, van der Windt DA, Cartwright JL, et al. Assessing bias in studies of prognostic factors. Ann Intern Med 2013; 158:280-6.
  10. Higgins JPT, Savović J, Page MJ, et al; ROB2 Development Group. A revised tool for assessing risk of bias in randomized trials. In: Chandler J, McKenzie J, Boutron I, Welch V, eds. Cochrane Methods. London: Cochrane; 2018:1-69.
  11. Sterne JA, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ 2016; 355:i4919.
  12. PROBAST. Available at: http://www.probast.org/ABOUT
  13. Moher D, Schulz KF, Simera I, et al. Guidance for developers of health research reporting guidelines. PLoS Med 2010; 7:e1000217.

Written by: Ms. Tanvi Laghate
