Missing data are a major concern in any research project and are often unavoidable despite investigators’ best efforts. Missing outcomes have two effects: they reduce precision and power, and they can introduce bias. Some loss of precision is inevitable, though it can be limited by making full use of the available data; for example, individuals who dropped out before the end of the study but who nevertheless reported intermediate values of the outcome should not be excluded from the analysis. Bias, however, is something the statistician can aim to reduce through a suitable choice of analysis. (1)
Randomized controlled trials (RCTs) typically have missing outcome data for some participants. Patient-reported outcomes (PROs), such as health-related quality of life (QoL), are particularly prone to missing data because patients fail to complete follow-up questionnaires. Statistical analyses handle missing data by making assumptions; some explicitly specify the values of the missing data, e.g. treating missing values as failures, as in smoking cessation trials. Other assumptions are statements about the similarity of distributions, such as ‘last observation carried forward’. (1,2)
For the primary trial analysis, an approach is usually recommended that is valid under assumptions that are plausible for the study at hand. Rather than assuming that the data are ‘missing completely at random’ (MCAR), the primary analysis should typically assume they are ‘missing at random’ (MAR), i.e. that the probability of missingness does not depend on the patient’s outcome after conditioning on the observed variables (e.g. the patient’s baseline characteristics). However, the MAR assumption is unlikely to hold in many settings; for example, patients in relatively poor health are less likely to complete the requisite questionnaires, making the outcome data ‘missing not at random’ (MNAR). (2)
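The MCAR/MAR/MNAR distinction can be illustrated with a small simulation. The sketch below (all numbers are illustrative, not from any real trial) generates an outcome correlated with an observed baseline covariate and imposes each missingness mechanism in turn, showing how the complete-case mean behaves:

```python
# Hypothetical simulation contrasting MCAR, MAR, and MNAR missingness.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
baseline = rng.normal(0, 1, n)             # observed covariate (e.g. baseline QoL)
outcome = baseline + rng.normal(0, 1, n)   # follow-up outcome, true mean 0

# MCAR: missingness unrelated to anything
mcar = rng.random(n) < 0.3
# MAR: missingness depends only on the observed baseline value
# (poorer baseline health -> more likely to be missing)
mar = rng.random(n) < 1 / (1 + np.exp(baseline))
# MNAR: missingness depends on the (unobserved) outcome itself
mnar = rng.random(n) < 1 / (1 + np.exp(outcome))

for label, miss in [("MCAR", mcar), ("MAR", mar), ("MNAR", mnar)]:
    print(f"{label}: complete-case mean = {outcome[~miss].mean():.3f}")
# Under MCAR the complete-case mean is unbiased; under MAR it is biased
# unless the analysis conditions on baseline; under MNAR no observed
# variable can remove the bias.
```

The upward bias of the complete-case mean under MAR and MNAR is what a suitable analysis (conditioning on baseline under MAR) can or cannot correct.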
The US National Research Council (NRC) report on missing data in clinical trials advocates sensitivity analyses that allow for the data being MNAR, in line with general methodological guidance for handling missing data and earlier specific advice on intention-to-treat (ITT) analysis in RCTs. (3) In practice, however, systematic reviews show that RCTs often do not handle missing data appropriately. (4) Sensitivity analysis can be approached by statistically modeling parameters that represent outcome differences between individuals with complete versus missing data, and by exploring how inferences vary as these ‘sensitivity parameters’ take specific values. (5) The final output, i.e. results and conclusions, can then be compared over a reasonable range of values, possibly identifying a ‘tipping point’ at which the results change. However, this approach has shortcomings. (2)
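A tipping-point analysis of the kind described above can be sketched as follows. The example is a minimal delta-adjustment on a continuous outcome: missing active-arm values are imputed at the observed active-arm mean shifted by an offset delta, and the test is repeated as delta becomes more pessimistic. All summaries (means, sample sizes, dropout count) are hypothetical:

```python
# Sketch of a tipping-point sensitivity analysis via delta adjustment.
# All trial numbers below are illustrative, not from any real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(50, 10, 100)     # fully observed control arm
active_obs = rng.normal(55, 10, 80)   # observed active-arm outcomes
n_missing = 20                        # active-arm dropouts

results = {}
for delta in range(0, -21, -4):       # increasingly pessimistic MNAR offsets
    imputed = np.full(n_missing, active_obs.mean() + delta)
    active = np.concatenate([active_obs, imputed])
    t, p = stats.ttest_ind(active, control)
    results[delta] = p
    print(f"delta={delta:+3d}  diff={active.mean() - control.mean():6.2f}  p={p:.4f}")
# The tipping point is the delta at which p first crosses the chosen
# significance threshold, i.e. where the trial's conclusion would change.
```

In a real application the imputation would be multiple rather than single, so this understates the uncertainty; the point is only to show the scan over delta.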
An alternative is to allow experts to quantify their views. This is not only more intuitive and attractive for them; combined with a fully Bayesian approach, it properly captures expert opinion (and the associated uncertainty) about the missing data and reflects it in the resulting estimate of the treatment effect and its credible interval. This is particularly useful for those who need a quantitative summary of the trial, such as systematic reviewers, decision makers and health providers, because it provides a quantitative synopsis of how experts interpret the results given the missing data. When reviewing a study, experts implicitly ‘fill in’ the gaps created by the missing data to arrive at their conclusions. The proposed elicitation approach, together with a Bayesian analysis, allows the impact of incorporating expert knowledge to be quantified through to the estimates of treatment effectiveness. (1,2)
Bayesian sensitivity analyses require practical tools to make expert elicitation easier, and recent research has focused on elicitation within group meetings. Group-level elicitation has benefits for training and clarification, and it facilitates behavioral aggregation to achieve consensus. (6) However, because of the ‘feedback’ loop, these approaches are costly in both money and time, so in many RCTs it may not be viable to elicit opinion from a sufficient number and range of experts. Wider uptake of recommended sensitivity analyses for missing data in RCTs requires more accessible, practical tools for eliciting and synthesizing expert opinion to be developed and exemplified. (2)
One recently suggested option is to use open-source software to elicit beliefs, either face-to-face or online, from a reasonably large number of experts without imposing an undue burden. The elicited views can then be converted into informative priors for the sensitivity parameters of a pattern-mixture model, allowing for correlation in the elicited values across the trial arms. The trial data can subsequently be re-evaluated under different MNAR assumptions to explore the robustness of the results. These methods, combined with the expected level of loss to follow-up, can provide an improved estimate of the probable impact of missing data on the trial’s results, and can therefore help improve trial design so that the study results are more robust to anticipated levels of missing data.
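The pattern-mixture approach with elicited, correlated priors can be sketched by Monte Carlo. Below, the sensitivity parameter for each arm is the mean difference between dropouts and completers; a bivariate normal prior (its means, standard deviations, and cross-arm correlation) stands in for values that would come from expert elicitation, and all trial summaries are illustrative:

```python
# Sketch of a pattern-mixture sensitivity analysis with elicited priors.
# The prior on (delta_control, delta_active) and the arm summaries are
# hypothetical placeholders for elicited and observed quantities.
import numpy as np

rng = np.random.default_rng(1)

# Illustrative observed summaries per arm: (mean, SE of mean, proportion missing)
obs = {"control": (50.0, 1.0, 0.15), "active": (55.0, 1.1, 0.25)}

# Elicited prior: experts expect dropouts to score ~3 points worse than
# completers, with uncertainty and positive correlation across arms.
mu = np.array([-3.0, -3.0])
sd = np.array([2.0, 2.0])
rho = 0.6
cov = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1]**2]])

n_draws = 100_000
deltas = rng.multivariate_normal(mu, cov, n_draws)

arm_means = []
for k, (m, se, p_miss) in enumerate(obs.values()):
    m_obs = rng.normal(m, se, n_draws)   # sampling uncertainty in observed mean
    # Pattern-mixture: overall mean mixes completers and (shifted) dropouts
    arm_means.append((1 - p_miss) * m_obs + p_miss * (m_obs + deltas[:, k]))
effect = arm_means[1] - arm_means[0]     # active minus control

lo, hi = np.percentile(effect, [2.5, 97.5])
print(f"treatment effect: {effect.mean():.2f}  95% CrI ({lo:.2f}, {hi:.2f})")
```

The resulting interval is wider than a complete-case interval because the experts’ uncertainty about the dropouts is propagated into the treatment-effect estimate, which is precisely the behavior the elicitation approach is meant to deliver.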
- Jackson D, White IR, Leese M. How much can we learn about missing data? An exploration of a clinical trial in psychiatry. Journal of the Royal Statistical Society Series A (Statistics in Society) 2010; 173(3):593–612.
- Mason AJ, Gomes M, Grieve R, et al. Development of a practical approach to expert elicitation for randomized controlled trials with missing health outcomes: Application to the IMPROVE trial. Clinical Trials 2017; 14(4):357–367.
- Little RJ, D’Agostino R, Cohen ML, et al. The prevention and treatment of missing data in clinical trials. N Engl J Med 2012; 367(14):1355–1360.
- Bell ML, Fiero M, Horton NJ, et al. Handling missing data in RCTs; a review of the top medical journals. BMC Med Res Methodol 2014; 14:118.
- Little RJA. A class of pattern-mixture models for normal incomplete data. Biometrika 1994; 81(3):471–483.
- O’Hagan A, Buck CE, Daneshkhah A, et al. Uncertain judgments: eliciting experts’ probabilities. 1st ed. Hoboken, NJ: John Wiley & Sons, 2006.