Background: Quality assessment of included studies is a crucial step in any systematic review (SR). The review and synthesis of prediction-modelling studies is an evolving area, and a tool that facilitates quality assessment of prognostic and diagnostic prediction-modelling studies is needed.
Objectives: To introduce PROBAST, a tool for assessing the risk of bias and applicability of prediction-modelling studies in an SR.
Methods: A Delphi process involving 42 experts in the field of prediction research was used until agreement on the content of the final tool was reached. Existing initiatives in prediction research, such as the REMARK and TRIPOD reporting guidelines, formed part of the evidence base for tool development. The scope of PROBAST was determined with consideration of existing tools such as QUIPS and QUADAS-2.
Results: After six rounds of the Delphi procedure, a final tool was developed that uses a domain-based structure supported by signalling questions, similar to QUADAS-2. PROBAST assesses both the risk of bias and the applicability of prediction-modelling studies. Risk of bias refers to any flaw or shortcoming in the design, conduct or analysis of a primary study that is likely to distort the estimated predictive performance of a model; predictive performance is typically evaluated using calibration, discrimination and, sometimes, classification measures. The applicability assessment examines whether the prediction-model development or validation study matches the systematic review question in terms of the target population, predictors or outcomes of interest.
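For context, the following minimal sketch (in Python, using hypothetical data and the scikit-learn and statsmodels libraries; it is not part of PROBAST itself) illustrates how discrimination, via the c-statistic, and calibration, via the calibration intercept and slope, are commonly quantified when validating a binary-outcome prediction model.

```python
# Illustrative sketch only (hypothetical data): quantifying discrimination
# (c-statistic) and calibration (intercept and slope) for a binary-outcome
# prediction model. PROBAST assesses how studies handle such measures;
# it does not compute them.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

# Hypothetical predicted probabilities and observed outcomes from a validation study
p_pred = np.array([0.10, 0.25, 0.60, 0.30, 0.80, 0.55, 0.20, 0.90, 0.40, 0.70])
y_obs = np.array([0, 0, 1, 1, 1, 0, 0, 1, 0, 1])

# Discrimination: c-statistic (area under the ROC curve)
c_statistic = roc_auc_score(y_obs, p_pred)

# Calibration: regress the observed outcome on the linear predictor
# (logit of the predicted probability); an intercept near 0 and a slope
# near 1 indicate good calibration.
linear_predictor = np.log(p_pred / (1 - p_pred))
calibration_fit = sm.Logit(y_obs, sm.add_constant(linear_predictor)).fit(disp=0)
calibration_intercept, calibration_slope = calibration_fit.params

print(f"c-statistic: {c_statistic:.2f}")
print(f"calibration intercept: {calibration_intercept:.2f}, slope: {calibration_slope:.2f}")
```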
PROBAST comprises four domains (Participant selection; Predictors; Outcome; Analysis) and 23 signalling questions grouped within these domains.
Conclusions: PROBAST can be used to assess the quality of prediction-modelling studies included in an SR. The presentation will give an overview of the development process and introduce the final tool.