Background: Trials within meta-analyses are often affected by varying degrees of internal bias caused by methodological flaws. Trial results can be adjusted using external information on the likely effects of particular biases. Two proposed methods adjust trial results based on: (1) empirical evidence from published meta-analyses; or (2) expert opinion elicited specifically for each trial in the meta-analysis.
Objectives: Our aim is to investigate agreement between empirical data-based and opinion-based approaches to predicting the bias associated with flaws in each of four trial characteristics: sequence generation, allocation concealment, blinding and incomplete outcome data.
Methods: For each bias component in turn, we sampled 30 meta-analyses from the Risk of Bias in Evidence Synthesis (ROBES) study, a large collection of meta-analyses. A bias model was fitted to all meta-analyses within ROBES to obtain fitted values for the trial-specific biases within each sampled meta-analysis. We selected the pair of trials with the highest and lowest fitted bias values within each meta-analysis, and then asked assessors to judge which trial within each pair was more biased on the basis of detailed trial design summaries.
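The pair-selection step described above can be sketched as follows; this is a minimal illustration with hypothetical trial identifiers and fitted bias values, not the study's actual code.

```python
# Hypothetical sketch: within one sampled meta-analysis, pick the pair of
# trials with the highest and lowest fitted bias values. Trial names and
# numbers below are illustrative only.

def select_extreme_pair(fitted_bias):
    """Return (most_biased, least_biased) trial IDs, given a dict
    mapping trial ID -> fitted bias value from the bias model."""
    most_biased = max(fitted_bias, key=fitted_bias.get)
    least_biased = min(fitted_bias, key=fitted_bias.get)
    return most_biased, least_biased

# Example: three trials with fitted biases from the model
pair = select_extreme_pair({"trial_A": 0.12, "trial_B": -0.05, "trial_C": 0.30})
print(pair)  # ('trial_C', 'trial_B')
```

Assessors would then be shown design summaries for the two selected trials and asked which they judge more biased, with their ranking compared against the model's ordering.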
Results: Assessors judged the two trials within a pair to be equally biased in 68% of trial rankings. Of the assessor opinions that judged one trial as more biased, the proportion agreeing with the ranking based on data-based fitted biases was highest for allocation concealment (79%) and blinding (79%), and lower for sequence generation (59%) and incomplete outcome data (56%).
Conclusions: Given that assessors chose 'equally biased' for the majority of trial rankings, we expect that incorporating opinion on bias may not reduce uncertainty much compared with using data-based evidence alone. However, combining data-based evidence on bias with opinion would be useful when data-based evidence is sparse.