Evaluating the quality of evidence and classification of intervention approaches for children with difficulties – two sources of unreliability

ID: 2097

Session: Poster session 2 Thursday: Evidence synthesis - methods / improving conduct and reporting

Date: Thursday 14 September 2017 - 12:30 to 14:00

All authors in correct order:

Miyahara M1, Lagisz M2, Nakagawa S2, Henderson S3
1 University of Otago, New Zealand
2 University of New South Wales, Australia
3 University College London, UK
Presenting author: Motohide Miyahara

Abstract text
Background: When meta-analytic reviewers attempt to determine which intervention approach might be most effective, they often cluster apparently similar methods into subgroups so that an aggregated effect size can be computed for each. They then recommend the type of intervention with the highest effect size. Fundamental to this approach are the reliable and valid evaluation of the quality of evidence for each trial, the clarity of the description of the main characteristics of each form of intervention, and the consequent reliability and validity of the reviewers' classification of each trial into subgroups. To date, little information is available on whether meta-analytic reviewers agree on the ways they evaluate the quality of evidence and classify and name interventions for children with developmental coordination disorder (DCD), or on what the sources of discrepancy are.
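To illustrate the subgrouping step described above, the short Python sketch below pools hypothetical trial effect sizes within each intervention subgroup using fixed-effect inverse-variance weighting. The subgroup labels, effect sizes, and variances are all invented for illustration and do not come from the reviews examined here.

    # Illustrative only: hypothetical trials grouped by intervention
    # approach; a fixed-effect, inverse-variance weighted mean effect
    # size is pooled per subgroup.
    from collections import defaultdict

    trials = [
        # (subgroup label, effect size, variance) -- all values invented
        ("task-oriented",    0.60, 0.04),
        ("task-oriented",    0.45, 0.09),
        ("process-oriented", 0.30, 0.05),
        ("process-oriented", 0.20, 0.08),
    ]

    groups = defaultdict(list)
    for label, es, var in trials:
        groups[label].append((es, var))

    for label, data in groups.items():
        weights = [1.0 / var for _, var in data]  # inverse-variance weights
        pooled = sum(w * es for (es, _), w in zip(data, weights)) / sum(weights)
        print(f"{label}: pooled effect size = {pooled:.2f} (k = {len(data)} trials)")

A misclassified trial in this scheme shifts the pooled estimate of two subgroups at once, which is why the reliability of the classification matters as much as the quality of the trials themselves.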

Objectives: To examine the consistency with which the quality of evidence of trials is evaluated and intervention approaches are classified across 3 recent systematic and meta-analytic reviews of studies purporting to evaluate intervention outcomes for children with DCD, and to address the problems encountered.

Methods: Two authors independently assessed the consistency of the evaluation of the quality of evidence for each trial and of the classification of intervention approaches for children with DCD across 3 recent comprehensive systematic and meta-analytic reviews. Any discrepancies in their assessments were resolved by discussion.

Results: Both the evaluation of the quality of evidence and the classification of intervention approaches yielded the same discrepancy rate of 25% across the 3 reviews.
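To make the reported figure concrete, the sketch below shows one way a between-reviewer discrepancy rate could be computed, with Cohen's kappa as a stricter, chance-corrected alternative. The two raters' labels are hypothetical and merely arranged to reproduce a 25% rate; the abstract reports only the percent discrepancy, not kappa.

    # Illustrative only: two reviewers' subgroup classifications of the
    # same 8 hypothetical trials; 2 of 8 labels differ, giving a 25%
    # discrepancy rate. Cohen's kappa additionally corrects for chance.
    rater_a = ["task", "task", "process", "task", "process", "process", "task", "task"]
    rater_b = ["task", "process", "process", "task", "process", "task", "task", "task"]

    n = len(rater_a)
    agree = sum(a == b for a, b in zip(rater_a, rater_b))
    print(f"Discrepancy rate: {1 - agree / n:.0%}")  # -> 25%

    p_o = agree / n  # observed agreement
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in set(rater_a) | set(rater_b))  # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    print(f"Cohen's kappa: {kappa:.2f}")  # -> 0.47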

Conclusions: Grouping together intervention approaches that actually differ on some critical feature may lead to the dissemination of inaccurate information. When future meta-analytic reviewers conduct a subgroup analysis, they should gain in-depth knowledge both of evaluating the quality of evidence and of each intervention approach, and should seek expert opinion widely to ensure the reliability and validity of the quality-of-evidence evaluation and the subgroup classification.