Background: Non-randomised study designs, especially controlled before-after (CBA) studies using aggregate-level data, are increasingly used to evaluate policies and programmes in healthcare, public health, social welfare and education. Most researchers, however, use suboptimal or even inappropriate analysis methods, which may lead to distorted conclusions about the intervention effect. It is important to understand the implications of different analysis methods for the findings of individual studies as well as for the conclusions of systematic reviews.
Objectives: We assessed how different analysis methods commonly applied in CBA studies influence the size of the effect estimate and its precision.
Methods: We re-analysed a previously conducted CBA study by Paudel et al. (2017), 'Impact of the Community-Based Newborn Care Package in Nepal: a quasi-experimental evaluation', which examined the impact of a government programme on a range of behaviours influencing neonatal health in 10 intervention districts and 10 propensity-score-matched control districts. We performed the following pre-defined analyses:
- comparison of post-intervention means between intervention and control groups;
- comparison of pre- and post-intervention means of the intervention group only;
- comparison of pre-post mean differences in intervention and control groups; and
- difference-in-differences analysis.
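For orientation, the difference-in-differences analysis corresponds to a regression model of the following generic form fitted to the district-level outcomes (the notation is illustrative, not the study's exact specification):

Y_{dt} = \beta_0 + \beta_1 \mathrm{Treat}_d + \beta_2 \mathrm{Post}_t + \beta_3 (\mathrm{Treat}_d \times \mathrm{Post}_t) + \varepsilon_{dt}

where Y_{dt} is the outcome for district d in period t, Treat_d indicates an intervention district, Post_t indicates the post-intervention period, and \beta_3 is the difference-in-differences estimate of the intervention effect.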
We also assessed the effect of adjusting, or not adjusting, for clustering by district.
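To illustrate the clustering issue, the sketch below fits a difference-in-differences model to simulated aggregate data with and without cluster-robust standard errors. It is a hypothetical example using Python's statsmodels, not the analysis code or data of the re-analysis; all variable names (outcome, treat, post, district) are assumed for illustration.

```python
# Minimal illustrative sketch (simulated data, hypothetical variable names);
# not the actual analysis code or data from the re-analysed study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# 20 districts: the first 10 "intervention", the last 10 "control",
# each observed once pre- and once post-intervention (aggregate means).
rows = []
for d in range(20):
    treat = int(d < 10)
    district_effect = rng.normal(0, 2)  # shared district-level variation -> clustering
    for post in (0, 1):
        outcome = (50 + 5 * post + 2 * treat + 3 * treat * post
                   + district_effect + rng.normal(0, 1))
        rows.append({"district": d, "treat": treat, "post": post, "outcome": outcome})
df = pd.DataFrame(rows)

# Difference-in-differences: the treat:post interaction is the effect estimate.
model = smf.ols("outcome ~ treat * post", data=df)

naive = model.fit()  # ignores clustering by district
clustered = model.fit(cov_type="cluster",
                      cov_kwds={"groups": df["district"]})  # cluster-robust SEs

print(naive.params["treat:post"], naive.bse["treat:post"])
print(clustered.params["treat:post"], clustered.bse["treat:post"])
```

In such simulated data the two fits can give noticeably different standard errors for the interaction term, which is the point of comparing analyses with and without adjustment for clustering.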
Results: The different analysis methods yielded different estimates of the impact of the Community-Based Newborn Care Package, with incomplete use of the available data and unit-of-analysis errors leading to imprecise effect estimates.
Conclusions: The choice of analysis method can influence the estimated direction, size and statistical significance of the intervention effect. Commonly applied methods for analysing CBA studies can lead to misleading conclusions if the analysis fails to make full use of the collected data or to account for clustering. It is therefore important for systematic reviewers to re-analyse incorrectly analysed CBA studies.