The reporting quality of diagnostic accuracy studies in the urology literature

ID: 

1113

Session: 

Poster session 1 Wednesday: Evidence production and synthesis

Date: 

Wednesday 13 September 2017 - 12:30 to 14:00

Location: 

All authors in correct order:

Gandhi S1, Smith D1
1 Minneapolis VAMC and University of Minnesota, USA
Presenting author and contact person

Presenting author:

Philipp Dahm

Contact person:

Abstract text
Background: Transparent study reporting is a critical aspect of high-quality primary clinical research and of subsequent evidence synthesis. For studies of diagnostic accuracy, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement, initially developed in 2003 and updated in 2015, describes minimum reporting requirements for such studies.

Objectives: To formally assess the reporting quality of diagnostic accuracy studies in the urology literature.

Methods: We performed a PubMed search using the Clinical Queries function, supplemented by hand-searching of four major urology journals (J Urol, Eur Urol, BJU Int, and Urology), for studies published from January through December 2015 that addressed questions of diagnostic accuracy. Two independent reviewers performed study selection using Covidence and abstracted data in duplicate using a piloted form based on the 30 individual STARD 2015 criteria. We performed descriptive statistical analyses using SPSS version 24.

Results: The search yielded 818 studies, of which 67 were reviewed in full text and 63 ultimately met the inclusion criteria. The median number of STARD criteria met was 19.5 (interquartile range: 17.0 to 20.5). Fifteen of the 30 criteria (50%), such as reporting of clinical background (#3; 100%) and study eligibility criteria (#6; 95.2%), were reported by at least 80% of studies. Meanwhile, reporting was poor for 6 of the 30 criteria (20%), namely sample-size considerations (#18; 4.8%); study registration (#28; 4.8%); protocol access (#29; 6.3%); handling of missing data (#16; 7.9%) and indeterminate values (#15; 15.9%); and adverse-event reporting (#25; 15.9%).
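As an illustration only: the summary statistics above (median and interquartile range of per-study STARD criteria counts) can be reproduced with standard descriptive methods. The sketch below uses Python's standard library rather than SPSS, and the counts in it are hypothetical placeholders, not the study's data.

```python
# Illustrative sketch (not the authors' SPSS analysis): median and
# interquartile range of per-study STARD 2015 criteria counts.
import statistics

# Hypothetical placeholder data: number of STARD criteria (of 30) met per study
criteria_met = [17, 18, 19, 20, 21, 16, 22, 19, 20, 18]

median = statistics.median(criteria_met)
# quantiles(n=4) returns the three quartile cut points; Q1 and Q3 bound the IQR
q1, _, q3 = statistics.quantiles(criteria_met, n=4)
print(f"median = {median}, IQR = {q1} to {q3}")
```

In the actual study, the same summary over the 63 included studies yielded a median of 19.5 criteria met (IQR 17.0 to 20.5).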

Conclusions: The reporting quality of diagnostic accuracy studies published in the urology literature varies widely by STARD criterion, with poor reporting for one in five criteria. There is an important need to raise awareness of the importance of transparent reporting of diagnostic accuracy studies, in particular for these select criteria.