Do peer reviewers comment on reporting items as instructed by the journal? A secondary analysis of two randomized trials.
Abstract
OBJECTIVES: Two studies randomizing manuscripts submitted to biomedical journals previously showed that reminding peer reviewers about key reporting items did not improve the reporting quality of published articles. In this secondary analysis of peer reviewer reports, we aimed to assess at which stage the intervention failed.

STUDY DESIGN AND SETTING: We performed an exploratory analysis of peer reviewer reports from two published randomized controlled trials (RCTs) conducted at biomedical journals. The first RCT (CONSORT-PR) assessed adherence to the Consolidated Standards of Reporting Trials (CONSORT) guideline in manuscripts presenting primary RCT results. The second RCT (SPIRIT-PR) included manuscripts presenting RCT protocols and assessed adherence to the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) guideline. In both RCTs, peer reviewers in the control group received no reminder, whereas all reviewers in the intervention group received a reminder of the 10 most important reporting items. For this secondary analysis, we extracted from the peer reviewer reports which of the 10 key reporting items reviewers mentioned as requiring clarification. The main outcome was the between-group difference in the mean proportion of the 10 reporting items for which at least one peer reviewer requested clarification. Furthermore, we assessed how this difference changed (i) when only published manuscripts were considered and (ii) when only requested changes that authors actually implemented were considered.

RESULTS: We assessed peer reviewer reports from 533 manuscripts (n = 265 intervention group; n = 268 control group). Among manuscripts in the intervention group, clarification was requested for 21.1% (95% CI, 18.6%-23.6%) of the 10 reporting items, compared with 13.1% in the control group, a mean difference of 8.0% (95% CI, 4.9%-11.1%). However, this difference diminished to 4.2% when only accepted and published manuscripts were assessed and was further reduced to 2.6% when only changes actually implemented by authors were counted.

CONCLUSION: Reminding peer reviewers to check reporting items increased their focus on reporting guidelines, leading to more reporting-related requests in their reviews. However, the effect was strongly diluted during the peer review process by rejected articles and by requests that authors did not implement.

PLAIN LANGUAGE SUMMARY: When new research is submitted to a journal, other experts in the field (peer reviewers) check the research to make sure it is reliable and clear. One important part of this process is ensuring that researchers follow reporting guidelines about what information should be included in their papers, so that readers can understand how the research was conducted. We wanted to find out whether reminding peer reviewers to focus on the key parts of these guidelines (ie, the 10 most important items) would help improve the reporting quality of published research papers. For this purpose, we conducted two studies in which we randomized manuscripts to either an intervention group or a control group. Peer reviewers of manuscripts in the intervention group received such a reminder (ie, asking them to check whether the 10 most important reporting items were well described in the manuscript), whereas peer reviewers in the control group did not receive a reminder.
In the previously published main results of these studies, we saw that the reporting quality of the published articles did not improve with this intervention. To find out why this approach did not work, we looked more closely at the individual peer reviewer reports and checked how often reviewers asked for these important details and whether authors made the requested changes. We found that the reminders did lead to more requests about reporting items from peer reviewers. However, because a high proportion of manuscripts are rejected during the peer review process and because not all requested improvements are addressed by authors, this effect was no longer visible (ie, it was "diluted") when assessing the published research articles.