Abstract
- OBJECTIVE: To assess the consistency of an index of the scientific quality of research overviews.
- DESIGN: Agreement was measured among nine judges, each of whom assessed the scientific quality of 36 published review articles.
- ITEM SELECTION: An iterative process was used to select ten criteria relating to five key tasks entailed in conducting a research overview.
- SAMPLE: The review articles were drawn from three sampling frames: articles highly rated by criteria external to the study; meta-analyses; and a broad spectrum of medical journals.
- JUDGES: Three categories of judges were used, with three judges in each category: research assistants; clinicians with research training; and experts in research methodology.
- RESULTS: The level of agreement within the three groups of judges was similar for their overall assessment of scientific quality and for six of the nine other items. With four exceptions, agreement among judges within each group and across groups, as measured by the intraclass correlation coefficient (ICC), was greater than 0.5, and 60% (24/40) of the ICCs were greater than 0.7.
- CONCLUSIONS: It was possible to achieve reasonable to excellent agreement for all of the items in the index, including the overall assessment of scientific quality. The implications of these results for practising clinicians and the peer review system are discussed.
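
As a rough illustration of the agreement statistic reported above, the sketch below estimates an intraclass correlation coefficient from a ratings matrix via two-way ANOVA mean squares. The abstract does not state which ICC form the study used, so the choice of ICC(2,1) (two-way random effects, single rater, absolute agreement) is an assumption, and the 36x9 ratings matrix is fabricated purely to mirror the design of 36 articles and nine judges.

```python
# Minimal sketch, NOT the authors' analysis code: ICC(2,1) for an
# (n subjects x k raters) matrix, following the Shrout & Fleiss
# two-way random-effects, absolute-agreement formulation.
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1) for an (n articles x k judges) matrix of scores."""
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-article means
    col_means = ratings.mean(axis=0)   # per-judge means

    # Two-way ANOVA sums of squares
    ss_rows = k * np.sum((row_means - grand_mean) ** 2)
    ss_cols = n * np.sum((col_means - grand_mean) ** 2)
    ss_total = np.sum((ratings - grand_mean) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    # Mean squares
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    # ICC(2,1): (MSR - MSE) / (MSR + (k-1)MSE + k(MSC - MSE)/n)
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Fabricated 7-point ratings: 36 articles rated by 9 judges.
rng = np.random.default_rng(0)
scores = rng.integers(1, 8, size=(36, 9)).astype(float)
print(f"ICC(2,1) = {icc_2_1(scores):.3f}")
```

Under this reading, the abstract's thresholds of 0.5 and 0.7 would correspond to conventional cutoffs for moderate and good reliability of a single judge's rating.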