abstract
- This third article in a seven-part series presents the Core GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach to deciding whether to rate down certainty of evidence due to inconsistency, that is, unexplained variability in results across studies. For binary outcomes, in which relative effects are typically consistent across baseline risks while absolute effects are not, Core GRADE users assess consistency in relative effects; for continuous outcomes, they assess consistency in absolute effects. When planning for the possibility of inconsistent results across studies, systematic review authors using Core GRADE construct a priori hypotheses regarding population or intervention characteristics that may explain the inconsistency. They then judge the magnitude of inconsistency by considering the extent to which point estimates differ and the degree to which confidence intervals overlap. Before deciding whether to rate down, Core GRADE users evaluate where individual study estimates lie in relation to the threshold chosen for the certainty rating (the minimal important difference or the null effect). Finally, they test their subgroup hypotheses and, if a subgroup effect proves credible, provide separate evidence summaries and rate certainty of evidence separately for each subgroup. When they find no credible subgroup effect, they provide a single evidence summary, rating down for inconsistency if necessary.