Features of effective computerised clinical decision support systems: meta-regression of 162 randomised trials


abstract

  • OBJECTIVES: To identify factors that differentiate effective from ineffective computerised clinical decision support systems, in terms of improvements in the process of care or in patient outcomes.
  • DESIGN: Meta-regression analysis of randomised controlled trials.
  • DATA SOURCES: A database of the features and effects of these support systems, derived from 162 randomised controlled trials identified in a recent systematic review. Trialists were contacted to confirm the accuracy of the data and to help prioritise features for testing.
  • MAIN OUTCOME MEASURES: "Effective" systems were defined as those that improved the primary (or at least 50% of secondary) reported outcomes of process of care or patient health. Simple and multiple logistic regression models were used to test characteristics for association with system effectiveness; several sensitivity analyses were also performed.
  • RESULTS: Systems that presented advice within electronic charting or order entry system interfaces were less likely to be effective (odds ratio 0.37, 95% confidence interval 0.17 to 0.80). Systems were more likely to succeed if they provided advice to patients in addition to practitioners (2.77, 1.07 to 7.17), required practitioners to supply a reason for over-riding advice (11.23, 1.98 to 63.72), or were evaluated by their own developers (4.35, 1.66 to 11.44). These findings were robust across different statistical methods, in internal validation, and after adjustment for other potentially important factors.
  • CONCLUSIONS: We identified several factors that could partially explain why some systems succeed and others fail. Presenting decision support within electronic charting or order entry systems was associated with failure compared with other ways of delivering advice. Odds of success were greater for systems that required practitioners to provide reasons when over-riding advice than for systems that did not, and for systems that provided advice concurrently to patients and practitioners. Finally, most systems were evaluated by their own developers, and such evaluations were more likely to show benefit than those conducted by a third party.
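
The main analysis described above amounts to logistic regression at the trial level: each of the 162 trials contributes a binary "effective" outcome and a set of binary feature indicators, and the feature coefficients are exponentiated into the reported odds ratios. As a minimal, hedged sketch only (the variable names and simulated data below are hypothetical and do not reproduce the paper's actual database), a multiple logistic regression of this kind could be run in Python with statsmodels:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical per-trial dataset: one row per randomised trial, a binary
    # "effective" outcome, and binary indicators for candidate system features.
    # The paper's real database of 162 trials is not reproduced here.
    rng = np.random.default_rng(0)
    n = 162
    df = pd.DataFrame({
        "charting_or_order_entry": rng.integers(0, 2, n),   # advice shown in EHR/CPOE interface
        "advice_to_patients": rng.integers(0, 2, n),        # advice given to patients as well
        "override_reason_required": rng.integers(0, 2, n),  # must justify over-riding advice
        "evaluated_by_developers": rng.integers(0, 2, n),   # trial run by the system's developers
    })
    # Synthetic outcome, loosely echoing the direction of the reported effects.
    log_odds = (-0.5
                - 1.0 * df["charting_or_order_entry"]
                + 1.0 * df["advice_to_patients"]
                + 2.4 * df["override_reason_required"]
                + 1.5 * df["evaluated_by_developers"])
    df["effective"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

    # Multiple logistic regression: coefficients on the log-odds scale.
    X = sm.add_constant(df.drop(columns="effective"))
    fit = sm.Logit(df["effective"], X).fit(disp=False)

    # Exponentiate to get odds ratios with 95% confidence intervals,
    # the form in which the abstract reports its results.
    or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
    or_table.columns = ["odds_ratio", "ci_low", "ci_high"]
    print(or_table.round(2))

A "simple" (univariable) regression of the same kind would fit one feature column at a time; the sensitivity analyses mentioned in the abstract are not sketched here.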

publication date

  • February 14, 2013