Best Evidence in Emergency Medicine (BEEM) Rater Scores Correlate With Publications' Future Citations

abstract

  • Background: The "BEEM" (Best Evidence in Emergency Medicine) rater scale was created for emergency physicians (EPs) to evaluate the physician-derived clinical relevance score of recently published, emergency medicine (EM)-related studies. BEEM is therefore designed to help make EPs aware of studies most likely to confirm or change current clinical practice.
    Objectives: The objective was to validate the BEEM rater score as a predictor of literature citation, using a bibliometric construct of clinical relevance to EM based on author-, document-, and journal-level measures (first and last author h-indices, number of authors including corporate and group authors, citations from date of publication to 2011, and journal impact factor scores) and study characteristics (design, category, and sample size).
    Methods: Each month from 2007 through 2012, approximately 200 EPs from around the world voluntarily reviewed the titles and conclusions of recently published EM-related studies identified by BEEM faculty via the McMaster Health Information Research Unit. Using the BEEM rater scale, a reliable seven-item instrument that evaluates the clinical relevance of studies, raters independently assigned BEEM scores to approximately 10 to 20 articles each month. Two investigators independently abstracted the bibliometric indices for these articles. A citation rate for each article was calculated by dividing the Thomson Reuters Web of Science (WoS) total citation count by the number of years in publication. BEEM rater scores were correlated with the citation rate using Spearman's rho. The performance of the BEEM rater score was assessed for each article using negative binomial regression with composite citation count as the criterion standard, while controlling for other independent bibliometric variables in three models.
    Results: The BEEM raters evaluated 605 articles with a mean (±SD) BEEM score of 3.84 (±0.7) and a median BEEM score of 3.85 (interquartile range = 3.38 to 4.30). Articles were primarily therapeutic (59%) and diagnostic (27%), with various designs, including 37% systematic reviews, 32% randomized controlled trials (RCTs), and 30% observational designs. The citation rate and BEEM rater score correlated positively (0.144), while the BEEM rater score and the Journal Citation Report (JCR) impact factor score were minimally correlated (0.053). In the first model, the BEEM rater score significantly predicted WoS citation rate (p < 0.0001) with an odds ratio (OR) of 1.24 (95% confidence interval [CI] = 1.106 to 1.402). In subsequent models adjusting for the JCR impact factor score, the h-indices of the first and last authors, number of authors, and study design, the BEEM rater score was not significant (p = 0.08).
    Conclusions: To the best of our knowledge, the BEEM rater score is the only known measure of clinical relevance. It has high interrater reliability and face validity and correlates with future citations. Future research should assess this instrument against alternative constructs of clinical relevance.
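
The analysis described in the Methods (citation rate computed as WoS citations divided by years in publication, Spearman's rho between that rate and the BEEM rater score, and negative binomial regression on citation counts adjusted for bibliometric covariates) can be illustrated with a minimal sketch. This is not the authors' code: the column names and example values below are hypothetical, and pandas, SciPy, and statsmodels are assumed as the analysis tools rather than confirmed by the source.

```python
# Illustrative sketch of the analysis steps described in the abstract.
# All data and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import spearmanr

# Hypothetical dataset: one row per article.
df = pd.DataFrame({
    "beem_score":    [3.4, 4.1, 3.9, 2.8, 4.5],   # mean BEEM rater score
    "wos_citations": [12, 40, 25, 3, 60],          # total WoS citations
    "years_in_pub":  [4, 5, 3, 2, 5],              # years since publication
    "jcr_impact":    [2.1, 5.3, 3.8, 1.2, 7.4],    # JCR impact factor
})

# Citation rate = total citations divided by years in publication.
df["citation_rate"] = df["wos_citations"] / df["years_in_pub"]

# Spearman's rho between BEEM rater score and citation rate.
rho, p_value = spearmanr(df["beem_score"], df["citation_rate"])
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")

# Negative binomial regression: citation count as the outcome, BEEM score and
# impact factor as predictors, years in publication as the exposure offset.
# (The dispersion parameter is held fixed here; this is only a sketch.)
X = sm.add_constant(df[["beem_score", "jcr_impact"]])
model = sm.GLM(
    df["wos_citations"], X,
    family=sm.families.NegativeBinomial(),
    exposure=df["years_in_pub"],
)
print(model.fit().summary())
```

In a fuller version of this sketch, additional covariates named in the abstract (first and last author h-indices, number of authors, and study design indicators) would be appended to the predictor matrix before fitting.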

authors

  • Carpenter, Christopher R
  • Sarli, Cathy C
  • Fowler, Susan A
  • Kulasegaram, Kulamakan
  • Vallera, Teresa
  • Lapaine, Pierre
  • Schalet, Grant
  • Worster, Andrew

publication date

  • October 2013