How good are clinical MEDLINE searches? A comparative study of clinical end-user and librarian searches
Abstract
The objective of this study was to determine the quality of MEDLINE searches performed by physicians, physician trainees, and expert searchers (clinicians and librarians). The design was an analytic survey with independent replication, set in self-service online searching from medical wards, an intensive care unit, a coronary care unit, an emergency room, and an ambulatory clinic in a 300-bed teaching hospital. Participants were all M.D. clinical clerks, house staff, and attending staff responsible for patients in these settings. Before being given free access to MEDLINE, all participants received a 2-hour small-group class and a 1-hour practice session on MEDLINE searching with GRATEFUL MED. The search questions from 104 randomly selected novice searches were each given to 1 of 13 clinicians with prior search experience and 1 of 3 librarians, who ran independent searches on the same questions (triplicated searches). The unique citations retrieved by the triplicated searches were sent to expert clinicians, who rated their relevance on a 7-point scale. Recall (the number of relevant citations retrieved by an individual search divided by the total number of relevant citations retrieved by all searches on the same topic) and precision (the proportion of citations retrieved by a search that were relevant) were then calculated. Librarians performed significantly better than novices on both measures; they had recall equivalent to, and precision better than, experienced end-users. Unexpectedly, only 20% of relevant citations were retrieved by more than one search in a set of three. The authors conclude that novice searchers using MEDLINE via GRATEFUL MED after brief training achieve relatively low recall and precision. Recall improves with experience, but precision remains suboptimal. Further research is needed to determine the "learning curve," evaluate training interventions, and explore the non-overlapping retrieval of relevant citations by different searchers.
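For clarity (this formalization is not part of the original abstract), the two outcome measures can be written as follows, where $R_i$ denotes the set of relevant citations retrieved by search $i$ on a given topic and $S_i$ the set of all citations that search retrieved; the symbols are introduced here only for illustration:

$$\mathrm{recall}_i = \frac{|R_i|}{|R_1 \cup R_2 \cup R_3|}, \qquad \mathrm{precision}_i = \frac{|R_i|}{|S_i|}$$

The denominator of recall is the pool of relevant citations found by all three searches on the same topic, so this is a relative (pooled) recall rather than recall against the full MEDLINE database.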