Factors underlying performance on written tests of knowledge
Abstract
The study compares two popular forms of written tests: the multiple-choice question (MCQ) test and the Modified Essay Question (MEQ) test. Two factors were varied in the experiment: the format of the questions (multiple choice, directed free response, or open-ended free response) and the context of the questions (embedded in a patient problem or presented in random sequence). Six problems were developed in each version and administered to a total of 36 medical students at three educational levels using a Latin-square design. Each factor in the design showed a significant effect: a difference of 8.7% between MCQ and directed free response, 4.2% between directed and open-ended free response, and 4.3% between problem and random context. However, the correlation of content-based scores across the formats approached unity after correction for attenuation. A process score, based on style and presentation in the undirected format, correlated more strongly with the free-response questions. The results suggest that, although the MCQ and MEQ may assess different skills, there is a very strong relationship between the content scores derived from the two formats. The free-response formats may offer the opportunity to assess other factors related to presentation if scoring procedures are modified. Finally, randomizing questions degrades performance relative to placing them in the problem context.
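The "correction for attenuation" referred to above is Spearman's classical disattenuation formula, which divides the observed correlation by the geometric mean of the two measures' reliabilities. A minimal sketch follows; the correlation and reliability values used are illustrative only, not figures reported by the study:

```python
import math

def disattenuated_correlation(r_xy, r_xx, r_yy):
    """Spearman's correction for attenuation.

    r_xy: observed correlation between the two score sets
    r_xx, r_yy: reliability coefficients of each measure
    Returns an estimate of the correlation between the true scores.
    """
    return r_xy / math.sqrt(r_xx * r_yy)

# Hypothetical example: an observed correlation of 0.80 between MCQ and
# free-response content scores, with reliabilities of 0.85 for each test,
# corrects to about 0.94 -- approaching unity, as the abstract describes.
print(round(disattenuated_correlation(0.80, 0.85, 0.85), 2))
```

Because reliabilities are at most 1, the corrected coefficient is always at least as large as the observed one, which is how a moderate observed correlation can "approach unity" after correction.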