Normative comparison standards for measures of cognition in the Canadian Longitudinal Study on Aging (CLSA): Does applying sample weights make a difference?
Journal Article
Abstract
Large-scale studies present the opportunity to create normative comparison standards that are relevant to populations. Sampling weights applied to the sample data facilitate extrapolation to the population of origin, but normative scores are often developed without these weights because values derived from large samples are presumed to be precise estimates of the population parameters. The present article examines whether applying sample weights when deriving normative comparison standards for measures of cognition would affect the distributions of regression-based normative data drawn from a large population-based study. To address this question, we examined three cognitive measures from the Canadian Longitudinal Study on Aging tracking cohort (N = 14,110, ages 45-84 years at recruitment): Rey Auditory Verbal Learning Test - Immediate Recall, Animal Fluency, and the Mental Alternation Test. The use of sampling weights produced model parameter estimates and cumulative frequency distributions similar to those from unweighted regression analyses. We then randomly sampled progressively smaller subsets from the full database to test the hypothesis that sampling weights would help preserve the estimates obtained from the full sample, but found that the weighted and unweighted estimates remained similar to each other and both became less precise as sample size decreased. These findings suggest that although sampling weights can help mitigate biases arising from sampling procedures, applying weights to adjust for such biases does not appreciably affect the normative data, which lends support to the current practice in the creation of normative data.
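The core comparison described in the abstract, unweighted versus survey-weighted regression for deriving regression-based norms, can be illustrated with a short sketch. This is a minimal illustration and not the authors' analysis or the CLSA data: the predictors, sampling weights, and example score below are simulated assumptions, and the fitting uses statsmodels' OLS and WLS.

```python
# Minimal sketch: compare unweighted OLS with survey-weighted WLS when
# deriving regression-based normative predictions. All variables are
# simulated; sampling weights stand in for design/inflation weights.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Simulated predictors: age (45-85) and years of education.
age = rng.uniform(45, 85, n)
educ = rng.normal(14, 3, n)
# Simulated cognitive score: declines with age, improves with education.
score = 60 - 0.25 * (age - 45) + 0.8 * educ + rng.normal(0, 6, n)
# Hypothetical sampling weights (e.g., inverse selection probabilities).
weights = rng.uniform(0.5, 2.0, n)

X = sm.add_constant(np.column_stack([age, educ]))

unweighted = sm.OLS(score, X).fit()
weighted = sm.WLS(score, X, weights=weights).fit()

print("Unweighted coefficients:", unweighted.params)
print("Weighted coefficients:  ", weighted.params)

# Regression-based norm: express an observed score as a z-score relative to
# the model-predicted mean for a given age/education combination.
new_x = sm.add_constant(np.array([[70.0, 12.0]]), has_constant="add")
predicted = weighted.predict(new_x)[0]
resid_sd = np.sqrt(weighted.scale)  # simplified residual SD estimate
z = (55.0 - predicted) / resid_sd
print(f"Predicted score: {predicted:.1f}, z-score for observed 55: {z:.2f}")
```

Under this kind of setup, the weighted and unweighted coefficients (and the resulting normative z-scores) tend to be close when the outcome model is well specified, which mirrors the pattern the abstract reports for the CLSA measures.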