Background—
A conceptual model that weighs the competing risks of appropriate implantable cardioverter defibrillator (ICD) shock versus mortality may be useful for risk stratification of patients referred for a primary prevention ICD.
Methods and Results—
In a prospective, multicenter, population-based cohort of patients with left ventricular ejection fraction ≤35% referred for a primary prevention ICD, we developed dual risk stratification models to estimate the competing risks of appropriate defibrillator shock versus mortality using a Fine-Gray subdistribution hazard model. Among 7020 patients referred, 3445 underwent defibrillator implantation (79.7% men; median age, 66 years [25th, 75th percentiles: 58, 73]). During 5918 person-years of follow-up, appropriate shock occurred in 204 patients (3.6 shocks/100 person-years) and 292 patients died (4.9 deaths/100 person-years). Competing-risk predictors of appropriate shock included nonsustained ventricular tachycardia, atrial fibrillation, serum creatinine concentration, digoxin or amiodarone use, and QRS duration (with risk peaking near 130 ms). One-year cumulative incidence of appropriate shock was 0.9% in the lowest risk category and 1.7%, 2.5%, 4.9%, and 9.3% in the low, intermediate, high, and highest risk groups, respectively. Hazard ratios for appropriate shock ranged from 4.04 to 7.79 in the highest 3 deciles (all P≤0.001 versus lowest risk). One-year cumulative incidence of death was 0.6%, 1.9%, 3.3%, 6.2%, and 17.7% in the lowest, low, intermediate, high, and highest risk groups, respectively. Mortality hazard ratios ranged from 11.48 to 36.22 in the highest 3 deciles (all P<0.001 versus lowest risk).
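For reference, the Fine-Gray model named above regresses covariates on the subdistribution hazard of each outcome rather than on the cause-specific hazard; a standard statement of the model (notation assumed here, not drawn from the article) is

\[
\bar{\lambda}_k(t \mid Z) \;=\; -\frac{d}{dt}\,\log\{1 - F_k(t \mid Z)\} \;=\; \bar{\lambda}_{k0}(t)\,\exp(\beta^\top Z),
\]

where \(F_k(t \mid Z) = \Pr(T \le t,\ \text{cause} = k \mid Z)\) is the cumulative incidence function for cause \(k\) (eg, \(k = 1\) for appropriate shock, \(k = 2\) for death), so covariate effects act directly on cumulative incidence in the presence of the competing event.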
Conclusions—
Simultaneous estimation of the risks of appropriate shock and mortality can be performed using clinical variables, providing a potential framework for identifying patients who are unlikely to benefit from a prophylactic ICD.
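As a minimal illustration of the competing-risks machinery behind such estimates, the sketch below implements the nonparametric (Aalen-Johansen) cumulative incidence estimator on synthetic data. This is not the authors' code, and Fine-Gray regression itself would typically be fit with a dedicated routine such as crr in the R cmprsk package; all names and the generated data here are hypothetical.

```python
import numpy as np

def cumulative_incidence(time, event, cause):
    """Nonparametric (Aalen-Johansen) cumulative incidence under competing risks.

    time  : follow-up times
    event : event codes (0 = censored; 1, 2, ... = competing causes)
    cause : the cause of interest
    Returns (event_times, cif), the step function CIF_cause(t).
    """
    time = np.asarray(time, dtype=float)
    event = np.asarray(event)
    event_times = np.unique(time[event != 0])   # distinct event times, sorted
    surv = 1.0                                  # all-cause Kaplan-Meier S(t-)
    running = 0.0
    cif = np.empty(event_times.size)
    for j, t in enumerate(event_times):
        at_risk = np.count_nonzero(time >= t)                   # n_j
        d_any = np.count_nonzero((time == t) & (event != 0))    # events, any cause
        d_cause = np.count_nonzero((time == t) & (event == cause))
        running += surv * d_cause / at_risk     # CIF increment: S(t-) * d_kj / n_j
        surv *= 1.0 - d_any / at_risk           # update all-cause survival
        cif[j] = running
    return event_times, cif

# Purely synthetic demo: 0 = censored, 1 = appropriate shock, 2 = death
rng = np.random.default_rng(7)
n = 2000
t = rng.exponential(12.0, n)                            # follow-up times (years)
e = rng.choice([0, 1, 2], size=n, p=[0.55, 0.15, 0.30]) # hypothetical event mix
times, cif_shock = cumulative_incidence(t, e, cause=1)
idx = np.searchsorted(times, 1.0, side="right") - 1     # last event time <= 1 year
print(f"Estimated 1-year cumulative incidence of shock: {cif_shock[idx]:.3f}")
```

Unlike 1 minus the Kaplan-Meier estimate computed for a single endpoint, this estimator treats death and appropriate shock as competing events rather than censoring one on the other, which is why it (and the Fine-Gray regression built on the same cumulative incidence scale) avoids overstating shock risk in patients at high mortality risk.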