Background: Frailty is increasingly common in community-dwelling older adults and increases their risk of adverse outcomes. Risk assessment is implicit in the Aged Care Assessment Teams process, but few studies have considered the factors that influence assessors' decision making or explored the factors that may contribute to their interpretation of risk. Objective: To examine the inter-rater reliability of the Community Assessment of Risk Instrument (CARI), a new risk assessment instrument. Design: A cohort study. Setting and participants: A sample of 50 community-dwelling older adults underwent comprehensive geriatric assessment by two raters: a geriatrician and a registered nurse. Procedure and measurements: Each participant was scored for risk by the two raters using the CARI. The instrument ranks the risk of three adverse outcomes within the next year, namely i) institutionalisation, ii) hospitalisation and iii) death, on a scale from 1 (minimal risk) to 5 (extreme risk). Inter-rater reliability was assessed with Gamma, Spearman correlation and Kappa statistics; internal consistency was assessed with Cronbach's alpha. Results: There were 30 female (mean age 82.23 years) and 20 male (mean age 81.75 years) participants. Items within domains showed good to excellent agreement: the Gamma statistic was >0.77 on 6/7 Mental State items and on 14/15 items in the Activities of Daily Living domain, and in the Medical State domain 6/9 items had Gamma scores >0.80. The global domain scores correlated well (0.88, 0.72 and 0.87), and caregiver network scores were 0.71, 0.73 and 0.51 for the three domains. Inter-rater reliability scores for the global risk scales were 0.86 (institutionalisation) and 0.78 (death); the Gamma statistic for hospitalisation was 0.29, indicating lower inter-rater reliability.
Cronbach's alpha was 0.86 and 0.83 for the Activities of Daily Living domain, 0.51 and 0.42 for the Mental State domain, and 0.23 and 0.10 for the Medical State domain. Conclusions: Overall, the instrument shows good inter-rater reliability. Poor correlations on some items relate to poor communication of clinical data and to variable interpretation based on professional background. The lack of internal consistency in the Medical State domain confirms the discrete nature of these variables.
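For readers unfamiliar with the two summary statistics reported above, they can be sketched as follows. This is a minimal illustration assuming raw item scores are available as lists of numbers per rater; the function names and data are illustrative, not taken from the study, and in practice a statistical package would be used.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale, given one list of scores per item.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    where k is the number of items.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(item) for item in items) / var(totals))


def goodman_kruskal_gamma(x, y):
    """Goodman-Kruskal gamma for two paired ordinal ratings.

    gamma = (concordant - discordant) / (concordant + discordant),
    counted over all pairs of observations; ties are ignored.
    """
    concordant = discordant = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            d = (x[i] - x[j]) * (y[i] - y[j])
            if d > 0:
                concordant += 1
            elif d < 0:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)
```

Two raters who rank every pair of participants in the same order give gamma = 1 regardless of absolute score differences, which is why gamma suits ordinal 1-5 risk scales; Cronbach's alpha, by contrast, is low when items measure heterogeneous constructs, as with the discrete medical conditions noted in the Conclusions.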