Journal article

Human-Clinical AI Agent Collaboration

Abstract

Balancing automation and accountability is fundamental in any healthcare field, particularly under mandates from the world's first AI act. Yet, the act relies on self-assessment. Here, we draw from a half century of cognitive neuroscience theory and analyze emerging computer science principles to develop an actionable blueprint that advances beyond self-assessment protocols for responsible Human-Clinical AI Collaboration. Our framework proactively identifies and mitigates risk through four key contributions: (1) interactive healthcare simulations populated by Clinical AI Agents as experimental testbeds to systematically evaluate human-AI collaboration without exposing patients to harm; (2) cognitive-state-aware AI that adapts its behaviour based on measured physiological signals indicating cognitive load; (3) critical safety mechanisms that enable Clinical AI Agents to disengage when detecting insufficient clinician engagement, preventing dangerous over-reliance; and (4) an emphasis on interpretable models for high-risk decisions and physiologically adaptive explanations. These innovations address the fundamental mismatch between the dynamic nature of human cognition and the static interaction patterns of current Clinical AI systems, anticipating and mitigating both dangerous over-reliance and disengagement from algorithmic insights.

Authors

Kadem M; Al-Khazraji B

Journal

Proceedings of the AAAI Symposium Series, Vol. 6, No. 1, pp. 255–264

Publisher

Association for the Advancement of Artificial Intelligence (AAAI)

Publication Date

August 1, 2025

DOI

10.1609/aaaiss.v6i1.36060

ISSN

2994-4317