AdaCLIP: Adapting CLIP with Hybrid Learnable Prompts for Zero-Shot Anomaly Detection
Abstract
Zero-shot anomaly detection (ZSAD) targets the identification of anomalies
within images from arbitrary novel categories. This study introduces AdaCLIP
for the ZSAD task, leveraging a pre-trained vision-language model (VLM), CLIP.
AdaCLIP incorporates learnable prompts into CLIP and optimizes them through
training on auxiliary annotated anomaly detection data. Two types of learnable
prompts are proposed: static and dynamic. Static prompts are shared across all
images, serving to preliminarily adapt CLIP for ZSAD. In contrast, dynamic
prompts are generated for each test image, providing CLIP with dynamic
adaptation capabilities. The combination of static and dynamic prompts is
referred to as hybrid prompts and yields enhanced ZSAD performance. Extensive
experiments conducted across 14 real-world anomaly detection datasets from
industrial and medical domains indicate that AdaCLIP outperforms other ZSAD
methods and can generalize better to different categories and even domains.
Finally, our analysis highlights the importance of diverse auxiliary data and
optimized prompts for enhanced generalization capacity. Code is available at
https://github.com/caoyunkang/AdaCLIP.
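
The abstract describes the prompt design only at a high level; the sketch below illustrates one plausible way static (shared, learnable) prompts and dynamic (per-image) prompts could be combined into hybrid prompts, assuming a PyTorch setup with CLIP-like token embeddings. The class and parameter names (HybridPromptLearner, n_static, n_dynamic, proj) are illustrative assumptions, not the authors' implementation; the linked repository contains the actual AdaCLIP code.

```python
# Illustrative sketch (not AdaCLIP's implementation): hybrid prompts formed from
# static prompts shared across images and dynamic prompts generated per image.
import torch
import torch.nn as nn


class HybridPromptLearner(nn.Module):
    def __init__(self, embed_dim: int = 768, n_static: int = 4, n_dynamic: int = 4):
        super().__init__()
        # Static prompts: shared across all images, optimized on auxiliary
        # annotated anomaly detection data.
        self.static_prompts = nn.Parameter(torch.randn(n_static, embed_dim) * 0.02)
        # Projector mapping a per-image feature to image-specific (dynamic) prompts.
        self.n_dynamic = n_dynamic
        self.proj = nn.Linear(embed_dim, n_dynamic * embed_dim)

    def forward(self, image_feat: torch.Tensor) -> torch.Tensor:
        """image_feat: (B, embed_dim) global feature of the test image."""
        B, D = image_feat.shape
        # Dynamic prompts: generated from the current image, so the adaptation
        # changes from image to image.
        dynamic = self.proj(image_feat).view(B, self.n_dynamic, D)
        # Hybrid prompts: static prompts broadcast to the batch and concatenated
        # with the dynamic ones; these would then be injected into CLIP's
        # token stream.
        static = self.static_prompts.unsqueeze(0).expand(B, -1, -1)
        return torch.cat([static, dynamic], dim=1)  # (B, n_static + n_dynamic, D)


if __name__ == "__main__":
    learner = HybridPromptLearner()
    fake_image_feat = torch.randn(2, 768)  # stand-in for a CLIP image feature
    prompts = learner(fake_image_feat)
    print(prompts.shape)  # torch.Size([2, 8, 768])
```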
Authors
Cao Y; Zhang J; Frittoli L; Cheng Y; Shen W; Boracchi G