Preprint

Keep It Light! Simplifying Image Clustering Via Text-Free Adapters

Abstract

In the era of pre-trained models, effective classification can often be achieved using simple linear probing or lightweight readout layers. In contrast, many competitive clustering pipelines have a multi-modal design, leveraging large language models (LLMs) or other text encoders, and text-image pairs, which are often unavailable in real-world downstream applications. Additionally, such frameworks are generally complicated to train and require substantial computational resources, making widespread adoption challenging. In this work, we show that in deep clustering, competitive performance with more complex state-of-the-art methods can be achieved using a text-free and highly simplified training pipeline. In particular, our approach, Simple Clustering via Pre-trained models (SCP), trains only a small cluster head while leveraging pre-trained vision model feature representations and positive data pairs. Experiments on benchmark datasets, including CIFAR-10, CIFAR-20, CIFAR-100, STL-10, ImageNet-10, and ImageNet-Dogs, demonstrate that SCP achieves highly competitive performance. Furthermore, we provide a theoretical result explaining why, at least under ideal conditions, additional text-based embeddings may not be necessary to achieve strong clustering performance in vision.
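To make the described setup concrete, below is a minimal illustrative sketch of the kind of pipeline the abstract outlines: a frozen pre-trained vision encoder, a small trainable cluster head, and a clustering objective computed on positive (augmented) pairs. This is not the authors' exact SCP implementation; the names ClusterHead and clustering_loss, the 768-dimensional feature size, the agreement-plus-entropy objective, and all hyperparameters are assumptions introduced only for illustration.

    # Illustrative sketch, not the exact SCP pipeline. Assumes a frozen
    # pre-trained vision encoder whose features are already computed; only a
    # small cluster head is trained, using positive (augmented) pairs.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ClusterHead(nn.Module):
        """Lightweight readout mapping frozen features to soft cluster assignments."""
        def __init__(self, feat_dim: int, n_clusters: int, hidden: int = 512):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, n_clusters),
            )

        def forward(self, feats: torch.Tensor) -> torch.Tensor:
            return F.softmax(self.mlp(feats), dim=-1)

    def clustering_loss(p1: torch.Tensor, p2: torch.Tensor, entropy_weight: float = 1.0):
        """Assumed objective: positive pairs agree on assignments; entropy term avoids collapse."""
        # Agreement term: cross-entropy between the two views' soft assignments.
        agreement = -(p1 * torch.log(p2.clamp_min(1e-8))).sum(dim=-1).mean()
        # Entropy of the mean assignment: keeps mass spread over clusters.
        mean_p = p1.mean(dim=0)
        entropy = -(mean_p * torch.log(mean_p.clamp_min(1e-8))).sum()
        return agreement - entropy_weight * entropy

    # Usage: feats_v1 / feats_v2 stand in for frozen-encoder features of two
    # augmented views of the same images (a positive pair).
    backbone_dim, n_clusters, batch = 768, 10, 32
    head = ClusterHead(backbone_dim, n_clusters)
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

    feats_v1 = torch.randn(batch, backbone_dim)                 # placeholder features
    feats_v2 = feats_v1 + 0.05 * torch.randn_like(feats_v1)     # positive pair

    optimizer.zero_grad()
    loss = clustering_loss(head(feats_v1), head(feats_v2))
    loss.backward()
    optimizer.step()

Because the backbone stays frozen, the trainable parameter count is limited to the small head, which is what keeps a pipeline of this kind lightweight relative to multi-modal clustering frameworks.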

Authors

Li Y; Sáez de Ocáriz Borde H; Kratsios A; McNicholas PD

Publication date

December 19, 2025

DOI

10.48550/arXiv.2502.04226

Preprint server

arXiv
