Chapter

Hybrid CNN for Efficient Content-Based Image Retrieval

Abstract

Content-Based Image Retrieval (CBIR) is an important area in multimedia and computer vision, enabling image search and retrieval based on visual content rather than metadata. This paper proposes the Hybrid Deep Feature Fusion (HDFF) framework, a hybrid Convolutional Neural Network (CNN) approach to efficient CBIR that combines the pre-trained ResNet18 and GoogleNet models. The HDFF pipeline begins with feature extraction, where ResNet18 and GoogleNet separately extract deep features from images in the Corel-1K dataset. These features are then fused and reduced via Principal Component Analysis (PCA) to minimize dimensionality while preserving the most discriminative information. The resulting compact feature set is used for two tasks: classification and retrieval. A Gaussian Support Vector Machine (SVM) is employed for classification, achieving an accuracy of 99%. For retrieval, similarity distance metrics are applied, yielding a precision of 88.8%, a recall of 72%, and a mean average precision (mAP) of 79%. The HDFF framework outperforms state-of-the-art methods such as MIFM, FDLNP, and MRHLFF, demonstrating its robustness and scalability as an advanced CBIR solution. The results confirm that HDFF significantly enhances image retrieval and classification performance, making it a superior alternative to both traditional and modern CBIR techniques.
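The pipeline described in the abstract (separate feature extraction, fusion by concatenation, PCA reduction, then an RBF-kernel SVM for classification and similarity ranking for retrieval) can be sketched as below. This is a minimal illustration, not the authors' implementation: the random arrays stand in for ResNet18 (512-d) and GoogleNet (1024-d) features, and the specific dimensions, the number of PCA components, and the use of cosine similarity as the distance metric are all assumptions for demonstration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
n_images, n_classes = 200, 10
labels = rng.integers(0, n_classes, n_images)

# Stand-ins for deep features from the two pre-trained CNNs
# (in the paper these would come from ResNet18 and GoogleNet).
feat_resnet = rng.normal(size=(n_images, 512)) + labels[:, None]
feat_google = rng.normal(size=(n_images, 1024)) + labels[:, None]

# 1) Fuse the two feature sets by concatenation.
fused = np.hstack([feat_resnet, feat_google])          # shape (200, 1536)

# 2) Reduce dimensionality with PCA while keeping discriminative info.
pca = PCA(n_components=64).fit(fused)
compact = pca.transform(fused)                         # shape (200, 64)

# 3) Classification with a Gaussian (RBF-kernel) SVM.
svm = SVC(kernel="rbf").fit(compact, labels)

# 4) Retrieval: rank the gallery by similarity to a query image.
query, gallery = compact[:1], compact[1:]
sims = cosine_similarity(query, gallery).ravel()
top5 = np.argsort(-sims)[:5] + 1                       # indices of 5 nearest images
```

The same compact feature set drives both tasks, which is what makes the fused-then-reduced representation efficient: the SVM and the retrieval ranking share one PCA projection.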

Authors

Alrahhal M; AlShabi M; Bonny T; Gadsden SA

Book title

Proceedings of IEMTRONICS 2025

Series

Lecture Notes in Electrical Engineering

Volume

1468

Pagination

pp. 547-561

Publisher

Springer Nature

Publication Date

January 1, 2026

DOI

10.1007/978-981-95-0433-6_36