
Preserving Details in Darkness: A VQ-VAE Based Approach with SSIM Loss for Low-Light Image Enhancement

Abstract

Low-light image enhancement is a vital but challenging task in computer vision, with numerous real-world applications. In this paper, we propose a novel method that leverages the Vector Quantized Variational AutoEncoder (VQ-VAE) framework to address this challenge effectively. VQ-VAE is chosen as the foundational framework because its compact and expressive discrete latent representations make it well suited to preserving critical details and textures in images, especially under low-light conditions. Our model comprises two main components: an encoder and a normal-light image generator, each playing a vital role in producing detailed and accurate reconstructions of the input. We also propose a comprehensive loss function tailored to low-light image enhancement, combining reconstruction loss, vector quantization loss, and Structural Similarity Index Measure (SSIM) loss. The SSIM term encourages the preservation and enhancement of essential visual features, in line with human perceptual requirements. Experimental results on two public datasets, LOL and LOL-v2, demonstrate that our method significantly improves low-light image enhancement and surpasses traditional VQ-VAE approaches, achieving PSNR/SSIM of 21.46/0.815 on LOL and 17.18/0.730 on LOL-v2.
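To illustrate how the three loss terms named in the abstract could fit together, here is a minimal NumPy sketch. It is not the authors' implementation: the weighting factors `beta` and `lam` are hypothetical, the SSIM is computed globally over the whole image rather than with the usual sliding window, and the stop-gradient operators of the standard VQ-VAE objective are omitted since NumPy has no autograd.

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified single-window SSIM over the entire image.

    Real SSIM is typically averaged over local windows; this global
    variant is only meant to show the structure of the formula.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def combined_loss(recon, target, z_e, z_q, beta=0.25, lam=0.5):
    """Sketch of a reconstruction + VQ + SSIM objective.

    recon  : enhanced (generated) image
    target : normal-light ground-truth image
    z_e    : encoder output in latent space
    z_q    : nearest codebook vectors for z_e
    beta, lam : hypothetical weights, not taken from the paper
    """
    # Reconstruction term: pixel-wise MSE between output and ground truth
    rec = np.mean((recon - target) ** 2)
    # VQ terms: codebook loss plus commitment loss
    # (stop-gradients of the standard VQ-VAE formulation omitted here)
    vq = np.mean((z_q - z_e) ** 2) + beta * np.mean((z_e - z_q) ** 2)
    # SSIM term: 1 - SSIM, so higher structural similarity lowers the loss
    ssim_l = 1.0 - ssim_global(recon, target)
    return rec + vq + lam * ssim_l
```

With identical input and target images (and matching latents), every term vanishes, which is a quick sanity check that each component is a proper distance-like penalty.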

Authors

Koohestani F; Babak ZNS; Karimi N; Samavi S

Volume

00

Pagination

pp. 342-348

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

May 31, 2024

DOI

10.1109/aiiot61789.2024.10578944

Name of conference

2024 IEEE World AI IoT Congress (AIIoT)