Conference

Low-Light Image Enhancement via Generative Perceptual Priors

Abstract

Although significant progress has been made in enhancing visibility, retrieving texture details, and mitigating noise in Low-Light (LL) images, the challenge persists in applying current Low-Light Image Enhancement (LLIE) methods to real-world scenarios, primarily due to the diverse illumination conditions encountered. Furthermore, the quest for generating enhancements that are visually realistic and attractive remains an underexplored realm. In response to these challenges, we present a novel LLIE framework with the guidance of Generative Perceptual Priors (GPP-LLIE) derived from vision-language models (VLMs). Specifically, we first propose a pipeline that guides VLMs to assess multiple visual attributes of the LL image and quantify the assessment to output the global and local perceptual priors. Subsequently, to incorporate these generative perceptual priors to benefit LLIE, we introduce a transformer-based backbone in the diffusion process, and develop a new layer normalization (GPP-LN) and an attention mechanism (LPP-Attn) guided by global and local perceptual priors. Extensive experiments demonstrate that our model outperforms current SOTA methods on paired LL datasets and exhibits superior generalization on real-world data.
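The abstract names GPP-LN, a layer normalization modulated by a global perceptual prior quantified from a VLM assessment. The following is a minimal NumPy sketch of that general idea, not the paper's implementation: the function name, the scalar-prior input, and the fixed modulation coefficients are illustrative assumptions.

```python
import numpy as np

def gpp_layer_norm(x, global_prior, eps=1e-5):
    """Illustrative GPP-LN sketch: a layer norm whose affine scale and
    shift are conditioned on a scalar global perceptual prior (e.g. a
    quantified VLM quality score in [0, 1]). The modulation coefficients
    below are placeholders, not values from the paper."""
    # Standard layer normalization over the feature dimension.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # Assumed prior-conditioned affine parameters.
    gamma = 1.0 + 0.1 * global_prior   # scale grows with the prior score
    beta = 0.05 * global_prior         # shift grows with the prior score
    return gamma * x_hat + beta
```

In the paper the priors condition a transformer backbone inside a diffusion process; the sketch only shows the basic mechanism of letting an externally quantified score steer the normalization's affine parameters.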

Authors

Zhou H; Dong W; Liu X; Zhang Y; Zhai G; Chen J

Volume

39

Pagination

pp. 10752-10760

Publisher

Association for the Advancement of Artificial Intelligence (AAAI)

Publication Date

April 11, 2025

DOI

10.1609/aaai.v39i10.33168

Conference proceedings

Proceedings of the AAAI Conference on Artificial Intelligence

Issue

10

ISSN

2159-5399