
Towards Scale-Aware Low-Light Enhancement Via Structure-Guided Transformer Design

Abstract

Current Low-light Image Enhancement (LLIE) techniques predominantly rely on either direct Low-Light (LL) to Normal-Light (NL) mappings or guidance from semantic features or illumination maps. Nonetheless, the intrinsic ill-posedness of LLIE and the difficulty in retrieving robust semantics from heavily corrupted images hinder their effectiveness in extremely low-light environments. To tackle this challenge, we present SG-LLIE, a new multi-scale CNN-Transformer hybrid framework guided by structure priors. Rather than employing pre-trained models to extract semantics or illumination maps, we extract robust structure priors based on illumination-invariant edge detectors. Moreover, we develop a CNN-Transformer Hybrid Structure-Guided Feature Extractor (HSGFE) module at each scale within the UNet encoder-decoder architecture. Besides the CNN blocks, which excel at multi-scale feature extraction and fusion, we introduce a Structure-Guided Transformer Block (SGTB) in each HSGFE that incorporates structural priors to modulate the enhancement process. Extensive experiments show that our method achieves state-of-the-art performance on several LLIE benchmarks in both quantitative metrics and visual quality. Our solution ranks second in the NTIRE 2025 Low-Light Enhancement Challenge. Code is released at https://github.com/minyan8/imagine.
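The abstract mentions structure priors obtained from illumination-invariant edge detectors, but does not specify the detector. One plausible choice (an assumption for illustration, not the paper's actual method) is the gradient of the log-intensity image: under a Retinex-style model where intensity factors into reflectance times a slowly varying illumination, log-domain gradients are largely unaffected by global brightness changes. A minimal sketch:

```python
import numpy as np

def structure_prior(img, eps=1e-6):
    """Hypothetical illumination-invariant edge prior (illustrative only).

    Under a Retinex-style model I = R * L, with L slowly varying,
    gradients of log(I) depend mostly on the reflectance R, so a
    global brightness change barely alters the resulting edge map.
    """
    log_img = np.log(img.astype(np.float64) + eps)  # log domain
    gy, gx = np.gradient(log_img)                   # finite-difference gradients
    mag = np.sqrt(gx ** 2 + gy ** 2)                # edge magnitude
    return mag / (mag.max() + eps)                  # normalize to [0, 1]
```

Because a multiplicative brightness factor becomes an additive constant in the log domain, it vanishes under differentiation; the same prior is therefore produced for a well-lit image and a dimmed copy of it.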

Authors

Dong W; Min Y; Zhou H; Chen J

Volume

00

Pagination

pp. 1454-1461

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

June 12, 2025

DOI

10.1109/cvprw67362.2025.00135

Name of conference

2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
