Conference

Deep CNN-Based Pre-Encoding Perceptual Quality Control and Prediction

Abstract

The unavoidable use of lossy compression introduces distortion and degrades the perceptual quality of video. Predicting perceptual quality before compression is therefore essential for optimizing compression parameters, such as the quantization parameter (QP), and for allocating bandwidth efficiently. This paper presents three intra-frame (I-frame) perceptual quality prediction methods based on a deep CNN structure and integrated with the High Efficiency Video Coding (HEVC, H.265) reference codec. The VMAF index is used to measure the perceptual quality of video samples, and an end-to-end CNN performs spatial feature extraction for quality prediction. The proposed methods are designed based on our experimental observations and are evaluated on 17 video samples; the results show reliable, accurate performance of the approaches.
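
The abstract does not describe the network itself, so the sketch below is only a rough illustration of the idea: regressing a VMAF-like score from an I-frame before encoding, so that QP or bandwidth can be chosen accordingly. It assumes PyTorch; the class name FrameQualityCNN, all layer sizes, and the input shape are illustrative assumptions, not the authors' architecture.

# Illustrative sketch only: the paper does not disclose its architecture.
# A minimal PyTorch CNN that maps the luma plane of one I-frame to a
# VMAF-like score in [0, 100], standing in for pre-encoding perceptual
# quality prediction. All names and layer sizes are assumptions.
import torch
import torch.nn as nn


class FrameQualityCNN(nn.Module):
    """Hypothetical spatial-feature CNN mapping one luma frame to a score."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),      # global pooling -> fixed-size descriptor
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
            nn.Sigmoid(),                 # squash to (0, 1), scaled to VMAF range below
        )

    def forward(self, luma: torch.Tensor) -> torch.Tensor:
        # luma: (batch, 1, H, W), values normalized to [0, 1]
        return 100.0 * self.regressor(self.features(luma))


if __name__ == "__main__":
    model = FrameQualityCNN()
    frame = torch.rand(1, 1, 720, 1280)   # one normalized 720p luma plane
    predicted_vmaf = model(frame)
    print(f"Predicted VMAF (illustrative): {predicted_vmaf.item():.1f}")

In a pre-encoding pipeline, such a predicted score could feed a simple rule (or lookup table) that selects the QP expected to meet a target quality level before the frame is passed to the HEVC reference encoder.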

Authors

Jenab M; Shirani S

Pagination

pp. 3558-3562

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

October 11, 2023

DOI

10.1109/icip49359.2023.10222417

Name of conference

2023 IEEE International Conference on Image Processing (ICIP)