Journal article

Multi-Modality Deep Restoration of Extremely Compressed Face Videos

Abstract

Arguably the most common and salient object in daily video communications is the talking head, as encountered in social media, virtual classrooms, teleconferences, news broadcasting, talk shows, etc. When communication bandwidth is limited by network congestion or cost effectiveness, compression artifacts in talking head videos are inevitable. The resulting video quality degradation is highly visible and objectionable due to the high acuity of the human visual system to faces. To solve this problem, we develop a multi-modality deep convolutional neural network (DCNN) method for restoring face videos that are aggressively compressed. The main innovation is a new DCNN architecture that incorporates known priors of multiple modalities: the video-synchronized speech signal and semantic elements of the compression code stream, including motion vectors, the code partition map, and quantization parameters. These priors correlate strongly with the latent video and hence enhance the capability of deep learning to remove compression artifacts. Ample empirical evidence is presented to validate the superior performance of the proposed DCNN method on face videos over existing state-of-the-art methods.
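The abstract describes fusing the decoded video with codec side information (motion vectors, partition map, quantization parameters) and a synchronized speech signal inside a restoration network. The following is a minimal PyTorch sketch of that general idea only; the module names, channel sizes, and concatenation/gating fusion strategy are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class MultiModalRestorer(nn.Module):
    """Illustrative sketch: restore a decoded face frame using codec
    side information and a speech embedding as priors. All channel
    sizes and the fusion scheme here are assumptions for clarity."""

    def __init__(self, speech_dim=128, feat=64):
        super().__init__()
        # Decoded frame (3 ch) + QP map (1 ch) + motion vectors (2 ch)
        # + partition map (1 ch) stacked along the channel axis.
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + 1 + 2 + 1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        # Project the speech embedding to a per-channel gate.
        self.speech_proj = nn.Linear(speech_dim, feat)
        self.decoder = nn.Sequential(
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 3, 3, padding=1),
        )

    def forward(self, frame, qp_map, mv, part_map, speech_emb):
        # Fuse the codec-stream priors by channel concatenation.
        x = torch.cat([frame, qp_map, mv, part_map], dim=1)
        h = self.encoder(x)
        # Modulate spatial features with the speech prior (broadcast).
        gate = torch.sigmoid(self.speech_proj(speech_emb))[:, :, None, None]
        h = h * gate
        # Predict a residual correction to the decoded frame.
        return frame + self.decoder(h)

# Usage with dummy tensors (batch of 2, 256x256 frames):
model = MultiModalRestorer()
frame = torch.rand(2, 3, 256, 256)
qp = torch.rand(2, 1, 256, 256)
mv = torch.rand(2, 2, 256, 256)
part = torch.rand(2, 1, 256, 256)
speech = torch.rand(2, 128)
restored = model(frame, qp, mv, part, speech)
print(restored.shape)  # torch.Size([2, 3, 256, 256])
```

Predicting a residual rather than the full frame is a common design choice in compression-artifact removal, since the decoded frame is already a close approximation of the target.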

Authors

Zhang X; Wu X

Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 45, No. 2, pp. 2024–2037

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

February 1, 2023

DOI

10.1109/TPAMI.2022.3157388

ISSN

0162-8828