Video Frame Interpolation Transformer

Abstract

Existing methods for video interpolation rely heavily on deep convolutional neural networks and thus suffer from their intrinsic limitations, such as content-agnostic kernel weights and a restricted receptive field. To address these issues, we propose a Transformer-based video interpolation framework that allows content-aware aggregation weights and considers long-range dependencies via self-attention operations. To avoid the high computational cost of global self-attention, we introduce the concept of local attention into video interpolation and extend it to the spatial-temporal domain. Furthermore, we propose a space-time separation strategy that saves memory and also improves performance. In addition, we develop a multi-scale frame synthesis scheme to fully realize the potential of Transformers. Extensive experiments demonstrate that the proposed model performs favorably against state-of-the-art methods, both quantitatively and qualitatively, on a variety of benchmark datasets. The code and models are released at https://github.com/zhshi0816/Video-Frame-Interpolation-Transformer.
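For illustration only, the sketch below shows one way the space-time separated local attention described in the abstract could be structured in PyTorch; it is not the released implementation, and the tensor layout (B, T, H, W, C), window size, and head count are assumptions rather than values from the paper.

# Minimal sketch (assumed design, not the authors' code) of space-time
# separated local window attention: spatial attention inside each w x w
# window per frame, followed by temporal attention across frames per pixel.
import torch
import torch.nn as nn

class SeparatedSTWindowAttention(nn.Module):
    def __init__(self, dim, heads=4, window=8):
        super().__init__()
        self.window = window
        # Attention within each frame's local spatial window.
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Attention across the temporal dimension at each spatial location.
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):  # x: (B, T, H, W, C); H and W divisible by window
        B, T, H, W, C = x.shape
        w = self.window
        # Spatial step: group pixels into w x w windows and attend within each.
        xs = x.view(B, T, H // w, w, W // w, w, C)
        xs = xs.permute(0, 1, 2, 4, 3, 5, 6).reshape(-1, w * w, C)
        xs, _ = self.spatial_attn(xs, xs, xs)
        xs = xs.reshape(B, T, H // w, W // w, w, w, C)
        xs = xs.permute(0, 1, 2, 4, 3, 5, 6).reshape(B, T, H, W, C)
        # Temporal step: attend across the T frames at every spatial position.
        xt = xs.permute(0, 2, 3, 1, 4).reshape(-1, T, C)
        xt, _ = self.temporal_attn(xt, xt, xt)
        return xt.reshape(B, H, W, T, C).permute(0, 3, 1, 2, 4)

# Example usage with 4 input frames at 64x64 resolution and 32 channels.
if __name__ == "__main__":
    attn = SeparatedSTWindowAttention(dim=32, heads=4, window=8)
    frames = torch.randn(2, 4, 64, 64, 32)
    print(attn(frames).shape)  # torch.Size([2, 4, 64, 64, 32])

Separating the spatial and temporal attention steps keeps the attended sequence lengths at w*w and T respectively, rather than T*w*w for a joint spatio-temporal window, which is the memory saving the abstract refers to.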

Authors

Shi Z; Xu X; Liu X; Chen J; Yang M-H

Pagination

pp. 17461-17470

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

June 24, 2022

DOI

10.1109/cvpr52688.2022.01696

Name of conference

2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)