
Meta-Auxiliary Learning for Future Depth Prediction in Videos

Abstract

We consider a new problem of future depth prediction in videos. Given a sequence of observed frames in a video, the goal is to predict the depth map of a future frame that has not yet been observed. Depth estimation plays a vital role in scene understanding and decision-making in intelligent systems, and predicting future depth maps can be valuable for autonomous vehicles to anticipate the behaviours of surrounding objects. Our proposed model for this problem has a two-branch architecture. One branch performs the primary task of future depth prediction; the other performs an auxiliary task of image reconstruction. The auxiliary branch acts as a regularizer. Inspired by recent work on test-time adaptation, we use the auxiliary task during testing to adapt the model to a specific test video. We also propose a novel meta-auxiliary learning scheme that trains the model specifically for effective test-time adaptation. Experimental results demonstrate that our proposed approach outperforms alternative methods.
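The core idea the abstract describes, adapting a shared model at test time using only a self-supervised auxiliary loss, can be illustrated with a minimal numerical sketch. This is not the paper's architecture: the linear "encoder" `W`, "decoder" `D`, and the single-vector stand-in for the observed frames are all hypothetical simplifications chosen so the gradient can be written by hand. The point is only the mechanism: since no ground-truth depth exists for the test video, the shared parameters are updated by descending the reconstruction loss alone.

```python
import numpy as np

# Conceptual sketch of test-time adaptation via an auxiliary task.
# W: shared encoder weights (hypothetical), D: reconstruction decoder
# (hypothetical). At test time we minimize the self-supervised
# reconstruction loss ||D W x - x||^2 with respect to W.
rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=(d, d)) * 0.1   # shared encoder (toy)
D = rng.normal(size=(d, d)) * 0.1   # reconstruction head (toy)

def recon_loss(W, x):
    r = D @ (W @ x) - x
    return float(r @ r)

def adapt(W, x, lr=0.01, steps=20):
    # Gradient of ||D W x - x||^2 w.r.t. W is 2 D^T (D W x - x) x^T,
    # applied as a few plain gradient-descent steps.
    for _ in range(steps):
        r = D @ (W @ x) - x
        W = W - lr * 2.0 * np.outer(D.T @ r, x)
    return W

x = rng.normal(size=d)               # stand-in for the observed test frames
before = recon_loss(W, x)
W_adapted = adapt(W, x)
after = recon_loss(W_adapted, x)
assert after < before  # adaptation reduced the auxiliary loss on this sample
```

In the paper's setting the primary depth branch shares the adapted encoder, so improving the auxiliary reconstruction on the test video is intended to transfer to better depth predictions; the meta-auxiliary training objective is what makes that transfer effective, which this toy sketch does not model.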

Authors

Liu H; Chi Z; Yu Y; Wang Y; Chen J; Tang J

Volume

00

Pagination

pp. 5745-5754

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

January 7, 2023

DOI

10.1109/wacv56688.2023.00571

Name of conference

2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
