Journal article

Aggregation of Rich Depth-Aware Features in a Modified Stacked Generalization Model for Single Image Depth Estimation

Abstract

Estimating scene depth from a single monocular image is a crucial component of many computer vision tasks, enabling further applications such as robot vision, 3-D modeling, and, above all, 2-D to 3-D image/video conversion. Since an infinite number of possible world scenes can produce the same single image, single image depth estimation is a highly challenging task. This paper tackles this ambiguous problem by using the merits of both the global and local information (structures) of a scene. To this end, we formulate single image depth estimation as a regression problem on rich depth-related features that describe effective monocular cues. The relationship between these image features and depth values is learned by a model inspired by a modified stacked generalization scheme. The experiments demonstrate results that are competitive with existing data-driven approaches in both quantitative and qualitative analyses, while using a remarkably simpler approach than previous works.
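For readers unfamiliar with stacked generalization, the sketch below illustrates the classic scheme the model builds on: level-0 regressors are trained on depth-aware features, and a level-1 meta-regressor combines their cross-validated predictions. The feature dimensions, estimator choices, and variable names here are illustrative assumptions only, and the paper's modified scheme differs in details given in the full text.

```python
# Minimal sketch of classic stacked generalization for depth regression.
# Features, estimators, and data are placeholders, not the authors' pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# X: rows are image patches/superpixels described by depth-aware monocular
# features (e.g., texture, gradient, color statistics); y: ground-truth depths.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64))           # placeholder feature matrix
y = rng.uniform(0.5, 10.0, size=2000)     # placeholder depth values (meters)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Level-0 (base) learners capture complementary structure in the features;
# a level-1 meta-learner combines their out-of-fold predictions.
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
        ("svr", SVR(C=1.0)),
    ],
    final_estimator=Ridge(alpha=1.0),
    cv=5,  # out-of-fold predictions serve as meta-features
)
stack.fit(X_train, y_train)
predicted_depth = stack.predict(X_test)
```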

Authors

Mohaghegh H; Karimi N; Soroushmehr SMR; Samavi S; Najarian K

Journal

IEEE Transactions on Circuits and Systems for Video Technology, Vol. 29, No. 3, pp. 683–697

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

March 1, 2019

DOI

10.1109/tcsvt.2018.2808682

ISSN

1051-8215
