
Single Image Depth Estimation Using Joint Local-Global Features

Abstract

Inferring scene depth from a single monocular image is an essential component of several computer vision applications, such as 3D modeling and robotics, yet it is an ill-posed problem. Previous efforts to tackle this challenging problem have exploited only global or only local depth-aware properties. We propose a model that incorporates both, yielding significantly more accurate depth estimates than either class of properties alone. Specifically, we formulate single-image depth estimation as a $K$-nearest-neighbor search problem at both the image level and the patch level. At each level, a set of rich depth-aware features describing monocular depth cues is used in a nearest-neighbor regression model. Comparing results with and without patch-based fusion demonstrates the importance of our joint local-global framework. The experimental results also show superior performance over existing data-driven approaches, in both quantitative and qualitative analyses, with a significantly simpler algorithm.
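The sketch below illustrates the general idea described in the abstract: retrieve the $K$ nearest training examples at the image level and at the patch level, regress depth from each set of neighbors, and fuse the two estimates. It is a minimal illustration only, not the authors' implementation; the feature extractors, the uniform neighbor averaging, and the weighted fusion parameter `alpha` are assumptions introduced here for clarity.

```python
# Illustrative joint local-global KNN depth regression (not the paper's exact method).
# Assumes precomputed global (image-level) and local (patch-level) feature vectors
# for a training database with known depth maps.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def knn_depth_estimate(query_feat, train_feats, train_depths, k=5):
    """Average the depth maps of the K nearest training samples (simple regression)."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    _, idx = nn.kneighbors(query_feat[None, :])
    return train_depths[idx[0]].mean(axis=0)


def joint_local_global_depth(global_feat, patch_feats, patch_coords,
                             db_global_feats, db_depth_maps,
                             db_patch_feats, db_patch_depths,
                             out_shape, patch_size, k=5, alpha=0.5):
    # Global pass: retrieve K similar images and average their full depth maps.
    global_depth = knn_depth_estimate(global_feat, db_global_feats, db_depth_maps, k)

    # Local pass: retrieve K similar patches for every query patch and
    # accumulate the averaged patch depths at their image locations.
    local_depth = np.zeros(out_shape)
    counts = np.zeros(out_shape)
    nn = NearestNeighbors(n_neighbors=k).fit(db_patch_feats)
    _, idx = nn.kneighbors(patch_feats)
    for (r, c), neighbors in zip(patch_coords, idx):
        est = db_patch_depths[neighbors].mean(axis=0)  # (patch_size, patch_size)
        local_depth[r:r + patch_size, c:c + patch_size] += est
        counts[r:r + patch_size, c:c + patch_size] += 1
    local_depth /= np.maximum(counts, 1)

    # Fuse the global and local estimates; a weighted average is an assumption here.
    return alpha * global_depth + (1 - alpha) * local_depth
```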

Authors

Mohaghegh H; Karimi N; Soroushmehr SMR; Samavi S; Najarian K

Pagination

pp. 727-732

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

January 1, 2016

DOI

10.1109/icpr.2016.7899721

Name of conference

2016 23rd International Conference on Pattern Recognition (ICPR)