Journal article

A new fast approach to nonparametric scene parsing

Abstract

Scene parsing is a challenging research area in computer vision; it assigns a semantic label to each pixel in an image. Most scene parsing approaches are parametric: they require a model acquired through a learning stage. In this paper, a new nonparametric approach to scene parsing is proposed that does not require a learning stage. Existing nonparametric approaches are based on patch correspondence; the proposed method does not require explicit patch matching, which makes it fast and effective. The proposed approach has two parts. In the first part, a new generative approach is proposed to transfer semantic labels from a training image to an unlabelled test image. To do this, a graphical model is constructed over the regions of both the training and test images. Then, based on the proposed graphical model, a quadratic convex cost function is defined on the likelihood probability of each region, such that both contextual information and object-level information are taken into account. In the second part, the proposed knowledge-transfer method is used to build a new nonparametric scene parsing approach. To evaluate the proposed approach, it is applied to the MSRC-21, Stanford background, LMO, and SUN datasets. The obtained results show that our approach outperforms comparable state-of-the-art nonparametric approaches.
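
For intuition, the following is a minimal, hypothetical sketch of the kind of quadratic convex cost over region likelihoods the abstract describes: a unary term ties each test region to its initial object-level evidence, and a pairwise term over adjacent regions encourages contextual agreement. The function transfer_labels, the exact form of the cost, and the projected-gradient solver are illustrative assumptions only, not the paper's actual formulation, which is given in the full text.

import numpy as np

def transfer_labels(unary, edges, lam=1.0, iters=200, lr=0.1):
    # Minimise  sum_i ||p_i - u_i||^2 + lam * sum_(i,j in edges) ||p_i - p_j||^2
    # over per-region class-likelihood vectors p_i, using projected gradient steps.
    p = unary.copy()                                # (n_regions, n_classes)
    for _ in range(iters):
        grad = 2.0 * (p - unary)                    # unary (object-level) term
        for i, j in edges:                          # pairwise (contextual) term
            diff = 2.0 * lam * (p[i] - p[j])
            grad[i] += diff
            grad[j] -= diff
        p -= lr * grad
        p = np.clip(p, 0.0, None)                   # crude projection back onto
        p /= p.sum(axis=1, keepdims=True) + 1e-12   # the probability simplex
    return p.argmax(axis=1)                         # one label per region

# Toy usage: three regions, two classes, regions 0-1 and 1-2 adjacent.
unary = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])
print(transfer_labels(unary, edges=[(0, 1), (1, 2)], lam=0.5))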

Authors

Razzaghi P; Samavi S

Journal

Pattern Recognition Letters, Vol. 42, pp. 56–64

Publisher

Elsevier

Publication Date

June 1, 2014

DOI

10.1016/j.patrec.2014.01.003

ISSN

0167-8655
