Conference

Detection and location of near-duplicate video sub-clips by finding dense subgraphs

Abstract

Robust and fast near-duplicate video detection is an important task with many potential applications. Most existing systems focus on comparing full-copy videos or partial near-duplicate videos; it is more challenging to find similar content in videos that contain multiple near-duplicate segments at random locations with various connections. In this paper, we propose a new graph-based method to detect complex near-duplicate video sub-clips. First, we develop a new succinct video descriptor for keyframe matching. Then a graph is established to exploit the temporal consistency of matched keyframes: the nodes of the graph are the matched frame pairs, and the edge weights are computed from temporal alignment and frame-pair similarities. In this way, validly matched keyframes form a dense subgraph whose nodes are strongly connected, and the graph model also preserves the complex connections among sub-clips. Detecting complex near-duplicate sub-clips is thus transformed into the problem of finding all dense subgraphs. We employ the graph shift optimization method to solve this problem because of its robust performance. Experiments are conducted on a dataset with various transformations and complex temporal relations; the results demonstrate the effectiveness and efficiency of the proposed method.
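The pipeline the abstract describes can be sketched in a few lines: nodes are matched keyframe pairs, edge weights reward pairs whose temporal offsets agree, and a dense subgraph is extracted by an iterative optimization. The sketch below is an illustrative assumption, not the authors' implementation: the Gaussian weighting, the parameters, and the use of plain replicator dynamics (the core iteration behind graph shift) are all choices made here for clarity.

```python
import math

def build_match_graph(pairs, sims, sigma=2.0):
    """pairs: list of (query_frame_idx, ref_frame_idx) keyframe matches.
    sims: similarity score per matched pair (hypothetical values).
    Edge weights combine pair similarities with a temporal-consistency
    term that is large when the two pairs imply the same alignment."""
    n = len(pairs)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dq = pairs[i][0] - pairs[j][0]   # frame offset in the query video
            dr = pairs[i][1] - pairs[j][1]   # frame offset in the reference video
            # consistent matches have dq == dr, giving weight near sims[i]*sims[j]
            w = math.exp(-((dq - dr) ** 2) / (2 * sigma ** 2)) * sims[i] * sims[j]
            A[i][j] = A[j][i] = w
    return A

def dense_subgraph(A, iters=200, thresh=1e-3):
    """Replicator dynamics: x_k <- x_k * (A x)_k / (x^T A x).
    The weight vector x concentrates on a dense, strongly connected
    subgraph; nodes whose weight stays above `thresh` are its members."""
    n = len(A)
    x = [1.0 / n] * n
    for _ in range(iters):
        Ax = [sum(A[k][j] * x[j] for j in range(n)) for k in range(n)]
        total = sum(x[k] * Ax[k] for k in range(n))
        if total == 0:
            break
        x = [x[k] * Ax[k] / total for k in range(n)]
    return [k for k in range(n) if x[k] > thresh]

# Three temporally consistent matches plus one spurious match: the first three
# share the same query/reference offset, so they form the dense subgraph and
# the inconsistent match is driven to zero weight.
pairs = [(10, 100), (12, 102), (14, 104), (30, 5)]
sims = [0.9, 0.8, 0.85, 0.7]
members = dense_subgraph(build_match_graph(pairs, sims))
```

In a full system this extraction would be repeated (removing each found subgraph's nodes) so that every dense subgraph, i.e. every near-duplicate sub-clip, is located.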

Authors

Chen T; Jiang S; Chu L; Huang Q

Pagination

pp. 1173-1176

Publisher

Association for Computing Machinery (ACM)

Publication Date

November 28, 2011

DOI

10.1145/2072298.2071967

Name of conference

Proceedings of the 19th ACM international conference on Multimedia
