Conference

CROSS-MEDIA TOPIC DETECTION: A MULTI-MODALITY FUSION FRAMEWORK

Abstract

Detecting topics from Web data has attracted increasing attention in recent years. Most previous work on topic detection focuses on data from a single medium; however, the rich and complementary information carried by multiple media can effectively enhance topic detection performance. In this paper, we propose a flexible data fusion framework to detect topics that simultaneously exist in different media. The framework is based on a multi-modality graph (MMG), obtained by fusing two single-modality graphs: a text graph and a visual graph. Each node of the MMG represents a multi-modal data item, and the edge weight between two nodes jointly measures their content and upload-time similarities. Since data about the same topic often have similar content and are usually uploaded within a similar period of time, they naturally form a dense (namely, strongly connected) subgraph in the MMG. Such a dense subgraph is robust to noise and can be efficiently detected by pairwise clustering methods. Experimental results on single-medium and cross-media datasets demonstrate the flexibility and effectiveness of our method.
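The abstract's edge-weight idea — combining content similarity (text and visual) with upload-time similarity — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the convex-combination parameter `alpha`, the Gaussian time decay, and the dictionary item format are all assumptions introduced here for clarity.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def edge_weight(item_i, item_j, alpha=0.5, sigma_hours=24.0):
    """Hypothetical MMG edge weight: content similarity times time similarity.

    Each item is a dict with 'text' and 'visual' feature vectors and a
    'time' upload timestamp in hours. alpha and sigma_hours are
    illustrative parameters, not values from the paper.
    """
    # Content similarity: convex combination of text and visual similarity.
    content = (alpha * cosine(item_i["text"], item_j["text"])
               + (1 - alpha) * cosine(item_i["visual"], item_j["visual"]))
    # Upload-time similarity: Gaussian decay with the gap between timestamps
    # (the abstract only states that content and time are jointly measured).
    dt = abs(item_i["time"] - item_j["time"])
    time_sim = math.exp(-(dt / sigma_hours) ** 2)
    return content * time_sim
```

Under this scheme, two items with identical features uploaded at the same time get weight 1, and the weight decays as the upload gap grows, so same-topic items uploaded close together form the dense subgraphs the paper's clustering step detects.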

Authors

Zhang Y; Li G; Chu L; Wang S; Zhang W; Huang Q

Pagination

pp. 1-6

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

July 1, 2013

DOI

10.1109/icme.2013.6607487

Name of conference

2013 IEEE International Conference on Multimedia and Expo (ICME)